
Eindhoven University of Technology

MASTER

Monitoring assumption validity based on field observations of corrective replacements

Penders, M.M.J.H.

Award date: 2016


Disclaimer
This document contains a student thesis (bachelor's or master's), as authored by a student at Eindhoven University of Technology. Student theses are made available in the TU/e repository upon obtaining the required degree. The grade received is not published on the document as presented in the repository. The required complexity or quality of research of student theses may vary by program, and the required minimum study period may vary in duration.

General rights
Copyright and moral rights for the publications made accessible in the public portal are retained by the authors and/or other copyright owners and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights.

• Users may download and print one copy of any publication from the public portal for the purpose of private study or research.
• You may not further distribute the material or use it for any profit-making activity or commercial gain.


Monitoring assumption validity based on field observations of corrective replacements

by

Miriam Penders

Eindhoven, March 2016

Student identity number 0715068

Master Thesis
in partial fulfilment of the requirements for the degree of

Master of Science
in Operations Management and Logistics

Supervisors
Prof.dr.ir. G.J. van Houtum, Eindhoven University of Technology
Dr. S. Kapodistria, Eindhoven University of Technology
Ir. W. Fleuren, NedTrain
Ir. B. Huisman, NedTrain


Subject headings: Assumption Validity, Field Reliability, Age Replacement, Block Replacement, Weibull Distribution, Hypothesis Test, Renewal Process, Statistical Estimation


TUE. School of Industrial Engineering.
Series Master Theses Operations Management and Logistics


Abstract

In this research, models are developed to determine an approximate probability distribution of the cumulative number of corrective replacements of identical components up to pre-specified time instants, under repair by replacement maintenance. All identical components positioned in trains within a train series are considered, which start running services in several batches. The models apply to components with exponential and Weibull lifetime distributions, under a failure based, block replacement or age replacement maintenance rule. Usage of renewal processes, statistical estimation techniques, and numerical evaluation methods allows us to obtain an approximate probability distribution for each combination. Particularly accurate are the mean and variance of these approximate probability distributions.

These probability distributions are used in a hypothesis test to monitor the validity of assumptions about failure behaviour made during the design phase of maintenance programs. The probability distributions are effective for this application.


Executive Summary

Introduction

This research is conducted at NedTrain. NedTrain maintains trains and is part of the Dutch Railways company (NS). NedTrain's maintenance programs are designed based on assumptions about the failure behaviour of components. The validity of these assumptions is unknown, because it is unknown whether components in the field behave as was expected during the phase in which the maintenance programs were designed. Monitoring the validity is relevant in order to meet requirements of national and international governments, and to meet performance agreements made with the asset owner. These goals are accomplished by monitoring the validity of assumptions, because then the risk and performance implied by the maintenance program are controlled. This research develops a methodology to assess the validity of assumptions about the failure behaviour of components.

Conclusion

The methodology developed is a hypothesis test. This hypothesis test determines whether the observed cumulative number of corrective replacements of the component is as expected based on (I) the assumption about the failure behaviour of the component, and (II) the maintenance rule applied. Note that the maintenance activity considered is replacement. An accurate and fast way to determine the expected value and variance of the cumulative number of corrective replacements is developed for four scenarios. These scenarios represent situations at NedTrain and are defined as follows: (I) assumption that the arrival process of failures is a Poisson process; (II) assumption that the lifetime of a component follows a Weibull distribution, and failure based maintenance is applied; (III) assumption that the lifetime of a component follows a Weibull distribution, and block replacement is applied; (IV) assumption that the lifetime of a component follows a Weibull distribution, and age replacement is applied. By using this expected value and variance, and a normal approximation, a lower and/or upper bound can be determined for the hypothesis test. If the observed cumulative number of corrective replacements lies outside these bounds, the validity of the assumption is questionable and investigation is required.

Contribution

Firstly, the designed methodology is a contribution for NedTrain. NedTrain can use this method to monitor the validity of assumptions on a large scale, as (I) the method is easy and fast to execute and to interpret, and (II) it is applicable to many components. The appropriateness of the hypothesis test for data at NedTrain is demonstrated by a case study.


Secondly, the designed methodology is a contribution to literature for three reasons. Firstly, the derivation of a hypothesis test on a time varying parameter using two types of central limit theorems is novel. Moreover, the theoretical framework created in this report can be easily and directly generalised to other components as well as to other lifetime distributions, besides the exponential and the Weibull distribution. Lastly, we would like to note that this work makes an interesting contribution in the field of maintenance, by solving a practical and very applied problem.


Preface

This report is a result of my graduation project in completion of the master Operations Management and Logistics, which I conducted in collaboration between Eindhoven University of Technology and NedTrain. Firstly, I would like to express my gratitude to a number of people who had a role in this project.

From university I would like to express my gratitude to Geert-Jan van Houtum and Stella Kapodistria. Geert-Jan, I am thankful for enabling me to learn about my ambitions and myself during my master. In our more intensive collaboration during this graduation project, our helpful meetings made me extra critical of my work. Stella, I appreciate your direct communication and willingness to supervise my graduation project at the Industrial Engineering department. Our meetings helped a lot in my understanding of renewal processes.

From NedTrain I would like to thank Wouter Fleuren for being my company supervisor. Your critical and positive guidance was very valuable, both during our weekly meetings and at the random moments at which I could walk by to express my thoughts, happiness and difficulties. Bob Huisman, thanks for giving me the opportunity to execute this project, and for coaching me. Furthermore, I would like to thank everyone who I have met during the past six months. In the meetings we had concerning my project you provided a lot of input and gave me valuable feedback. Moreover, the lunches and coffees with the Maintenance Development team and fellow interns made me feel very comfortable and welcome.

Secondly, this graduation project concludes my student life. In 6.5 amazing years I learnt incredibly much, both from my studies and from meeting and collaborating with colleagues and fellow students of Industrial Engineering, my sorority, the study association, and Instituto Technologico Buenos Aires. On top of that, we shared a lot of memorable moments, which I hope we can continue to create and exceed. Besides the moments with friends from my student life, the dates with friends from home were always very welcome distractions.

I would not have been able to complete my studies without the support of my parents, Jos and Tiny, and my sister, Esther. I appreciate the warm attention wherever in The Netherlands or abroad, especially given the fact that I was often either studying or sleeping when I was back home. You also had to adjust your expectations about my graduation moment quite a couple of times, yet supported me unconditionally in the choices I made. Finally, I would like to thank my boyfriend Bas. After living apart in all combinations of time zones for a year, I am happy we could share our graduation time in Eindhoven. You were always there to listen to my little victories and struggles, and to help me think ahead, put things in perspective and stay positive. Thank you!


Contents

Abstract
Executive Summary
Preface

1 Introduction
1.1 Problem context
1.2 Literature review
1.3 Report outline

2 Design of hypothesis test
2.1 Parameter of interest
2.2 Maintenance rules
2.3 Scenarios
2.4 Steps in hypothesis test design
2.5 List of variables

3 Hypothesis test for scenario 1
3.1 Model
3.2 Overview of assumptions
3.3 Analysis
3.4 Lower and upper bound
3.5 Solving
3.6 Example
3.7 Discussion on performance

4 Hypothesis test for scenario 2
4.1 Model
4.2 Overview of assumptions
4.3 Analysis
4.4 Lower and upper bound
4.5 Numerical evaluation
4.6 Example
4.7 Discussion on performance

5 Hypothesis test for scenario 3
5.1 Model
5.2 Overview of assumptions
5.3 Analysis
5.4 Lower and upper bound
5.5 Numerical evaluation
5.6 Example
5.7 Discussion on performance

6 Hypothesis test for scenario 4
6.1 Model
6.2 Overview of assumptions
6.3 Analysis
6.4 Lower and upper bound
6.5 Numerical evaluation
6.6 Example
6.7 Discussion on performance

7 Case study: Cab control panel
7.1 Description of component
7.2 Data
7.3 Hypothesis test
7.4 Impact
7.5 Conclusion

8 Implementation
8.1 Ideal situation
8.2 Implementation steps

9 Conclusions & Recommendations
9.1 Conclusion
9.2 Contribution
9.3 Recommendations

A List of definitions
B Overview of notation
C Analysis fleet of NedTrain
C.1 Fleet
C.2 Number of components per train
D Degradation Model
D.1 Replacements and system performance
D.2 Lifetime distribution and degradation mechanisms
D.3 Degradation mechanisms and influencers
D.4 Conclusion
E Theoretical background
E.1 Hypothesis testing
E.2 Definition of a renewal process
E.3 List of theorems
F Mathematical derivations
F.1 First and second moment renewal process
F.2 First and second moment age replacement
G Matlab implementation
G.1 Steps
G.2 Number of replications
G.3 Verification simulation
G.4 Determination of significance level and power
G.5 Verification of Matlab implementation of hypothesis tests
H Performance tables
H.1 Scenario 1
H.2 Scenario 2
H.3 Scenario 3
H.4 Scenario 4
I Case study background
I.1 Batches
I.2 Lower and upper bounds
I.3 Application of in-depth analysis tool

Bibliography


1 Introduction

The objective of this chapter is to introduce the research problem of this master thesis, both at NedTrain and in literature. Firstly, the problem context will be introduced in section 1.1. Secondly, in section 1.2 an overview will be given of the available literature to address this type of research problem, and of the literature which is applied in this master thesis. Thirdly, in section 1.3 the outline of the report will be presented.

1.1 Problem context

1.1.1 Company introduction

The project is initiated by NedTrain. NedTrain maintains trains and is part of the Dutch Railways company (NS). Maintenance refers to all those activities intended to keep or return a capital good to a condition required to fulfil its function, see Smit [20]. The main customer of NedTrain is 'NS Reizigers', which owns a fleet of about nine different train types. More details considering the fleet of NedTrain are given in appendix C.

In short, NedTrain conducts maintenance to ensure that:
• The fleet can fulfil its functions;
• Legal requirements of national and international governments are met;
• The performance agreements made with the asset owner in relation to safety, seat availability, punctuality, cleanness, and durability are accomplished;
• Financial contractual agreements made with the asset owner, e.g. NSR, are met.

According to Van Dongen [8], a multiple of the investment amount in capital goods is spent on maintaining them. This makes it important to properly plan maintenance activities. At NedTrain the maintenance activities are executed according to a schedule, which is presented in a maintenance program per train type. The maintenance program is the document that specifies when and which maintenance activities have to be carried out during the life cycle of a train.

At NedTrain, the approach to designing and managing a maintenance program is Performance Centered Maintenance, which focuses on optimising the performance of the train. Optimal performance is accomplished by ensuring a required safety level, and balancing reliability, availability, costs of repair, quality and image, and environment. NedTrain wants to design, control, and enhance its maintenance programs as to optimise the performance in these different areas. The performance is influenced by numerous factors. A disturbance for example, which influences several performance areas, can be attributed to numerous factors, including design of the train, operation, quality of parts, quality of maintenance execution, or maintenance program.

1.1.2 Field reliability

In order to properly plan the maintenance activities, one must understand the behaviour of the train. In this project we consider behaviour in terms of number of failures. As reliability is the likelihood of a technical failure of an element in a defined operating period, cf. Smit [20], we call this reliability. The reliability of a system can be assessed in different stages of a system's lifetime, see Blischke et al [5]. One of those stages refers to when a system is in operation, of which the reliability is referred to as field reliability. Field reliability is of interest in this thesis. A train is said to be in operation as soon as it starts running services.

To understand the behaviour of the train, one can decompose this system, and then try to understand the behaviour of its elements and the resulting total behaviour. We consider the following decomposition:

• System: a combination of mutually-dependent subsystems, capable of fulfilling specified functions in a given environment, e.g. a train;

• Subsystem: a combination of elements that fulfils specified functions, as part of the realisation of one or more functions at system level, e.g. passenger information system;

• Component: an exchangeable element of a subsystem that fulfils specific sub functions, as a contribution to realising the functions of one or more subsystems, e.g. cab control panel. At NedTrain a component is often called a line replaceable unit. The physical realisation of a component is called a part.

As can be seen in figure 1.1, the system's reliability is determined by the reliability of its subsystems and similarly the subsystem's reliability is determined by the reliability, i.e. failures, of its components. The degree to which the reliability is influenced by the subsystem or component depends on its criticality, i.e. the severity of the effect of a failure.

A failure is a result of a component reaching the end of its lifetime. This lifetime is a result of degradation mechanisms, which are influenced by numerous factors, as can be seen in figure 1.1. These factors are further explained in appendix D. The figure is presented in this section, because it is important to realise that a failure of a component is influenced by multiple mechanisms. Moreover, this figure helps to decide on the scope.

Let us explain how maintenance activities influence the field reliability, based on figure 1.1. Firstly, the lifetime of a part is influenced by the maintenance during the life cycle of a part. This type of maintenance is for example cleaning or greasing a part. Secondly, we consider replacements of parts. If a failure occurs, the failed part is taken out of the system, and subsequently a new part is put in place at the same position. When applying failure based maintenance, parts are only replaced after they have failed, i.e. a corrective maintenance strategy. If a failure influences the system performance more than desired, a preventive maintenance activity is prescribed, e.g. because of losses for repair costs or safety. A preventive maintenance strategy aims to prevent failures by replacing or repairing the parts before they fail, cf. Tinga [22]. Thus, maintenance activities influence how many failures occur.


Figure 1.1 – Influencing factors field reliability

A preventive maintenance strategy can be usage based or condition based. Under a usage based maintenance strategy, the total usage of a part is measured and maintenance is conducted when a certain threshold level has been reached, see Arts [4]. Usage can for example be measured in time or kilometrage in the field. Under a condition based maintenance strategy, the condition of a part is measured and maintenance is conducted when a certain threshold level has been reached. Under both maintenance strategies, a corrective maintenance activity is executed if a part fails before the threshold level is reached. The specific guideline for setting the moment of execution of a maintenance activity is given in a maintenance rule.

The maintenance rules have to be set in a way that maximises the performance of the system. Ideally, the rule is based on knowledge of the exact relationships between the influencing factors and the failure behaviour of a component. However, these are often uncertain. Hence, to design a maintenance rule, experts often make assumptions about the failure behaviour of a component. By choosing the maintenance rule, a certain reliability and performance is expected.

1.1.3 Problem statement

The problem considered in this master thesis is the uncertainty about the validity of assumptions about the failure behaviour of a component. The importance of checking assumption validity is acknowledged in literature, e.g. by Ebeling [9], Papadimitriou et al [17], and Smit [20]. Viewing whether the subject at hand works as assumed is called monitoring. In this section, we explain the relevance for NedTrain of monitoring the validity of the assumptions about failure behaviour. This is done by relating two main motives with the objectives of NedTrain as introduced in subsection 1.1.1.


The following motives describe why monitoring assumption validity is relevant for NedTrain:

Check whether the risk is not higher than expected. Firstly, this is relevant because it ensures that legal requirements are met. National and international governments require explicit substantiation of the maintenance program. More specifically, the Dutch governmental transport body (Inspectie Leefomgeving en Transport (IL&T)) and the European department responsible for the development of European Standards in the field of railways (CEN/TC 256) demand verification of the expectations about the risk implied by the maintenance program executed. Secondly, NedTrain wants to check whether the risk implied by the maintenance program is not higher than expected, to meet the performance agreements made with the asset owner.

Check whether the performance is as expected. This is relevant because it ensures that the performance agreements and financial agreements made with the asset owner are met. If the performance of components is worse than promised by suppliers, suppliers can be held financially responsible for unacceptable performance of components. Moreover, if the performance is better, NedTrain can alter its maintenance program to financially benefit. This is particularly relevant because the initially prescribed maintenance program is expected to be risk averse, i.e. the maintenance program is expected to lead to overmaintenance: the service life of parts is only partially utilised, see Tinga [22], and the downtime of the train is unnecessarily high.

The existing maintenance engineering process at NedTrain falls short for three reasons. Firstly, checks are initiated on fleet level, while initial maintenance decisions are made on component level. Secondly, maintenance programs are only revised when targets are unmet, not if targets are amply met. Thirdly, no methodology is available for monitoring the validity of assumptions about the failure behaviour of components.

Monitoring the validity of assumptions about the failure behaviour of components is part of a monitoring and enhancement process. This project contributes to this process by addressing the doubt about whether the performance of components in the field is as expected. This topic is chosen, because it is the starting point for monitoring and enhancing maintenance programs. It should be the trigger for an investigation and/or enhancement process.

1.1.4 Research questions

This research develops a methodology to assess the validity of assumptions about the failure behaviour of components made by NedTrain. The field observations considered to assess this validity are the number of preventive and corrective replacements. The focus of the project is to check whether the risk is not higher than expected.

This results in the following main research question:

How should the validity of initial assumptions about the failure behaviour of components be monitored based on field observations of corrective and preventive replacements?

To answer our main research question, the following questions have to be answered:
1. Which initial assumptions about the failure behaviour of components are made, of which the validity can be checked based on field observations of corrective and preventive replacements?
2. What is a relevant performance indicator to measure the validity?
3. What is the plausibility of an observed value of the chosen performance indicator, given the initial assumption?
4. How should monitoring the validity of assumptions about the failure behaviour of components be executed at NedTrain?

1.1.5 Scope

The research questions as introduced in the previous subsection imply some scoping decisions. In this section, we define and justify the scope explicitly.

Field reliability can be assessed on different levels of the technical system, e.g. subsystem or component level. We are considering field reliability of components, since maintenance rules are based on assumptions about failure behaviour of components, as explained in subsection 1.1.2. On component level, failure behaviour can be checked in terms of either lifetime distributions, or degradation mechanisms.

By considering field data of the number of corrective and preventive replacements, we choose to check the failure behaviour in terms of lifetime distributions. This choice makes the designed methodology easy to use in working practice, since the number of corrective and preventive replacements is always known. Thus, this is a logical trigger in finding what causes a deviation, and acting accordingly. Field data about mapped degradation mechanisms would give more in-depth knowledge on the failure behaviour of a component, and thereby more understanding of system behaviour; however, obtaining such field data would require expensive research.

Therefore, the choice to consider field data of the number of corrective and preventive replacements implies some scoping decisions. First of all, a top-down approach is followed, since the lifetime of a component is a high-level observation of the reliability of a component. We do not aim to investigate the underlying (e.g. degradation mechanism) or influencing (e.g. load on component) processes. Secondly, we choose to consider components under failure based and usage based maintenance rules, because these maintenance rules are based on expected lifetime distributions. Thirdly, we consider maintenance activities being replacements, i.e. both a failure of a component and reaching the usage based threshold level should result in a replacement.

1.2 Literature review

As mentioned earlier, academic literature acknowledges the relevance of paying attention to field reliability. However, in this section we advocate that the existing literature is not of sufficient practical use to solve the business problem, because of:

• The lack of lifetime data of components in practice;
• The interest in considering practically relevant deviations.

Furthermore, in subsection 1.2.4 an overview is given of the methods found in literature used to design the theoretical part of the methodology.


1.2.1 Goodness-of-fit tests

In maintenance models used for designing a maintenance program, usually theoretical distribution functions are used to model the lifetime of a component. Based on an assumption of this lifetime distribution and its parameters, a maintenance rule for the replacement of a component is set, as explained in section 1.1. If corrective and preventive replacements are recorded per component, the time to failure and the failure-free period are known respectively. These can be used in goodness-of-fit tests designed for censored or truncated data. If a test indicates a good fit, the assumed theoretical distribution correctly describes the lifetime distribution of a component.

These tests only partly address the business problem. Firstly, the practical relevance of a bad or good fit is not indicated: for a company a deviation from the expected distribution is only relevant if the result affects the reliability of a technical system. Moreover, often the realised lifetime per component is unknown, or determining this requires in-depth analysis.

1.2.2 Estimation methods

Another method to determine whether field observations deviate from the expected is estimating parameters of a theoretical distribution based on the observations, and determining whether those estimated parameters deviate from the expected values of these parameters. A book on warranty data analysis by Blischke et al [5] applies estimation techniques to estimate the field reliability of systems that are used by customers. Warranty terms apply to these systems, which gives truncated failure data, as we only register a failure if the product fails within the warranty term.
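As an aside, the kind of parametric estimation meant here can be illustrated in a few lines (a hedged sketch with simulated stand-in data; this is not a method used in this thesis, precisely because such per-component lifetime data are typically unavailable):

import math
from scipy.stats import weibull_min

# Simulated stand-in lifetimes (shape b = 2, scale 10; purely illustrative values).
lifetimes = weibull_min.rvs(2.0, scale=10.0, size=200, random_state=1)
# Maximum-likelihood fit with the location parameter fixed at zero.
b_hat, _, scale_hat = weibull_min.fit(lifetimes, floc=0)
mttf_hat = scale_hat * math.gamma(1.0 + 1.0 / b_hat)
print(b_hat, scale_hat, mttf_hat)

The estimated parameters could then be compared with the values assumed during the design of the maintenance program.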

One could argue that the methodologies used in this book can be applied in this field. However, if no data about the realised lifetime per component is available, no parametric estimation methods are available. As theoretical distributions are used in setting assumptions at NedTrain, this method is not applicable to the problem of interest.

1.2.3 Statistical surveillance

Statistical surveillance aims to timely detect important changes in the process that generates data. An example is the usage of statistical surveillance at birth. The electrical signal of the heart of the baby is the base of the surveillance system. Detection of an important change has to be made as soon as possible, to ensure that the baby is delivered without brain damage, see Frisen [11]. Different applications of statistical surveillance are found. The first versions were developed by introducing control charts for industrial applications. In the field of economics, applications are found in detection of turning points in business cycles, cf. Frisen [11].

In the maintenance field, methods are available for monitoring the lifetime of a system, both in the case of an exponential distribution and in the case of one parameter of the Weibull distribution. However, again only monitoring based on lifetime data per component is considered.


1.2.4 Methodologies

In order to design the monitoring methodology on component level, we combine methods available in the research fields of statistics and stochastic operations. These methods are briefly introduced in this section. Relevant theoretical background knowledge is summarised in appendix E.

In statistical inference a sample of data is used to draw inferences about some aspect of the population from which the data were taken, see Garthwaite et al [12]. The statistical inference method used in this thesis is hypothesis testing. We use hypothesis testing to indicate whether the failure behaviour in the field deviates from the expected. A hypothesis test sets up a specific statement regarding a parameter of interest, and assesses the plausibility of the hypothesis by deciding whether the observed data support or refute that hypothesis, cf. Garthwaite et al [12]. In figure 1.2, the elements of a hypothesis test are shown.

Figure 1.2 – Elements of hypothesis test

The parameter of interest in this thesis is directly related to the expected number of corrective and preventive replacements of a certain component during a pre-specified time interval. Thus, in our case, the parameter of interest is time-dependent. To this end, we would like to design a hypothesis test that reflects this time-dependency. Therefore, a renewal process is considered to determine the tested value of the hypothesis test. A renewal process is a counting process in which the times between successive events are independent and identically distributed, with an arbitrary distribution, see Ross [18].
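In standard notation (following Ross [18]; the symbols below are introduced here for illustration and may differ slightly from the notation list in appendix B), a renewal process can be written as
\[
S_0 = 0, \qquad S_n = \sum_{i=1}^{n} X_i, \qquad N(t) = \max\{n \geq 0 : S_n \leq t\},
\]
where the inter-renewal times $X_1, X_2, \ldots$ are independent and identically distributed with lifetime distribution $F$. The renewal function, i.e. the expected number of renewals up to time $t$, is
\[
M(t) = E[N(t)] = \sum_{n=1}^{\infty} F^{(n)}(t),
\]
with $F^{(n)}$ the $n$-fold convolution of $F$ with itself.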

The hypothesis is accepted or rejected based on a lower and/or upper bound for the observed value of the parameter of interest. These bounds depend on the distribution characteristics of the parameter of interest. To determine these distribution characteristics we use statistical estimation techniques related to the renewal processes. To determine the bounds for numerical examples, we use numerical evaluation methods, see Tijms [21]. A Monte Carlo simulation should reveal the correctness of the approximate expected values and bounds. Monte Carlo simulation is based on running a model of the real system many times as in random sampling, cf. Thomopoulos [23].
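A minimal Monte Carlo sketch of this verification idea, for a single position with Weibull distributed lifetimes and hypothetical parameter values (the thesis's own implementation is in Matlab, see appendix G), could look as follows in Python:

import numpy as np

def count_renewals(t_end, shape, scale, rng):
    # Number of failures (renewals) at a single position up to time t_end,
    # with Weibull(shape, scale) distributed lifetimes.
    count, t = 0, 0.0
    while True:
        t += scale * rng.weibull(shape)  # draw the next lifetime
        if t > t_end:
            return count
        count += 1

rng = np.random.default_rng(0)
runs = [count_renewals(t_end=20.0, shape=2.0, scale=10.0, rng=rng) for _ in range(100000)]
# Sample mean and variance, to be compared with the approximate E[N(t)] and Var[N(t)].
print(np.mean(runs), np.var(runs))

The number of replications controls the precision of these simulated values; appendix G.2 discusses the number of replications used in the thesis.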

1.3 Report outline

The remainder of this report is structured as follows. In chapter 2, the general design of the hypothesis test is explained, which results in the definition of four scenarios. In chapters 3 to 6, we evaluate the models for the four scenarios. The hypothesis test is executed for a case study in chapter 7. Next, in chapter 8 suggestions for implementation of the methodology at NedTrain are presented. In chapter 9, the conclusion of the research is formulated, the contribution for NedTrain and literature is made explicit, and further recommendations are given.


2 Design of hypothesis test

To indicate whether the failure behaviour in the field deviates from the expected, a hypothesis test is designed. In a hypothesis test, the parameter of interest and the tested value of the parameter of interest have to be determined. The analysis to determine the parameter of interest will be given in section 2.1. In section 2.2, we will explain how the maintenance rules influence the parameter of interest. In section 2.3, four different scenarios are defined to determine the tested value of the parameter of interest. In section 2.4, we will describe the general steps undertaken in the design of the hypothesis test. Finally, in section 2.5 an overview of the variables used in the models in the upcoming chapters will be given.

2.1 Parameter of interest

2.1.1 Assumptions about failure behaviour

Given the scope of this project, two distinct types of assumptions about failure behaviour are of interest:

1. Assumption that the arrival rate of failures of a component is constant with a specified failure rate, i.e. the arrival process is a Poisson process, when maintenance activities are executed according to a certain maintenance program;

2. Assumption that the lifetime of a component follows a Weibull distribution with specified mean time to failure (MTTF) and shape parameter b. For this type of assumption the shape parameter b ≠ 1, because otherwise this type is equal to type 1.
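Written out in standard form (the scale parameter η is not named explicitly in the text above and is introduced here only for illustration), the two types of assumptions correspond to
\[
\text{Type 1:}\quad P(N(t) = k) = e^{-\lambda t}\,\frac{(\lambda t)^k}{k!}, \qquad E[N(t)] = \lambda t,
\]
\[
\text{Type 2:}\quad F(t) = 1 - \exp\!\left(-(t/\eta)^{b}\right), \qquad MTTF = \eta\,\Gamma(1 + 1/b),
\]
with λ the constant failure rate of the Poisson process, and b and η the shape and scale parameter of the Weibull lifetime distribution.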

Firstly, these types differ in the way the maintenance executed is taken into account. The first type is either a result of a certain preventive maintenance strategy, or of a failure based maintenance strategy. The assumption considers the remaining failures, given the maintenance executed. The second type gives an assumption considering the lifetime distribution of a component, if no preventive replacements would take place.

Secondly, a difference is found in the level at which the hypothesis is formulated. The first type considers a (constant) failure rate, which corresponds to the block 'Failure of component' in figure 1.1. In contrast to the first type, the second type of assumptions considers the lifetime distribution of a component, which corresponds to the block 'Lifetime distribution' in figure 1.1. This lifetime distribution defines probabilities for a component having a certain lifetime.

Thirdly, the way in which the assumptions are obtained differs. The first type of assumptions is given by suppliers, and is formulated at NedTrain by people who design the maintenance program. The second type is only formulated at NedTrain by experts, or as a result of a lifetime test. Currently, NedTrain is enhancing its methods for estimating Weibull lifetime distributions. When experts at NedTrain assume a failure rate that is low at first and increases around the mean time to failure, this corresponds to the second type of assumptions by choosing a value for the shape parameter b > 1 in the Weibull distribution. This is shown in figure 2.1.

[Figure 2.1 – Failure rate functions for different Weibull shape parameters (failure rate versus time, for shape = 1, 2 and 5)]
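In the usual Weibull notation with shape parameter b and scale parameter η (the latter introduced here only for illustration), the failure rate functions plotted in figure 2.1 are
\[
h(t) = \frac{b}{\eta}\left(\frac{t}{\eta}\right)^{b-1},
\]
which is constant for b = 1 (the exponential case of the first assumption type) and increasing in t for b > 1.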

2.1.2 Performance indicator

We need a relevant and measurable performance indicator to assess the validity of initial assumptions about the failure behaviour of components. This performance indicator becomes the parameter of interest in the hypothesis test. The chosen performance indicator is the cumulative number of corrective replacements of identical components under consideration for all trains within a train series up to a moment in time. This choice is based on:

Risk focus of the project. The focus of the project is to determine whether the risk is not higher than expected. Since a failure of a component affects the reliability and risk, a deviation in the number of corrective replacements is a practically significant deviation.

Relation with all assumptions. All assumptions can be defined in terms of the expected number of corrective replacements up to an evaluation moment, if a failure leads to a corrective replacement.

Data availability. The total number of corrective replacements is generally available data. These data are available at NedTrain in the information system Maximo, or an older version of this system. If a supplier performs the maintenance during a warranty period, these data are shared.

Scarcity of failures. Lack of sufficient data to properly run statistical analyses is a major problem in failure data, as stated by Louit et al [15]. By choosing both the cumulative number, and the total for all positions at which identical components are located, we minimise this problem.
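As a small illustration of how this performance indicator would be computed from replacement records (a hedged sketch; the record layout, field names and dates are hypothetical and do not reflect the Maximo data model):

from datetime import date

def cumulative_corrective(records, component_type, t_e):
    # records: iterable of (component_type, replacement_type, replacement_date) tuples.
    # Returns the cumulative number of corrective replacements of the given
    # component type up to and including the evaluation moment t_e.
    return sum(
        1
        for ctype, rtype, rdate in records
        if ctype == component_type and rtype == "corrective" and rdate <= t_e
    )

records = [
    ("cab control panel", "corrective", date(2014, 3, 2)),
    ("cab control panel", "preventive", date(2014, 9, 17)),
    ("cab control panel", "corrective", date(2015, 6, 30)),
]
print(cumulative_corrective(records, "cab control panel", date(2015, 12, 31)))  # prints 2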


Figure 2.2 – Decomposition train series of NedTrain

2.2 Maintenance rules

In the previous section, we (I) defined two types of assumptions about failure behaviour, and (II) determined that the cumulative number of corrective replacements is a relevant performance indicator for the validity of the assumptions. Thus, the assumptions about the failure behaviour have to be translated to the expected cumulative number of corrective replacements. This translation depends on the maintenance rule applied. The specific usage based maintenance rule depends on the component under consideration. Therefore, we first explain how we may decompose the fleet of NedTrain to component level. Secondly, we explain which usage based maintenance rule applies to the components.

2.2.1 Components

The decomposition of the fleet of NedTrain is shown graphically in figure 2.2. This figure will be explained in this section. 'NS Reizigers' owns a fleet of trains that are maintained by NedTrain. These trains can be categorised in certain train series, which are trains with the same construction, e.g. SGM, VIRM and SLT. The trains of one series have different moments of inflow. At these inflow moments a batch of trains starts running services. Every batch consists of traceable units, which are in figure 2.2 indicated by the icons of the trains. In practice, these traceable units are trains, and main components. A traceable unit can be tracked by a unique serial number. For main components this signifies that it is registered when a specific component is at a position of a train. The time is always registered, sometimes the kilometrages are also registered. In trains we consider positions at which components fulfil certain functions. Components within one train can be identical. The number of identical components is known per train. A traceable component can be tracked by its serial number, or by the serial number of a traceable unit it belongs to, if only one position for a component per serial number is in place. A component is not traceable if more than one position per serial number is in place.

In this project, we consider assumptions about the failure behaviour of components being line replaceable units, i.e. the parts are removed as a whole from their positions in the train. These components can be consumables, repairables or main components. Consumables will be discarded when they fail. Repairables will be revised at a repair shop. Main components are revised in a repair shop or a maintenance depot. If a part is removed from a train, an 'as new' identical part is put at the old position. This is called a replacement. The depots, which execute the further repair actions for the removed repairables or main components, consider the further decomposition.

2.2.2 Maintenance rule

If a failure based maintenance rule is applied, parts are only replaced if they have failed. A usage based maintenance rule prescribes a threshold level considering the usage. If this level is reached, a preventive maintenance activity is executed. If a component fails before this level, a corrective maintenance activity is executed. If this corrective maintenance activity is the replacement of a component, this is within scope of this thesis.

The specific usage based maintenance rule depends on the traceability of the component. If a component is traceable, the threshold level is applied to the specific part. We refer to this as age replacement: an individual part is replaced preventively after it has been used for a fixed amount of usage (e.g. time or kilometres), or correctively if it fails before this time. If a component is not traceable, the threshold level is applied to the traceable unit the specific part belongs to, e.g. a train. We refer to this type of maintenance rule as block replacement: an individual part is replaced preventively after the traceable unit it belongs to has been used for a fixed amount of usage (e.g. time or kilometres) independent of the age of the individual part, and correctively if an individual part fails before this time.

The application of a failure based, age replacement, or block replacement maintenance rule influences the replacement moments of the components. Thus, it influences the expected value of the cumulative number of corrective replacements.
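To make this influence concrete, the following single-position simulation sketch contrasts age replacement and block replacement (hypothetical Weibull parameters and threshold T; the thesis derives the resulting distributions analytically in chapters 5 and 6):

import numpy as np

def corrective_count_age(t_end, T, shape, scale, rng):
    # Corrective replacements at one position under age replacement:
    # the part is replaced preventively at age T, or correctively if it fails first.
    failures, t = 0, 0.0
    while t < t_end:
        life = scale * rng.weibull(shape)
        if life < T:              # the part fails before reaching the age threshold
            t += life
            if t < t_end:
                failures += 1
        else:                     # the part reaches age T and is replaced preventively
            t += T
    return failures

def corrective_count_block(t_end, T, shape, scale, rng):
    # Corrective replacements at one position under block replacement:
    # the part is replaced preventively at fixed moments T, 2T, ..., regardless of its age.
    failures, t, next_block = 0, 0.0, T
    while t < t_end:
        life = scale * rng.weibull(shape)
        if t + life < min(next_block, t_end):  # failure before the next block moment
            t += life
            failures += 1
        else:                                  # preventive replacement at the block moment
            t = next_block
            next_block += T
    return failures

rng = np.random.default_rng(1)
print(corrective_count_age(t_end=20.0, T=8.0, shape=2.0, scale=10.0, rng=rng))
print(corrective_count_block(t_end=20.0, T=8.0, shape=2.0, scale=10.0, rng=rng))

Averaging such counts over many runs and over all positions gives simulated counterparts of the expected values used in the hypothesis tests.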

2.3 Scenarios

As explained in the introduction of section 2.2 we need to translate the assumptions made about failure behaviour to the expected cumulative number of corrective replacements. This translation depends on the maintenance rule applied as described in section 2.2. In this section, we define four scenarios that are considered to translate the assumption to the expected cumulative number of corrective replacements. These scenarios are a combination of the assumption made about failure behaviour, and the maintenance rule applied:

1. Assumption that the arrival process of failures of a component is a Poisson process;
2. Assumption that the lifetime of a component follows a Weibull distribution, and failure based maintenance is applied;
3. Assumption that the lifetime of a component follows a Weibull distribution, and block replacement is applied;
4. Assumption that the lifetime of a component follows a Weibull distribution, and age replacement is applied.

Each scenario is a description of a practical situation at NedTrain. Because for each scenario the translation of the assumption about failure behaviour to the cumulative number of corrective replacements up to a moment in time is different, four models are necessary for this translation. The models are presented in chapters 3 to 6.


2.4 Steps in hypothesis test design

The objective of the hypothesis test is to indicate whether the number of corrective replacements of a component up to a moment in time is as expected. While the tested value of the parameter of interest will be different for every scenario, the structure of the hypothesis test is equal for each scenario. Therefore, before introducing the models for the specific scenarios, in this section the basic steps are explained, which are equal for each scenario.

2.4.1 Formulate hypothesis

We start with formulating a hypothesis. This hypothesis is a statement about the cumulative number of corrective replacements of identical components under consideration for all trains within a train series up to an evaluation moment, which is denoted by µ. The evaluation moment is the point in time at which the hypothesis is tested. The tested value of this parameter of interest is denoted by E[N^(m)(t_e)], in accordance with the notation used in the models in the upcoming chapters. The evaluation moment is denoted by t_e, the number of the scenario by m, and the arrival process of corrective replacements by N^(m)(t). The tested value can be determined based on the models presented in chapters 3 to 6.

The sign of the alternative hypothesis depends on the aim of the test. To evaluate whether the assumption about the failure behaviour is correct, we want to know whether the number of corrective replacements is different from the expected, both higher and lower values (i.e. two sided, 2). To assess whether the number of corrective replacements is truly higher than the expected based on the assumption about the failure behaviour, i.e. the checking risk purpose, it needs to be checked whether there are more corrective replacements than expected (i.e. single sided, 1). For details considering hypothesis testing the reader is redirected to appendix E. This leads to the following hypothesis test formulations:

Correctness check (2)              Risk check (1)
H0: µ = E[N^(m)(t_e)]              H0: µ = E[N^(m)(t_e)]
H1: µ ≠ E[N^(m)(t_e)]              H1: µ > E[N^(m)(t_e)]

2.4.2 Determine bounds

To determine whether to reject H0, we need to be able to evaluate an observed value of the total number of corrective replacements up to the evaluation moment, t_e. This evaluation is based on a lower and/or upper bound. For the correctness check a lower and upper bound for the observed value of the number of corrective replacements in the period are needed. For the risk check an upper bound is needed, as rejecting H0 is a strong conclusion. If the observed value lies outside these bounds, the hypothesis is rejected.

As stated earlier, these bounds depend on the distribution characteristics of the parameter of interest, which are determined based on models. These models translate an assumption about failure behaviour to a distribution of the cumulative number of corrective replacements of identical components under consideration for all trains within a train series up to an evaluation moment. This translation is different for every scenario. Figure 2.3 summarises the methodology, and relates it to the models for the different scenarios described in chapters 3 to 6.

[Figure 2.3 – Summary of steps in hypothesis test. The figure combines the assumption (constant arrival rate, or Weibull lifetime distribution) with the maintenance rule applied (failure based maintenance, block replacement, or age replacement) into scenarios 1 to 4, treated in chapters 3 to 6. For the correctness check (H0: µ = E[N^(m)(t_e)], H1: µ ≠ E[N^(m)(t_e)]) a lower and upper bound for the expected number of failures up to the evaluation moment is needed; for the risk check (H0: µ = E[N^(m)(t_e)], H1: µ > E[N^(m)(t_e)]) an upper bound is needed.]


A prerequisite for using the bounds for monitoring corrective replacements is that time in the field must be a relevant indicator for the usage of a component. Time in the field is the period a train or component is running services. If time in the field is a relevant indicator, the assumption about failure behaviour can be defined in terms of time. Time is used as a parameter, since the usage of a traceable unit in terms of time is always known, the data about the number of corrective replacements are given in time, and it makes the methodology easy to use. In practice, other units of measurement can be relevant, e.g. kilometres in the field. If the assumptions are formulated in another unit of measurement, those can be reformulated in terms of time. Implicitly, we assume that every train is used equally, as the time in the field passes by equally, which is justified by the scope of the project.
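The bound construction itself follows the normal approximation mentioned in the executive summary: the bounds are placed a number of standard deviations around the expected value. A short Python sketch (the thesis's own implementation is in Matlab, appendix G); with the scenario 2 values from table 2.1 at t_e = 10 it reproduces the tabulated correctness-check and risk-check bounds:

from math import sqrt
from scipy.stats import norm

def bounds(mean, var, alpha=0.10, two_sided=True):
    # Normal-approximation bounds for the observed cumulative number of corrective
    # replacements: two-sided for the correctness check, upper bound only for the risk check.
    sd = sqrt(var)
    if two_sided:
        z = norm.ppf(1 - alpha / 2)
        return mean - z * sd, mean + z * sd
    return None, mean + norm.ppf(1 - alpha) * sd

# Scenario 2 values at t_e = 10 taken from table 2.1 (MTTF = 10, b = 2, alpha = 0.10).
print(bounds(37.8854, 25.7169))                   # approx. (29.54, 46.23)
print(bounds(37.8854, 25.7169, two_sided=False))  # approx. (None, 44.38)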

2.4.3 Determine performance of the test

The performance of the hypothesis test is determined by considering numerical examples. In this way we assess:

1. The correctness of the analytically determined expected value and variance of the number of corrective replacements;

2. The probability of rejecting the hypothesis if the true failure behaviour is not different from the expected behaviour, i.e. the realised significance level;

3. The probability of rejecting the hypothesis if the true failure behaviour is different from the expected behaviour, i.e. the power.

The concepts significance level, α, and power, 1 − β, are indicated in grey in figure 2.4. This figure shows the relations between the truth or falseness of the null hypothesis, and the truth or falseness of the result of the test. No hypothesis test can give a completely certain answer. If H0 is correct, but H0 is rejected, H0 is rejected incorrectly. The probability of such a false alarm is measured by the significance level. If H1 is correct, and H0 is rejected, H0 is rejected correctly. This probability of correctly rejecting a false null hypothesis is measured by the power.

These concepts are chosen for evaluation, since rejection of H0 signals a deviation from the expected behaviour, and thus leads to action in practice. How the realised significance level and power are determined is explained in appendix G.4. The realised significance level should be approximately equal to the chosen α level. Assessing this is particularly relevant because an approximation is used to determine the lower and/or upper bound for the test.
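The realised significance level and power can also be illustrated with a small Monte Carlo experiment. The sketch below is not the procedure of appendix G.4 but a minimal, hypothetical Python illustration for a scenario-1-like setting in which the number of corrective replacements up to te is Poisson distributed; the function name, the seed and the numbers used (mean 100 under H0, true mean 120 for the power estimate) are our own illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(seed=1)

def rejection_probability(mean_h0, var_h0, true_mean, alpha=0.05, runs=100_000):
    """Estimate by simulation the probability that the correctness check rejects H0.

    mean_h0, var_h0: analytically determined mean and variance of N(te) under H0.
    true_mean:       mean of the Poisson count that actually generates the data.
    """
    z = norm.ppf(1 - alpha / 2)                      # critical value z_{alpha/2}
    lower = mean_h0 - z * np.sqrt(var_h0)
    upper = mean_h0 + z * np.sqrt(var_h0)
    counts = rng.poisson(true_mean, size=runs)       # simulated replacements up to te
    return np.mean((counts < lower) | (counts > upper))

# Realised significance level: data generated under H0 (true mean equals assumed mean).
print(rejection_probability(100, 100, true_mean=100))   # should be close to alpha
# Power against a deviation: data generated with a 20% higher mean (hypothetical value).
print(rejection_probability(100, 100, true_mean=120))
```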

Figure 2.4 – Truth versus result of hypothesis test. [Figure: a table crossing the truth (H0 is correct, H1 is correct) against the result of the test (H0 not rejected, H0 rejected); the four cells contain the confidence 1 − α, the significance level α, the probability of a Type II error β, and the power 1 − β.]


2.4.4 Explanation of the performance tables

To present the performance of the hypothesis test, performance tables are constructed. An example of a performance table is given in table 2.1. In this subsection, the structure of the tables is explained.

Table 2.1: Performance table example 4.1 (MTTF = 10, b = 2, α = 0.10)

Evaluation moment (te)        2                 5                 10                15                20
E[N^(2)(te)]                  1.2431            9.4420            37.8854           73.2456           108.6410
Var[N^(2)(te)]                1.2175            8.4778            25.7169           36.3332           44.6157
Simulated mean                1.2377 ± 0.0104   9.4528 ± 0.0271   37.9139 ± 0.0276  73.3039 ± 0.0472  108.6621 ± 0.0754
Simulated variance            1.2314 ± 0.0197   8.4869 ± 0.1088   25.7380 ± 0.3188  37.1069 ± 0.4492  44.8326 ± 0.6469
Correctness check
Lower bound                   -0.5718           4.6528            29.5441           63.3309           97.6542
Upper bound                   3.0581            14.2313           46.2268           83.1603           119.6278
Realised significance level   0.0377 ± 0.0021   0.0819 ± 0.0023   0.0935 ± 0.0024   0.0997 ± 0.0024   0.0998 ± 0.0024
Power MTTF = 8, b = 2         0.1261 ± 0.0024   0.4669 ± 0.0025   0.9069 ± 0.0024   0.9903 ± 0.0010   0.9994 ± 0.0003
Power MTTF = 12, b = 2        0.0097 ± 0.0013   0.1964 ± 0.0025   0.6472 ± 0.0025   0.8927 ± 0.0024   0.9634 ± 0.0014
Power MTTF = 10, b = 1        0.9561 ± 0.0020   0.9983 ± 0.0004   0.9894 ± 0.0008   0.9417 ± 0.0025   0.8964 ± 0.0025
Risk check
Upper bound                   2.6572            13.1735           44.3844           80.9704           117.2011
Realised significance level   0.1284 ± 0.0025   0.0865 ± 0.0024   0.0987 ± 0.0025   0.1131 ± 0.0024   0.0959 ± 0.0025
Power MTTF = 8, b = 2         0.3039 ± 0.0024   0.5820 ± 0.0024   0.9544 ± 0.0019   0.9976 ± 0.0005   0.9999 ± 0.0001
Power MTTF = 10, b = 1        0.9865 ± 0.0016   0.9992 ± 0.0002   0.9942 ± 0.0006   0.9719 ± 0.0019   0.9246 ± 0.0022

The caption of the table shows the characteristics of the example considered in the table. In table 2.1, example 4.1 is considered. This is the first example in chapter 4. In this example, a component is assumed to have a mean time to failure (MTTF) of 10 years, a shape parameter b = 2, and the chosen alpha value for this table is 0.10.

In the first row, the evaluation moments are given. For these evaluation moments the expected value and variance of the number of corrective replacements up to the evaluation moment, E[N^(m)(te)] and Var[N^(m)(te)] respectively, are shown. These are the analytically determined values. Beneath these values, the mean and variance of the simulated number of corrective replacements are shown.

Based on the analytically determined expected value and variance, the bounds for the correctness check and risk check are determined. In subsection 2.4.1, these tests have been defined. The bounds, realised significance level, and power values are shown separately for these checks. Firstly, the results for the correctness check are given. Secondly, the results for the risk check are given. To determine the power values we choose different parameters for the lifetime distribution of the components than the assumption, i.e. its MTTF and shape parameter b. The chosen values are indicated in the first column.

For simulated values, the half-width of the 95% confidence intervals is indicated by the ± sign, e.g. ±0.0025 indicates a half-width of 0.0025. Why this is indicated, and how the value is obtained, can be found in appendix G.2.


2.5 List of variables

The models to determine the tested value of the hypothesis test will be presented in chapters 3 to 6. The variables used in the models will be introduced gradually in these chapters. In this section, an overview of the variables is given. This overview can be used to easily find the meaning of the notation. A complete overview of the notation is given in appendix B.

a : Scale parameter of Weibull distribution
α : Level of significance, probability of incorrectly rejecting a valid H0 hypothesis
b : Shape parameter of Weibull distribution
β : Probability of incorrectly accepting a H0 hypothesis
1 − β : Power, probability of correctly rejecting a H0 hypothesis
c : Batch, c ∈ C
C : Set of batches, i.e. train series, C = {1, 2, ...}
C_{i,c}(t) : Arrival process of replacements of a component at a specific single position i of batch c ∈ C without maintenance execution in scenario 3
C_c(t) : Arrival process of replacements of a component at one single position of batch c ∈ C without maintenance execution in scenario 3
D_c(t) : Second moment of the number of replacements of a component up to t at one single position of batch c ∈ C without maintenance execution in scenario 3, i.e. E[(C_c)^2(t)]
δ : Step size for discretisation of the Weibull distribution
i : Number of a specific position
l_c : Number of positions with the component under consideration in batch c
λ : Arrival rate of replacements for one single position
N^(m)_{i,c}(t) : Arrival process of replacements of a component at a specific single position i of batch c ∈ C up to time t, in scenario m
N^(m)_c(t) : Arrival process of replacements of a component at one single position of batch c ∈ C, up to time t, in scenario m
N^(m)_c(t) : Arrival process of replacements of a component at all l_c positions of batch c ∈ C, up to time t, in scenario m
N^(m)(t) : Arrival process of replacements of a component at all positions of one train series up to time t, in scenario m
p^(m)_j : P(j − δ < X < j) for scenario m
t^0_c : The time at which batch c ∈ C starts running services
t_c : Time in the field of batch c ∈ C
t^a_c : Age of the process after the last preventive replacement under a block replacement maintenance rule
t_e : Evaluation moment
τ^(3) : Block replacement preventive maintenance term, in terms of time
τ^(4) : Age replacement preventive maintenance term, in terms of time
V^(m)_c(t) : Second moment of the number of replacements of a component up to time t at one single position of batch c ∈ C, in scenario m, i.e. E[(N^(m)_c)^2(t)]
X_n : Random variable denoting the lifetime in scenarios 2, 3, and 4
z_α : Critical value: P(Z ≤ z_α) = 1 − α, with Z ∼ Norm(0, 1)


3

Hypothesis test for scenario 1

In this chapter, a model is introduced for determining a lower and upper bound for the hypothesis test introduced in chapter 2 for scenario 1. Scenario 1 is characterised by an assumption about a Poisson arrival process of failures of a component. In section 3.1 the model will be introduced with the necessary notation. Section 3.2 will give an overview of the assumptions made in the model and will compare those with the practice at NedTrain. In section 3.3, the model is evaluated. The model allows us to determine a lower and upper bound for the hypothesis test, which is presented in sections 3.4 and 3.5. In section 3.6, an illustrative example will be given. The performance of the hypothesis test for this scenario will be discussed in section 3.7.

3.1 Model

Consider a train series consisting of trains of the same type. Trains within this train series start running services at several moments in time. A group of trains which started running services at the same moment is called a batch. We consider one component in such a train. The physical location of the component in the train is called its position. Identical components may be located at more than one position per train. In the model we consider all those identical components positioned in a train series.

The set of batches is denoted by C, and the number of batches is denoted by |C|. The batches are numbered c = 1, 2, ..., |C|. In each train of such a batch one or more identical components of interest are positioned. The total number of positions of the component of interest in batch c ∈ C is denoted by lc. The specific positions are numbered i = 1, 2, ..., lc. An example is shown graphically in figure 3.1.

Figure 3.1 – Notation train series. [Figure: a train series with |C| = 2 batches; the trains of batch c = 1 contain l1 = 4 positions with identical components and those of batch c = 2 contain l2 = 6.]


At each position, failures occur according to a Poisson process with a constant rate, denoted by λ (> 0). When a component fails, the failed part is replaced by a spare part. This spare part is identical to the failed part. We consider the cumulative number of replacements up to several time instants for all trains within a train series, i.e. the total number of replacements from t = 0 up to different moments of the time horizon. These moments of the time horizon are called evaluation moments, and are denoted by te. The time at which a batch c ∈ C starts running services, i.e. its inflow moment, is denoted by t^0_c < te.

To determine a lower and upper bound for the hypothesis test, the expected number of replacements and its variance for all trains within a train series up to given time instants need to be determined. This number is based on the arrival process of replacements up to time t for a single position i in batch c ∈ C, which is denoted by N^{(m)}_{i,c}(t) (m denotes the number of the scenario; thus, in this chapter m = 1). If specifying the specific position is not relevant for the expression, we delete the index i. Given the constant failure rate λ, N^{(1)}_{i,c}(t) is a Poisson process. The arrival process of replacements up to time t of all lc positions with the component of interest within batch c ∈ C is:

N^{(1)}_c(t) = \sum_{i=1}^{l_c} N^{(1)}_{i,c}(t)    (3.1)

The arrival process of replacements for all positions in a train series, i.e. in all |C| batches, up to time t is:

N^{(1)}(t) = \sum_{c \in C} N^{(1)}_c(t)    (3.2)

An example is shown graphically in figure 3.2. Based on the expected value and variance of N^{(1)}(t), a lower and/or upper bound for the hypothesis test for scenario 1 are determined. These bounds are derived in sections 3.3 and 3.4.

Figure 3.2 – Timeline scenario 1. [Figure: a timeline starting at t = 0 on which batch c = 1 flows in at t^0_1 and batch c = 2 at t^0_2; at the evaluation moment te the replacements of the two batches are added, N^{(1)}(te) = \sum_{i=1}^{l_1} N^{(1)}_{i,1}(te) + \sum_{i=1}^{l_2} N^{(1)}_{i,2}(te).]

3.2 Overview of assumptions

In section 2.2, we have introduced the practice at NedTrain. However, to be able to model the arrival process of the number of failures, some assumptions have to be made. Below we explain the assumptions, and compare them with the practice at NedTrain.


1. Every position has an identical component having an identical, independent, stationary exponential lifetime distribution, with expected value 1/λ time units: This assumption is justified as long as failed parts are replaced by identical parts, and as long as we do not consider external influences on the behaviour of components. The effect of external influences is out of scope in this project.

2. The time to replace a part is negligibly small: This assumption is justified as long as the time to replace a component is small compared to the mean time to failure of the component. At NedTrain, a replacement is generally performed during the time that the train is at the depot for maintenance. If a component is very critical, a train can be taken out of service because of the specific replacement. The replacement then takes at most 24 hours.

3. The trains of one batch start running services at the exact same moment: This assumption is justified as long as the average moment of inflow of the trains within a batch is used as the moment of inflow of the batch. In practice the trains of one batch will start running services gradually. This may take up to 2 years for all trains within a batch. One could choose to divide a batch into smaller batches.

3.3 Analysis

3.3.1 Single position

Our first objective is to evaluate N^{(1)}_c(te). The moment of inflow of a single position is equal to the moment the batch of the train it belongs to started running services. Thus, the moment of inflow of this position is denoted by t^0_c.

Lemma 3.1. The expected number of replacements up to time te for a single position in batch c ∈ C, and its variance, are:

E[N^{(1)}_c(te)] = λ(te − t^0_c)
Var[N^{(1)}_c(te)] = λ(te − t^0_c)    (3.3)

Proof. This proof is evident by the definition of a Poisson process, cf. Montgomery [16]. As N^{(1)}_{i,c}(t) is defined as a Poisson process with arrival rate λ per time unit, N^{(1)}_c(te) follows a Poisson distribution with parameter λ(te − t^0_c).

3.3.2 Trains of one batch

Our second objective is to evaluate N^{(1)}_c(te), the arrival process of replacements of all lc positions of batch c ∈ C.

Lemma 3.2. The expected number of replacements up to time te of all lc positions with the component of interest within batch c ∈ C, and its variance, are:

E[N^{(1)}_c(te)] = l_c λ(te − t^0_c)
Var[N^{(1)}_c(te)] = l_c λ(te − t^0_c)    (3.4)

Proof. As N^{(1)}_c(te) follows an identical and independent Poisson distribution for each position, the sum of these processes again follows a Poisson distribution with expected value E[N^{(1)}_c(te)] = l_c λ(te − t^0_c), and variance Var[N^{(1)}_c(te)] = l_c λ(te − t^0_c), cf. Theorem 8, appendix E.3.


3.3.3 Train series

Our third objective is to evaluate N^{(1)}(te).

Lemma 3.3. The expected number of replacements for all positions in a train series with the component of interest up to time te, and its variance, are:

E[N^{(1)}(te)] = \sum_{c \in C} l_c λ(te − t^0_c)
Var[N^{(1)}(te)] = \sum_{c \in C} l_c λ(te − t^0_c)    (3.5)

Proof. As N^{(1)}_c(te), ∀c ∈ C, follows an independent Poisson distribution, the sum of these processes again follows a Poisson distribution with expected value E[N^{(1)}(te)] = \sum_{c \in C} E[N^{(1)}_c(te)] and variance Var[N^{(1)}(te)] = \sum_{c \in C} Var[N^{(1)}_c(te)], cf. Theorem 8, appendix E.3.

3.4 Lower and upper bound

In this section, the lower and/or upper bound for the hypothesis test are determined. These bounds are based on the expected number of replacements and its variance for the train series, as introduced in subsection 3.3.3. To determine the bounds, the critical value of the standard normal distribution is used, which is denoted by zα, and has a value such that P(Z ≤ zα) = 1 − α, with Z ∼ Norm(0, 1).

Theorem 1. The approximate lower and upper bound for the correctness check are:

(E[N^{(1)}(te)] − z_{α/2} \sqrt{Var[N^{(1)}(te)]},  E[N^{(1)}(te)] + z_{α/2} \sqrt{Var[N^{(1)}(te)]})    (3.6)

and the approximate upper bound for the risk check is:

E[N^{(1)}(te)] + z_α \sqrt{Var[N^{(1)}(te)]}    (3.7)

Proof. Consider N^{(1)}(te), the arrival process of replacements for all positions in a train series with the component of interest up to time te, as introduced in Lemma 3.3 in subsection 3.3.3. To determine a lower and upper bound for the number of replacements for the train series, we use the fact that the number of replacements up to te can be approximated by a normal distribution:

N^{(1)}(te) ∼ Norm(E[N^{(1)}(te)], Var[N^{(1)}(te)])    (3.8)

The approximation of the Poisson distribution by a normal distribution becomes better when E[N^{(1)}(te)] becomes bigger, i.e. when |C|, lc, λ and/or (te − t^0_c) become bigger. A rule of thumb for determining whether the approximation is good, is checking whether E[N^{(1)}(te)] > 5, see Montgomery [16].

Based on this, we can determine a confidence interval for a level of significance, α, using the critical value zα of the standard normal distribution.

Remark 3.1. To determine whether the approximation is good, it should be checked whether E[N^{(1)}(te)] > 5. For example, if we consider a batch c with 30 positions with a component having a failure rate of λ = 1/10 per year and t^0_c = 0, the advised minimal te is 5/(30 × 1/10) = 1.67 years.
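Lemma 3.3 and Theorem 1 can be combined in a few lines of code. The sketch below is not the Matlab implementation of the thesis (section 3.5 and appendix G.5) but a hypothetical Python equivalent; the function name and its interface are our own.

```python
import numpy as np
from scipy.stats import norm

def scenario1_bounds(l, t0, lam, te, alpha):
    """Bounds of the hypothesis test for scenario 1 (Poisson failure arrivals).
    Illustrative sketch, not the thesis' Matlab code.

    l     : number of positions l_c per batch
    t0    : inflow moment t0_c per batch
    lam   : assumed constant failure rate per position (per time unit)
    te    : evaluation moment
    alpha : significance level
    Returns the correctness-check interval and the risk-check upper bound.
    """
    mean = sum(lc * lam * (te - t0c) for lc, t0c in zip(l, t0))   # lemma 3.3
    var = mean                                                    # Poisson: variance equals mean
    if mean <= 5:                                                 # rule of thumb of remark 3.1
        print("Warning: E[N(te)] <= 5, the normal approximation may be poor")
    z_two, z_one = norm.ppf(1 - alpha / 2), norm.ppf(1 - alpha)
    lower = mean - z_two * np.sqrt(var)
    upper = mean + z_two * np.sqrt(var)
    risk_upper = mean + z_one * np.sqrt(var)
    return (lower, upper), risk_upper
```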


3.5 Solving

After having established the equations for determining the lower and upper bound, these equations are implemented in Matlab, to be able to determine the bounds for specific instances. The verification of this implementation in Matlab can be found in appendix G.5.

3.6 Example

In this section, an example is described that is used to illustrate the usage of the model in the hypothesis test.

Example 3.1. Consider a train series which consists of two batches. We consider a component in such a train with an assumed λ = 1/10 per year. The first batch consists of 20 trains, the second batch consists of 15 trains. These trains all have two positions at which the component is located. The first batch starts running services at t = 0 and the second batch starts running services two years later. Failed parts are replaced by spare parts. We evaluate the validity of the assumption after 12 years with a significance level α = 0.05.

This description results in the following evaluation:

Example 3.1   lc   t0c   te   Single position            Trains of one batch        Train series
c = 1         40   0     12   E[N^(1)_1(12)] = 1.2       E[N^(1)_1(12)] = 48        E[N^(1)(12)] = 78
                              Var[N^(1)_1(12)] = 1.2     Var[N^(1)_1(12)] = 48
c = 2         30   2     12   E[N^(1)_2(12)] = 1         E[N^(1)_2(12)] = 30        Var[N^(1)(12)] = 78
                              Var[N^(1)_2(12)] = 1       Var[N^(1)_2(12)] = 30

The lower and upper bound for the correctness check are: (78 − 1.96√78, 78 + 1.96√78), and the upper bound for the risk check is: 78 + 1.65√78. Based on this, we get the following hypothesis tests:

Correctness check          Risk check
H0: μ = 78                 H0: μ = 78
H1: μ ≠ 78                 H1: μ > 78
Lower bound: 60.69
Upper bound: 95.31         Upper bound: 92.53

If the realised number of corrective replacements after 12 years is outside the bounds, it is unlikely that the assumed λ is valid.
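With the hypothetical scenario1_bounds sketch from section 3.4, the numbers of example 3.1 can be reproduced as follows.

```python
# Example 3.1: batches of 40 and 30 positions, inflow at t = 0 and t = 2 years,
# assumed lambda = 1/10 per year, evaluated at te = 12 with alpha = 0.05.
(lower, upper), risk_upper = scenario1_bounds(l=[40, 30], t0=[0, 2],
                                              lam=1 / 10, te=12, alpha=0.05)
print(lower, upper, risk_upper)   # approximately 60.69, 95.31 and 92.53
```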

3.7 Discussion on performance

As explained in subsection 2.4.3, we determine the performance of the test by considering a numerical example at several evaluation moments (te). Example 3.1 is the numerical example considered. The most relevant information of the performance tables is highlighted in this section. Discussed are the correctness of the expected value and variance of the number of corrective replacements, the realised significance levels, and the power of the hypothesis test. The complete performance tables can be found in appendix H.1.


3.7.1 Expected value and variance

The analytically determined expected value and variance and the simulated mean and variance for example 3.1 are given in table 3.1.

Table 3.1: Mean and variance example 3.1 (λ = 1/10)

Evaluation moment (te)   2                 5                  10                 15                 20
E[N^(1)(te)]             8                 29                 64                 99                 134
Var[N^(1)(te)]           8                 29                 64                 99                 134
Simulated mean           8.0001 ± 0.0410   28.9820 ± 0.0483   63.9670 ± 0.0832   98.9910 ± 0.0712   133.9800 ± 0.1132
Simulated variance       8.0119 ± 0.1036   29.0680 ± 0.4320   64.1180 ± 0.7819   98.6300 ± 1.7071   134.3300 ± 1.8680

We conclude that the analytically determined values are determined correctly, because the analytically determined values lie within the 95% confidence intervals of the simulated values.

3.7.2 Significance level

The realised significance levels of example 3.1 are given in table 3.2. In appendix H.1, the confidence intervals of the values are given. The maximum half-width of the confidence interval is 0.0025.

Table 3.2: Realised significance levels example 3.1 (λ = 1/10)

Evaluation moment (te)   2        5        10       15       20
E[N^(1)(te)]             8        29       64       99       134
Correctness check
Chosen α = 0.05          0.0471   0.0507   0.0527   0.0499   0.0517
Chosen α = 0.10          0.1040   0.1140   0.0920   0.0950   0.0920
Risk check
Chosen α = 0.05          0.0636   0.0604   0.0498   0.0534   0.0487
Chosen α = 0.10          0.1120   0.1150   0.0980   0.1040   0.1070

We conclude that the normal approximation is effective for this purpose, and is better for higher E[N^(1)(te)]. The significance levels are not exactly equal to the chosen α values, because the discrete, nonnegative Poisson distribution is approximated by a continuous normal distribution. As a result, the confidence interval is based on a distribution which has a slightly different shape and is supported on R.

For α = 0.05 the significance level is close to 0.05 from the beginning. For α = 0.10 the significance level is closer to 0.10 when E[N^(1)(te)] is higher. Especially the significance level of the single-sided interval becomes better, which is explained by the asymmetry of the Poisson distribution for smaller E[N^(1)(te)]. The difference between the correctness for α = 0.05 and α = 0.10 is explained by the fact that the coverage of the Poisson distribution by the normal distribution is more important for greater α.

3.7.3 Power

In this subsection, the power values of the correctness check are evaluated. Moreover, the power of the correctness check is compared with the power of the risk check. In appendix H.1, the confidence intervals of the values are given. The maximum half-width of the confidence interval is 0.0025. In general, we conclude that the power of the hypothesis test is high when considering deviations as presented, especially for higher failure rates. This is beneficial from a risk point of view.

A selection of the power values of the correctness check of example 3.1 is shown graphically in figure 3.3.

Figure 3.3 – Power values for correctness check example 3.1 (λ = 1/10). [Figure: two graphs of power versus evaluation moment te (0 to 20 years); the left graph shows the power for true λ = 1/5, 1/8, 1/12 and 1/15 at α = 0.05, the right graph compares λ = 1/5 and λ = 1/12 at α = 0.05 and α = 0.10.]

Based on the left graph, we conclude that the probability of finding a deviation increases with time and greater deviations. Note that a deviation of the mean time to failure of 2 years (20%) is noticed earlier for λ = 1/8 than for λ = 1/12 (and similarly λ = 1/5 is noticed earlier than λ = 1/15), while the deviation is the same in percentages. This difference is explained by the fact that corrective replacements will be done more often. This is beneficial from a risk point of view.

In the right graph we add the power values for the α = 0.10 case for λ = 1/5 and λ = 1/12. Based on this graph, we conclude that a deviation is found with a higher probability when α is higher, which is explained by the stricter bounds as a result of the higher α.

In addition to the power of the correctness check, the power of the risk check is evaluated for example 3.1. The power values for α = 0.10 compared to the correctness check are shown graphically in figure 3.4. Considered are power values for detecting a higher λ, as this is relevant for the risk check.

Based on the graph in figure 3.4, we conclude that the probability of finding a deviation is higher in the risk check than in the correctness check. This is due to the stricter upper bound. The hypothesis test has a high power after a few years for this example. Thus, this test is very useful when checking whether the risk is not higher than expected.


Figure 3.4 – Power values for correctness versus risk check example 3.1 (λ = 1/10). [Figure: power versus evaluation moment te of the correctness check and the risk check, for true λ = 1/5 and λ = 1/8 at α = 0.10.]


4

Hypothesis test for scenario 2

In this chapter, a model is introduced for determining a lower and upper bound for the hypothesis test introduced in chapter 2 for scenario 2. Scenario 2 is characterised by components having a Weibull lifetime distribution and failure based maintenance. In section 4.1, the model will be introduced with the necessary notation. Section 4.2 will give an overview of the assumptions made in the model and will compare those with the practice at NedTrain. In section 4.3, the model is evaluated. The model allows us to determine a lower and upper bound for the hypothesis test, which is presented in sections 4.4 and 4.5. In section 4.6, an illustrative example will be given. The performance of the hypothesis test for this scenario will be discussed in section 4.7.

4.1 Model

Consider a train series consisting of trains of the same type. Trains within this train series start running services at several moments in time. A group of trains which started running services at the same moment is called a batch. We consider one component in such a train. The physical location of the component in the train is called its position. Identical components may be located at more than one position per train. In the model we consider all those identical components positioned in a train series.

The set of batches is denoted by C, and the number of batches is denoted by |C|. The batches are numbered c = 1, 2, ..., |C|. In each train of such a batch one or more identical components of interest are positioned. The total number of positions of the component of interest in batch c ∈ C is denoted by lc. The specific positions are numbered i = 1, 2, ..., lc. An example is shown graphically in figure 3.1.

The lifetime of the component follows a Weibull distribution. If a component reaches the end of its lifetime, the failed part is replaced by a spare part. This spare part is identical to the failed part, but as good as new. The independently and identically distributed times between replacements at a single position are denoted by the random variables Xn, where n denotes the number of the replacement; their mean is the mean time to failure (MTTF). The sequence X1, X2, ... is a sequence of nonnegative random variables with common probability distribution f_X(x) = (b/a)(x/a)^{b−1} e^{−(x/a)^b}, which is the Weibull distribution with scale parameter a and shape parameter b.

We consider the cumulative number of replacements up to several time instants for all trains within a train series, i.e. the total number of replacements from t = 0 up to different moments of the time horizon. These moments of the time horizon are called evaluation moments, and are denoted by te. The time at which a batch c ∈ C starts running services is denoted by t^0_c. For notational convenience, we introduce the variable tc, which is the time a batch c ∈ C is in the field up to evaluation moment te, i.e. tc = te − t^0_c > 0.

To determine a lower and upper bound for the hypothesis test, we have to determine the expected number of replacements and its variance for all trains within a train series up to given time instants. This number is based on the arrival process of replacements up to time t for a single position i in batch c ∈ C, which is denoted by N^{(m)}_{i,c}(t) (m denotes the number of the scenario; thus, in this chapter m = 2). If specifying the specific position is not relevant for the expression, we delete the index i. Given the independently and identically distributed time between replacements, N^{(2)}_{i,c}(t) is a renewal process. For notational convenience, we introduce notation for the second moment of N^{(2)}_c(te), being V^{(2)}_c(te) (= E[(N^{(2)}_c)^2(te)]). The arrival process of replacements up to time t of all lc positions with the component of interest within batch c ∈ C is:

N^{(2)}_c(t) = \sum_{i=1}^{l_c} N^{(2)}_{i,c}(t)    (4.1)

The arrival process of replacements for all positions in a train series, i.e. in all |C| batches, up to time t is:

N^{(2)}(t) = \sum_{c \in C} N^{(2)}_c(t)    (4.2)

An example is shown graphically in figure 4.1. Based on the expected value and variance of N^{(2)}(t), a lower and/or upper bound for the hypothesis test for scenario 2 are determined. These bounds are derived in sections 4.3 and 4.4.

Figure 4.1 – Timeline scenario 2. [Figure: a timeline starting at t = 0; batch c = 1 flows in at t^0_1 and batch c = 2 at t^0_2, with times in the field t1 and t2 up to the evaluation moment te, at which N^{(2)}(te) = \sum_{i=1}^{l_1} N^{(2)}_{i,1}(te) + \sum_{i=1}^{l_2} N^{(2)}_{i,2}(te).]

4.2 Overview of assumptions

Compared to scenario 1, assumption 1 is changed and assumption 2 is added. For the explanations of the other assumptions, the reader is redirected to section 3.2.


1. Every position has an identical component having an identical, independent, stationary Weibull lifetime distribution, with scale parameter a and shape parameter b: This assumption is justified as long as failed parts are replaced by identical, as good as new parts, and as long as we do not consider external influences on the behaviour of components. The effect of external influences is out of scope in this project.

2. At the moment of inflow, and after replacement, the position holds an as good as new component: The lifetime is assumed to follow a Weibull distribution. Therefore, the condition of a component influences its time to failure. This assumption is justified as long as a failed part is replaced by an as good as new spare part. At NedTrain, this is assumed for repairables and consumables, which are the relevant components for this scenario.

3. The time to replace a part is negligibly small

4. The trains of one batch start running services at the exact same moment

4.3 Analysis

4.3.1 Single position

Our first objective is to evaluate N^{(2)}_c(te). The moment of inflow of a single position is equal to the moment the batch of the train it belongs to starts running services. Thus, the time in the field of this position is denoted by tc.

Lemma 4.1. The expected number of replacements up to time te for a single position in batch c ∈ C is:

E[N^{(2)}_c(te)] = F_X(tc) + \int_0^{tc} E[N^{(2)}_c(tc − x)] f_X(x) dx    (4.3)

The second moment is:

V^{(2)}_c(te) = F_X(tc) + 2 \int_0^{tc} E[N^{(2)}_c(tc − x)] f_X(x) dx + \int_0^{tc} V^{(2)}_c(tc − x) f_X(x) dx    (4.4)

The variance of the number of replacements up to time te for a single position in batch c ∈ C is:

Var[N^{(2)}_c(te)] = V^{(2)}_c(te) − E[N^{(2)}_c(te)]^2    (4.5)

Proof. As N^{(2)}_c(t) is defined as a renewal process, the expected value of the number of replacements up to time te is given in equation (4.3), and the variance is given in equation (4.5). For the interested reader, the derivations of these equations can be found in appendix F.1.

4.3.2 Trains of one batch

Our second objective is to evaluate N^{(2)}_c(te), the arrival process of replacements of all lc positions of batch c ∈ C.

Lemma 4.2. The expected number of replacements up to time te of all lc positions with the component of interest within batch c ∈ C, and its variance, are:

E[N^{(2)}_c(te)] = l_c E[N^{(2)}_c(te)]
Var[N^{(2)}_c(te)] = l_c Var[N^{(2)}_c(te)]    (4.6)

Proof. As N^{(2)}_c(te) is identical and independent for each position, the sum of lc of these processes has an expected value of E[N^{(2)}_c(te)] = l_c E[N^{(2)}_c(te)], and variance Var[N^{(2)}_c(te)] = l_c Var[N^{(2)}_c(te)], cf. Theorem 5, appendix E.3.

4.3.3 Train series

Our third objective is to evaluate N^{(2)}(te).

Lemma 4.3. The expected number of replacements for all positions in a train series with the component of interest up to time te, and its variance, are:

E[N^{(2)}(te)] = \sum_{c \in C} E[N^{(2)}_c(te)]
Var[N^{(2)}(te)] = \sum_{c \in C} Var[N^{(2)}_c(te)]    (4.7)

Proof. As N^{(2)}_c(te) is identical and independent for each batch, the sum of |C| of these processes has an expected value of E[N^{(2)}(te)] = \sum_{c \in C} E[N^{(2)}_c(te)], and variance Var[N^{(2)}(te)] = \sum_{c \in C} Var[N^{(2)}_c(te)], cf. Theorem 5, appendix E.3.

4.4 Lower and upper bound

In this section, the lower and/or upper bound for the hypothesis test are determined. These bounds are based on the expected number of replacements and its variance for the train series, as introduced in subsection 4.3.3. To determine the bounds, the critical value of the standard normal distribution is used, which is denoted by zα, and has a value such that P(Z ≤ zα) = 1 − α, with Z ∼ Norm(0, 1).

Theorem 2. The approximate lower and upper bound for the correctness check are:

(E[N^{(2)}(te)] − z_{α/2} \sqrt{Var[N^{(2)}(te)]},  E[N^{(2)}(te)] + z_{α/2} \sqrt{Var[N^{(2)}(te)]})    (4.8)

and the approximate upper bound for the risk check is:

E[N^{(2)}(te)] + z_α \sqrt{Var[N^{(2)}(te)]}    (4.9)

Proof. Consider N^{(2)}(te), the arrival process of replacements for all positions in a train series with the component of interest up to time te, as introduced in Lemma 4.3 in subsection 4.3.3. To determine a lower and upper bound for the number of replacements for the train series, we use the fact that the number of replacements up to te can be approximated by a normal distribution:

N^{(2)}(te) ∼ Norm(E[N^{(2)}(te)], Var[N^{(2)}(te)])    (4.10)

Firstly, this normalisation is justified by the asymptotic result of a renewal process, cf. Theorem 9, appendix E.3. By this result, we normalise the renewal processes per position, i.e. N^{(2)}_c(te) ∼ Norm(E[N^{(2)}_c(te)], Var[N^{(2)}_c(te)]). Therefore, the total number of replacements is normally distributed, cf. Theorem 7, appendix E.3. Secondly, we consider typically 30 components or more per batch. As a result, the total number of corrective replacements per batch approximately follows a normal distribution by the regular central limit theorem.

The approximation of the number of replacements by a normal distribution becomes better when E[N^{(2)}(te)] becomes bigger, i.e. when |C|, lc, 1/MTTF and/or (te − t^0_c) become bigger. No general criterion exists for checking whether the approximation is good. To ensure that the lower bound does not become negative, one can consider the following rule of thumb: E[N^{(2)}(te)] > 5, in accordance with what we saw in chapter 3.

Based on this, we can determine a confidence interval for a level of significance, α, using the critical value zα of the standard normal distribution.

4.5 Numerical evaluation

After having established the equations for determining the lower and upper bound, these equations can be numerically evaluated. Equations (4.3) and (4.4) are special cases of an integral equation which is known in numerical analysis as a Volterra integral equation of the second kind. Many methods are available for numerical evaluation. Two well-known and fast methods are value iteration and Riemann-Stieltjes integration. By using the basic concepts of the theory of Riemann-Stieltjes integration, a simple and direct solution method can be given, see Tijms [21]. We enhance this method by applying the trapezoid rule.

Therefore, the expressions for solving are as follows:

E[N^{(2)}_c(t)] ≈ F_X(t) + \sum_{j=δ}^{t−δ} E[N^{(2)}_c(t − j)] p^{(2)}_j

V^{(2)}_c(t) ≈ F_X(t) + \sum_{j=δ}^{t−δ} (2 E[N^{(2)}_c(t − j)] p^{(2)}_j + V^{(2)}_c(t − j) p^{(2)}_j)

p^{(2)}_j = P(j − δ < X < j) = (δ/2)(f_X(j) + f_X(j − δ))

j = (δ, 2δ, 3δ, ..., t − δ)    (4.11)

By discretising the renewal process, the expected number of failures is underestimated. Therefore, the upper bounds might be stricter than necessary. This is due to the fact that we construct a discrete renewal process, in which the failure of a component is assessed only once a period, while the inter-arrival distribution is continuous. This discretisation is executed by dividing the time in steps of size δ. The size of δ determines how accurate the solution is: a smaller δ gives more accurate results. The choice of the size of δ depends on the shape of the probability distribution, the length of the interval, the desired accuracy in the answers, and the desired computation time. To determine a suitable δ, we evaluate the effect of δ on the result of E[N^{(2)}_c(t)]. More specifically, we evaluate the relative difference in E[N^{(2)}_c(t)] for the computations with a δ and δ/2, as proposed by Tijms [21]. If this relative difference is less than 5e−3 %, we choose the latter as the size of δ.
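The recursion in (4.11) is straightforward to implement. The sketch below is a hypothetical Python version of this discretisation (the thesis' own implementation is in Matlab); it computes the single-position quantities E[N^{(2)}_c(t)] and V^{(2)}_c(t) on a grid, after which lemmas 4.2 and 4.3 scale the results to a batch and a train series. The function name and the parameter values in the usage lines are our own illustrative choices.

```python
import numpy as np
from scipy.stats import weibull_min

def renewal_moments(a, b, t, delta):
    """Discretised renewal equations (4.11) for one position with a Weibull(a, b) lifetime.
    Illustrative sketch; returns the grid 0, delta, ..., t with m[n] = E[N(n*delta)]
    and v[n] = second moment V(n*delta)."""
    k = int(round(t / delta))
    grid = np.arange(k + 1) * delta
    F = weibull_min.cdf(grid, c=b, scale=a)
    f = weibull_min.pdf(grid, c=b, scale=a)
    p = np.zeros(k + 1)
    p[1:] = delta / 2 * (f[1:] + f[:-1])   # p_j = P(j - delta < X < j), trapezoid rule
    m = np.zeros(k + 1)
    v = np.zeros(k + 1)
    for n in range(1, k + 1):
        j = np.arange(1, n)                # j = delta, 2*delta, ..., t - delta
        m[n] = F[n] + np.sum(m[n - j] * p[j])
        v[n] = F[n] + np.sum((2 * m[n - j] + v[n - j]) * p[j])
    return grid, m, v

# Single-position mean and variance after 12 years for MTTF = 10 and b = 2 (a ≈ 11.28),
# using the step size delta = 1/256 ≈ 3.906e-3 adopted later in this chapter.
grid, m, v = renewal_moments(a=11.28, b=2, t=12, delta=1 / 256)
mean_pos, var_pos = m[-1], v[-1] - m[-1] ** 2
```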


Figure 4.2 – Computation time versus relative difference in result (MTTF = 10, b = 1, time interval = 50). [Figure: two graphs of the size of δ (0 to 0.1) against the computation time in seconds (left y-axis) and the relative difference in the result in % (right y-axis); the right graph zooms in on the lower part of the left graph.]

The equations for determining the lower and upper bound are implemented in Matlab, to be able to determine the lower and/or upper bounds for specific instances. The verification of this implementation in Matlab can be found in appendix G.5. The sums are solved by multiplying vectors instead of recursively, to enhance the computation time and thus the possible accuracy. An example of determining δ is plotted in figure 4.2. The left graph shows for different sizes of δ the computation time (left y-axis, blue solid line) and the relative difference (right y-axis, red dashed line). The right graph shows the lower part of the left graph. An extreme example is chosen: high variance and a long time interval compared to the mean time to failure (MTTF). For this case the chosen δ becomes 3.906e−3. As shown in figure 4.2 the computation time is still low for this δ. This δ will be used in the numerical evaluations in this thesis.
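The δ-selection rule of section 4.5 can be expressed as a small loop around the renewal_moments sketch above. Again, this is only a hypothetical illustration of the halving procedure, not the Matlab code of appendix G.5; the function name and default starting value are our own.

```python
def choose_delta(a, b, t, delta0=0.1, tol=5e-3):
    """Halve delta until the relative difference (in %) in E[N(t)] between two successive
    step sizes drops below tol; the halved delta is then used (illustrative sketch)."""
    delta = delta0
    _, m_prev, _ = renewal_moments(a, b, t, delta)
    while True:
        delta /= 2
        _, m_new, _ = renewal_moments(a, b, t, delta)
        if abs(m_new[-1] - m_prev[-1]) / m_new[-1] * 100 < tol:
            return delta
        m_prev = m_new
```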

4.6 Example

In this section, an example is described that is used to illustrate the usage of the model in the hypothesis test.

Example 4.1. Consider a train series which consists of two batches. We consider a component in such a train with an assumed mean time to failure of 10 years and b = 2 (a = 10/Γ(1/2 + 1) = 11.28). The first batch consists of 20 trains, the second batch consists of 15 trains. These trains all have two positions at which the component is located. The first batch starts running services at t = 0 and the second batch starts running services two years later. Failed parts are replaced by spare parts. We evaluate the validity of the assumption after 12 years with a significance level α = 0.05.



This description results in the following evaluation:

Example 4.1   lc   t0c   te   Single position                Trains of one batch            Train series
c = 1         40   0     12   E[N^(2)_1(12)] = 0.8268        E[N^(2)_1(12)] = 33.072        E[N^(2)(12)] = 51.794
                              V^(2)_1(12) = 1.1505
                              Var[N^(2)_1(12)] = 0.4668      Var[N^(2)_1(12)] = 18.672
c = 2         30   2     12   E[N^(2)_2(12)] = 0.6240        E[N^(2)_2(12)] = 18.720        Var[N^(2)(12)] = 30.778
                              V^(2)_2(12) = 0.7929
                              Var[N^(2)_2(12)] = 0.4035      Var[N^(2)_2(12)] = 12.105

The bounds are given in table 4.1. These bounds are calculated in the same way as explained in section 3.6. If the realised number of corrective replacements after 12 years is outside the bounds, it is unlikely that the assumed mean time to failure and b are valid.

Table 4.1: Hypothesis test example 4.1

Correctness check          Risk check
H0: μ = 51.794             H0: μ = 51.794
H1: μ ≠ 51.794             H1: μ > 51.794
Lower bound: 40.92
Upper bound: 62.67         Upper bound: 60.92

4.7 Discussion on performance

As explained in subsection 2.4.3, we determine the performance of the test by considering a numerical example at several evaluation moments (te). Example 4.1 is the numerical example considered. The most relevant information of the performance tables is highlighted in this section. Discussed are the correctness of the analytically determined expected value and variance of the number of corrective replacements, the realised significance levels, and the power of the hypothesis test. The complete performance tables can be found in appendix H.2.

4.7.1 Expected value and variance

The analytically determined expected value and variance and the simulated mean and variance for example 4.1 are given in table 4.2.

Table 4.2: Mean and variance example 4.1 (MTTF = 10, b = 2)

Evaluation moment (te)   2                 5                  10                  15                  20
E[N^(2)(te)]             1.2431            9.4420             37.8854             73.2456             108.6410
Var[N^(2)(te)]           1.2175            8.4778             25.7169             36.3332             44.6157
Simulated mean           1.2411 ± 0.0104   9.4377 ± 0.0312    37.8637 ± 0.0473    73.2358 ± 0.0790    108.6619 ± 0.0690
Simulated variance       1.2261 ± 0.0202   8.4910 ± 0.1218    25.8403 ± 0.3415    36.6416 ± 0.4856    44.6972 ± 0.7352

We conclude that the analytically determined values are determined correctly, because the analytically determined values lie within the 95% confidence intervals of the simulated values.


4.7.2 Significance level

The realised significance levels of example 4.1 are given in table 4.3. In appendix H.2, the confidence intervals of the values are given. The maximum half-width of the confidence interval is 0.0025.

Table 4.3: Realised significance levels example 4.1 (MTTF = 10, b = 2)

Evaluation moment (te)   2        5        10        15        20
E[N^(2)(te)]             1.2431   9.4420   37.8854   73.2456   108.6410
Correctness check
Chosen α = 0.05          0.0359   0.0377   0.0488    0.0472    0.0518
Chosen α = 0.10          0.0377   0.0819   0.0935    0.0997    0.0998
Risk check
Chosen α = 0.05          0.0355   0.0480   0.0471    0.0440    0.0530
Chosen α = 0.10          0.1284   0.0865   0.0987    0.1131    0.0959

We conclude that the normal approximation is effective for this purpose, and is better for higher E[N^(2)(te)]. Why the realised significance levels are not exactly equal to the chosen α values has been explained in section 3.7.2. In this scenario, more time is necessary for the realised significance levels to lie closer to the chosen α values. This is due to (I) the lower expected values, and (II) the shape of the true probability distribution, which is approximated by the symmetric normal distribution.

4.7.3 Power

In this subsection, the power values of the correctness check are evaluated. Moreover, the power of the correctness check is compared with the power of the risk check. In appendix H.2, the confidence intervals of the values are given. The maximum half-width of the confidence interval is 0.0025. In general, we conclude that the power of the hypothesis test is high when considering deviations in the mean time to failure as presented, especially for lower mean times to failure. As a lower mean time to failure results in more corrective replacements, this is beneficial from a risk point of view.

A selection of the power values of the correctness check of example 4.1 is shown graphically in figure 4.3.

Based on the first and second graph, we conclude that the probability of finding a deviation in the MTTF increases with time, greater deviations, and greater α. This conclusion is in accordance with what we saw in section 3.7.3.

Based on the third graph, we conclude that the probability of finding a deviation in the shape parameter b changes in time. This change is caused by the long term behaviour of a renewal process. Therefore, if the mean time to failure is 10 years, in the end the failure rate per component is 1/10 per year. As a result, the number of corrective replacements will be similar.


Figure 4.3 – Power values for correctness check example 4.1 (MTTF = 10, b = 2). [Figure: three graphs of power versus evaluation moment te; the first for true MTTF = 5, 8, 12 and 15 with b = 2 at α = 0.05, the second for MTTF = 10 with b = 1 and b = 3 at α = 0.05, and the third comparing MTTF = 5 and MTTF = 12 with b = 2 at α = 0.05 and α = 0.10.]

To further explain the last conclusion we consider a straightforward example:

                 Component          Batches
                 MTTF     b         c        lc     t0c
Example 4.2      10       2         c = 1    30     0

By using example 4.2, the observed number of corrective replacements is a result of the stochastic processes, and is not explained by an increase in the number of components. As can be seen in figure 4.4, the difference between the true mean for a shape parameter b = 3 and the expected value for a shape parameter b = 2 is small and changing in time, which supports the last conclusion. For a different mean time to failure, the mean does deviate.

Figure 4.4 – Graphical representation of correctness check example 4.2 (MTTF = 10, b = 2). [Figure: the number of corrective replacements versus the evaluation moment te, showing E[N^(2)(te)] for MTTF = 10 and b = 2 with its lower and upper bound (α = 0.05), together with the true mean for MTTF = 8, b = 2 and for MTTF = 10, b = 3.]


In addition to the power of the correctness check, the power of the risk check is evaluated for example 4.1. The power values for α = 0.10 compared to the correctness check are shown graphically in figure 4.5. Considered are power values for detecting a lower MTTF and b, as this is relevant for the risk check.

Figure 4.5 – Power values for correctness versus risk check example 4.1 (MTTF = 10, b = 2). [Figure: power of the correctness check versus the risk check, for true MTTF = 5 and MTTF = 8 with b = 2 at α = 0.10 (left), and for MTTF = 10 with b = 1 at α = 0.10 (right).]

Based on the graphs in figure 4.5, we conclude that the probability of finding a deviation is higher in the risk check than in the correctness check. This is due to the stricter upper bound. This conclusion applies to both the detection of a deviation in the MTTF and in the shape parameter b. The hypothesis test has a high power after a few years for this example. Thus, this test is very useful when checking whether the risk is not higher than expected.


5

Hypothesis test for scenario 3

In this chapter, a model is introduced for determining a lower and upper bound for the hypothesis test introduced in chapter 2 for scenario 3. Scenario 3 is characterised by components having a Weibull lifetime distribution and a block replacement maintenance rule. In section 5.1, the model will be introduced with the necessary notation. Section 5.2 will give an overview of the assumptions made in the model and will compare those with the practice at NedTrain. In section 5.3, the model is evaluated. The model allows us to determine a lower and upper bound for the hypothesis test, which is presented in sections 5.4 and 5.5. In section 5.6, an illustrative example will be given. The performance of the hypothesis test for this scenario will be discussed in section 5.7.

5.1 Model

Consider a train series consisting of trains of the same type. Trains within this train series start running services at several moments in time. A group of trains which started running services at the same moment is called a batch. We consider one component in such a train. The physical location of the component in the train is called its position. Identical components may be located at more than one position per train. In the model we consider all those identical components positioned in a train series.

The set of batches is denoted by C, and the number of batches is denoted by |C|. The batches are numbered c = 1, 2, ..., |C|. In each train of such a batch one or more identical components of interest are positioned. The total number of positions of the component of interest in batch c ∈ C is denoted by lc. The specific positions are numbered i = 1, 2, ..., lc. An example is shown graphically in figure 3.1.

The lifetime of the component follows a Weibull distribution. If a component reaches the end of its lifetime, the failed part is replaced correctively by a spare part. This spare part is identical to the failed part, but as good as new. The independently and identically distributed lifetime of a component is denoted by the random variable X, with common probability distribution f_X(x) = (b/a)(x/a)^{b−1} e^{−(x/a)^b}, which is the Weibull distribution with scale parameter a and shape parameter b. Moreover, an individual part is replaced preventively every time the train in which it is positioned is used for a period of length τ^(3), independent of the age of the individual part.

We consider the cumulative number of corrective replacements up to several given time instants for all trains within a train series, i.e. the total number of corrective replacements from t = 0 up to different moments of the time horizon. These moments of the time horizon are called evaluation moments, and are denoted by te. The time at which a batch c ∈ C starts running services is denoted by t^0_c. For notational convenience, we introduce the variable tc, which is the time a batch c ∈ C is in the field up to evaluation moment te, i.e. tc = te − t^0_c > 0. In order to determine the expected number of corrective replacements up to te of a position in c ∈ C, we divide tc into a number of blocks, ⌊tc/τ^(3)⌋, and the age after the last preventive replacement, denoted by t^a_c = tc − ⌊tc/τ^(3)⌋ τ^(3) ≥ 0.

Figure 5.1 – Timeline scenario 3. [Figure: a timeline starting at t = 0; batch c = 1 flows in at t^0_1 and batch c = 2 at t^0_2. Every block of length τ^(3) restarts the corrective replacement process, so that at te the replacements of batch 1 amount to \sum_{i=1}^{l_1} (⌊t_1/τ^(3)⌋ C^(3)_{i,1}(τ^(3)) + C^(3)_{i,1}(t^a_1)), similarly for batch 2; adding the two batches gives N^(3)(te).]

To determine a lower and upper bound for the hypothesis test, we have to determine the expected number of corrective replacements and its variance for all trains within a train series up to given time instants. This number is based on the arrival process of replacements up to time t for a single position i in batch c ∈ C. The arrival process of corrective replacements up to time t for a single position i in batch c ∈ C, if no preventive maintenance would have been executed, is denoted by C_{i,c}(t). If specifying the specific position is not relevant for the expression, we delete the index i. However, every deterministic period of length τ^(3) a part is replaced preventively. Thus, every τ^(3), i.e. every block, C_{i,c}(t) starts again. The resulting process of corrective replacements up to time t for a single position i in batch c ∈ C is denoted by N^{(m)}_{i,c}(t) (m denotes the number of the scenario; thus, in this chapter m = 3). For notational convenience, we introduce notation for the second moment of C_c(t), being D_c(t) (= E[(C_c)^2(t)]). The arrival process of replacements up to time t of all lc positions with the component of interest within batch c ∈ C is:

N^{(3)}_c(t) = \sum_{i=1}^{l_c} N^{(3)}_{i,c}(t)    (5.1)


The arrival process of replacements for all positions in a train series, i.e. in all |C| batches, up to time t is:

N^{(3)}(t) = \sum_{c \in C} N^{(3)}_c(t)    (5.2)

An example is shown graphically in figure 5.1. Based on the expected value and variance of N^{(3)}(t), a lower and/or upper bound for the hypothesis test for scenario 3 are determined. These bounds are derived in sections 5.3 and 5.4.

5.2 Overview of assumptions

Compared to scenario 2, assumption 3 is added. For the explanations of the other assumptions, the reader is referred to sections 3.1 and 4.1.

1. Every position has an identical component having an identical, independent, stationary Weibull lifetime distribution, with scale parameter a and shape parameter b

2. At the moment of inflow, and after replacement, the position holds an as good as new component

3. Individual components are replaced preventively, if the train of which they are an element reaches an age of exactly τ^(3) time units:
This assumption is justified as long as the maintenance rule is not changed. In practice, the exact moment of a preventive replacement depends for example on planning restrictions at the depot, but will be close to the prescribed maintenance term. The maintenance term, τ^(3), mentioned in the maintenance program will be a maximum.

4. The time to replace a component is negligibly small
5. The trains of one batch start running services at the exact same moment

5.3 Analysis

5.3.1 Single position

Our first objective is to evaluate C_c(t). The expected value and its variance can be evaluated in the same way as equations (4.3) and (4.5) respectively.

Lemma 5.1. The expected number of corrective replacements up to time t for a single position in batch c ∈ C, if no preventive maintenance would have been executed, is:

E[C_c(t)] = F_X(t) + ∫_0^t E[C_c(t − x)] f_X(x) dx    (5.3)

The second moment is:

D_c(t) = F_X(t) + 2 ∫_0^t E[C_c(t − x)] f_X(x) dx + ∫_0^t D_c(t − x) f_X(x) dx    (5.4)

The variance of the number of replacements up to time t for a single position in batch c ∈ C is:

Var[C_c(t)] = D_c(t) − (E[C_c(t)])^2    (5.5)

Proof. As C_c(t) is defined as a renewal process, the expected value of the number of replacements up to time t is given in equation (5.3), and the variance of the number of failures up to time t is given in equation (5.5). For the interested reader, the derivations of these equations can be found in appendix F.1.


Our second objective is to evaluate N_c^(3)(t_e). The moment of inflow of a single position is equal to the moment the batch of the train it belongs to started running services. Thus, the time in the field of this position is denoted by t_c, and the age after the last preventive replacement is denoted by t_c^a.

Lemma 5.2. The expected number of replacements up to time t_e for a single position in batch c ∈ C is:

E[N_c^(3)(t_e)] = ⌊t_c/τ^(3)⌋ · E[C_c(τ^(3))] + E[C_c(t_c − ⌊t_c/τ^(3)⌋τ^(3))]
Var[N_c^(3)(t_e)] = ⌊t_c/τ^(3)⌋ · Var[C_c(τ^(3))] + Var[C_c(t_c − ⌊t_c/τ^(3)⌋τ^(3))]    (5.6)

Proof. As C_c(t) is an identical and independent process that starts anew every time interval τ^(3), we can sum the mean and variance of the individual processes, cf. Theorem 5, appendix E.3. The expected value and variance of the number of corrective replacements per block are E[C_c(τ^(3))] and Var[C_c(τ^(3))] respectively. The number of blocks is deterministic and equal to ⌊t_c/τ^(3)⌋. Thus, the expected number of corrective replacements up to the last preventive replacement in t_c is ⌊t_c/τ^(3)⌋ · E[C_c(τ^(3))], and its variance ⌊t_c/τ^(3)⌋ · Var[C_c(τ^(3))]. Next, we add the expected value and variance of the process during t_c^a, i.e. E[C_c(t_c^a)] and Var[C_c(t_c^a)] respectively.

5.3.2 Trains of one batch

Our third objective is to evaluate the batch-level process N_c^(3)(t_e) of equation (5.1).

Lemma 5.3. The expected number of replacements up to time t_e of all l_c positions with the component of interest within batch c ∈ C, and its variance, are:

E[N_c^(3)(t_e)] = l_c · E[N_{i,c}^(3)(t_e)]
Var[N_c^(3)(t_e)] = l_c · Var[N_{i,c}^(3)(t_e)]    (5.7)

Proof. As N_{i,c}^(3)(t_e) is identical and independent for each position i, the sum of l_c of these processes has an expected value of E[N_c^(3)(t_e)] = l_c · E[N_{i,c}^(3)(t_e)], and variance Var[N_c^(3)(t_e)] = l_c · Var[N_{i,c}^(3)(t_e)], cf. Theorem 5, appendix E.3.

5.3.3 Train series

Our fourth objective is to evaluate N^(3)(t_e).

Lemma 5.4. The expected number of replacements for all positions in a train series, with the component of interest up to time t_e, and its variance, are:

E[N^(3)(t_e)] = Σ_{c∈C} E[N_c^(3)(t_e)]
Var[N^(3)(t_e)] = Σ_{c∈C} Var[N_c^(3)(t_e)]    (5.8)

Proof. As N_c^(3)(t_e) is identical and independent for each batch, the sum of |C| of these processes has an expected value of E[N^(3)(t_e)] = Σ_{c∈C} E[N_c^(3)(t_e)], and variance Var[N^(3)(t_e)] = Σ_{c∈C} Var[N_c^(3)(t_e)], cf. Theorem 5, appendix E.3.


5.4 Lower and upper bound

In this section, the lower and/or upper bound for the hypothesis test are determined. These bounds are based on the expected number of replacements and its variance for the train series, as introduced in subsection 5.3.3. To determine the bounds, the critical value of the standard normal distribution is used, which is denoted by z_α, and has a value such that P(Z ≤ z_α) = 1 − α, with Z ∼ Norm(0, 1).

Theorem 3. The approximate lower and upper bound for the correctness check are:

( E[N^(3)(t_e)] − z_{α/2} √(Var[N^(3)(t_e)]) , E[N^(3)(t_e)] + z_{α/2} √(Var[N^(3)(t_e)]) )    (5.9)

and, the approximate upper bound for the risk check is:

E[N^(3)(t_e)] + z_α √(Var[N^(3)(t_e)])    (5.10)

Proof. Consider N^(3)(t_e), the arrival process of replacements for all positions in a train series, with the component of interest up to time t_e, as introduced in Lemma 5.4 in subsection 5.3.3. To determine a lower and upper bound for the number of replacements for the train series, we use the fact that the number of replacements up to t_e can be approximated by a normal distribution:

N^(3)(t_e) ∼ Norm(E[N^(3)(t_e)], Var[N^(3)(t_e)])    (5.11)

As we typically consider 30 components or more per batch, the total number of replacements per batch approximately follows a normal distribution by the regular central limit theorem. Therefore, the total number of corrective replacements is also normally distributed, cf. Theorem 7, appendix E.3.

The approximation of the number of replacements by a normal distribution becomes better when E[N^(3)(t_e)] becomes bigger, i.e. |C|, l_c, 1/MTTF, (t_e − t_c^0), and/or τ^(3) become bigger. No general criterion exists for checking whether the approximation is good. To ensure that the lower bound does not become negative, one can consider the following rule of thumb: E[N^(3)(t_e)] > 5, in accordance with what we saw in chapter 3.

Based on this, we can determine a confidence interval for a level of significance, α, using the critical value z_α of the standard normal distribution.
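As a small illustration of Theorem 3, the Matlab sketch below turns a given expected value and variance into the two checks; the input numbers are purely illustrative, and the critical values are written out as constants for α = 0.05 rather than taken from a statistics toolbox.

% Sketch: from (E[N], Var[N]) to the bounds of Theorem 3 (illustrative values).
EN   = 27.0;      % expected number of corrective replacements up to te
VarN = 24.0;      % its variance
z_half = 1.960;   % z_{alpha/2} for alpha = 0.05 (two-sided correctness check)
z_one  = 1.645;   % z_{alpha}   for alpha = 0.05 (one-sided risk check)
correctnessBounds = [EN - z_half*sqrt(VarN), EN + z_half*sqrt(VarN)];   % equation (5.9)
riskUpperBound    =  EN + z_one*sqrt(VarN);                             % equation (5.10)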

5.5 Numerical evaluation

After having established the equations for determining the lower and upper bound, these equations can be numerically evaluated. The equations for determining E[C_c(t)] and Var[C_c(t)] are numerically evaluated in the same way as given in equations (4.11) in section 4.5. These equations are implemented in Matlab, to be able to determine the bounds for specific instances. The verification of this implementation in Matlab can be found in appendix G.5.
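To make this concrete, the following Matlab sketch applies the trapezoid-rule discretisation of Lemma 5.1 on one block [0, τ^(3)] and then assembles the per-position quantities of Lemma 5.2. It is a minimal, loop-based sketch with illustrative parameter values (those of example 5.1) and an illustrative step δ; the actual implementation referred to in appendix G.5 may differ in detail.

% Discretised renewal recursion of Lemma 5.1 and block assembly of Lemma 5.2.
a = 11.28;  b = 2;          % Weibull scale and shape parameter (illustrative)
tau = 5;                    % block replacement interval tau^(3)
tc = 12;                    % time in the field of the batch up to te
delta = 1/256;              % discretisation step (illustrative)

fX = @(x) (b/a).*(x./a).^(b-1).*exp(-(x./a).^b);   % Weibull density f_X
FX = @(x) 1 - exp(-(x./a).^b);                     % Weibull distribution function F_X

tgrid = 0:delta:tau;        % C_c(t) is only needed on one block [0, tau]
n  = numel(tgrid);
EC = zeros(1, n);           % E[C_c(t)] on the grid
D  = zeros(1, n);           % second moment D_c(t) on the grid
for k = 2:n
    j  = tgrid(2:k-1);                             % inner grid points delta, ..., t - delta
    pj = (delta/2) * (fX(j) + fX(j - delta));      % P(j - delta < X < j), trapezoid rule
    EC(k) = FX(tgrid(k)) + sum(EC(k-1:-1:2) .* pj);                     % equation (5.3)
    D(k)  = FX(tgrid(k)) + sum((2*EC(k-1:-1:2) + D(k-1:-1:2)) .* pj);   % equation (5.4)
end
VarC = D - EC.^2;                                  % equation (5.5)

% Block assembly of Lemma 5.2: full blocks of length tau plus the residual age ta.
blocks = floor(tc/tau);
ta     = tc - blocks*tau;
EN_pos   = blocks*EC(end)   + EC(round(ta/delta) + 1);    % E[N_{i,c}^(3)(te)] per position
VarN_pos = blocks*VarC(end) + VarC(round(ta/delta) + 1);  % Var[N_{i,c}^(3)(te)] per position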


5.6 Example

In this section, an example is described that is used to illustrate the usage of the model in the hypothesis test.

Example 5.1. Consider a train series which consists of two batches. We consider a component in such a train with an assumed mean time to failure of 10 years and b = 2 (a = 10/[Γ(1/2 + 1)] = 11.28). The first batch consists of 20 trains, the second batch consists of 15 trains. These trains all have two positions at which the component is located. The first batch starts running services at t = 0 and the second batch starts running services two years later. Failed parts are replaced by spare parts. Moreover, a block replacement maintenance rule applies to these parts. We consider τ^(3) = 5. We evaluate the validity of the assumption after 12 years with a significance level α = 0.05.

This description results in the following evaluation:

Example 5.1   l_c   t_c^0   t_c^a   t_e   Single position                            Trains of one batch            Train series
c = 1         40    0       2       12    E[C_1(5)] = 0.1843,  E[C_1(2)] = 0.0311
                                          D_1(5) = 0.1965,     D_1(2) = 0.0314
                                          Var[C_1(5)] = 0.1625, Var[C_1(2)] = 0.0304
                                          E[N_{i,1}^(3)(12)] = 0.3997                E[N_1^(3)(12)] = 15.985        E[N^(3)(12)] = 27.041
                                          Var[N_{i,1}^(3)(12)] = 0.3554              Var[N_1^(3)(12)] = 14.219
c = 2         30    2       0       12    E[C_2(5)] = 0.1843,  D_2(5) = 0.1965
                                          Var[C_2(5)] = 0.1625
                                          E[N_{i,2}^(3)(12)] = 0.3686                E[N_2^(3)(12)] = 11.056        Var[N^(3)(12)] = 23.970
                                          Var[N_{i,2}^(3)(12)] = 0.3250              Var[N_2^(3)(12)] = 9.751

The bounds are given in table 5.1. These bounds are calculated in the same way as explained in section 3.6. If the realised number of corrective replacements after 12 years is outside the bounds, it is unlikely that the assumed mean time to failure (MTTF) and b are valid.

Table 5.1: Hypothesis test example 5.1

Correctness check              Risk check
H_0: µ = 27.041                H_0: µ = 27.041
H_1: µ ≠ 27.041                H_1: µ > 27.041
Lower bound: 17.45             Upper bound: 35.09
Upper bound: 36.64
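As a check on table 5.1, the bounds follow directly from Theorem 3 with the values computed above: √(Var[N^(3)(12)]) = √23.970 ≈ 4.896, so the correctness check gives 27.041 ∓ 1.960 · 4.896 ≈ (17.45, 36.64), and the risk check gives 27.041 + 1.645 · 4.896 ≈ 35.09.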

5.7 Discussion on performance

As explained in subsection 2.4.3, we determine the performance of the test by considering a numerical example at several evaluation moments (t_e). Example 5.1 is the numerical example considered. The most relevant information of the performance tables is highlighted in this section. Discussed are the correctness of the expected value and variance of the number of corrective replacements, the realised significance levels, and the power of the hypothesis test. The complete performance tables can be found in appendix H.3.


5.7.1 Expected value and variance

The analytically determined expected value and variance and the simulated mean and variance for example 5.1 are given in table 5.2.

Table 5.2: Mean and variance example 5.1 (MTTF = 10, b = 2, τ^(3) = 5)

Evaluation moment (t_e)    2                 5                 10                 15                 20
E[N^(3)(t_e)]              1.2431            9.4420            22.3410            35.2400            48.1390
Var[N^(3)(t_e)]            1.2175            8.4778            19.8540            31.2300            42.6060
Simulated mean             1.2415 ± 0.0084   9.4606 ± 0.0293   22.3477 ± 0.0363   35.2403 ± 0.0450   48.1819 ± 0.0637
Simulated variance         1.2234 ± 0.0297   8.5162 ± 0.1328   19.8691 ± 0.2009   31.3314 ± 0.3417   42.3154 ± 0.4683

We conclude that the analytically determined values are correct, because they lie within the 95% confidence intervals of the simulated values.

5.7.2 Significance level

The realised significance levels of example 5.1 are given in table 5.3. In appendix H.3, the confidence intervals of the values are given. The maximum half-width of the confidence interval is 0.0025.

Table 5.3: Realised significance levels example 5.1 (MTTF = 10, b = 2, τ^(3) = 5)

Evaluation moment (t_e)    2        5        10       15       20
E[N^(3)(t_e)]              1.2431   9.4420   22.3410  35.2400  48.1390
Correctness check
  Chosen α = 0.05          0.0359   0.0353   0.0430   0.0491   0.0551
  Chosen α = 0.10          0.0363   0.0828   0.1163   0.1064   0.1054
Risk check
  Chosen α = 0.05          0.0370   0.0483   0.0595   0.0501   0.0595
  Chosen α = 0.10          0.1300   0.0890   0.0877   0.0990   0.1010

We conclude that the normal approximation is effective for this purpose, and is better for higher E[N^(3)(t_e)]. Why the realised significance levels are not exactly equal to the chosen α values has been explained in section 3.7.2. In this scenario, more time is necessary for the realised significance levels to lie closer to the chosen α values. This is due to (I) the lower expected values, and (II) the shape of the true probability distribution, which is approximated by the symmetric normal distribution.

5.7.3 Power

In this subsection, the power values of the correctness check are evaluated. Next, the power of the correctness check is compared with the power of the risk check. Lastly, the difference between the power of scenarios 2 and 3 is discussed. In appendix H.3, the confidence intervals of the values are given. The maximum half-width of the confidence interval is 0.0025. In general, we conclude that the power of the hypothesis test is high when considering deviations in the mean time to failure and shape parameter as presented, especially for lower values of the mean time to failure and shape parameter. As these lower values result in


more corrective replacements, this is beneficial from a risk point of view.

A selection of the power values of the correctness check of example 5.1 is shown graphically in figure 5.2.

Figure 5.2 – Selection of power values for the correctness check of example 5.1 (MTTF = 10, b = 2, τ^(3) = 5): power versus evaluation moment t_e for MTTF ∈ {5, 8, 12, 15} with b = 2 and α = 0.05 (left), for b ∈ {1, 3} with MTTF = 10 and α = 0.05 (middle), and for MTTF ∈ {5, 12} with b = 2 and α ∈ {0.05, 0.10} (right).

Based on the first and second graph, we conclude that the probability of finding a deviation in the MTTF increases with time, greater deviations, and greater α. This conclusion is in accordance with what we saw in section 3.7.3.

Based on the third graph, we conclude that the probability of finding a deviation in the shape parameter b increases with time. As opposed to scenario 2, where deviating shape parameters are unlikely to be found, in scenario 3 deviating shape parameters b are likely to be found. This difference with scenario 2 is due to the fact that we do not consider the long term behaviour of the Weibull distribution, because of the block replacement in this scenario.

In addition to the power of the correctness check, the power of the risk check is evaluated for example 5.1. The power values for α = 0.10, compared to the correctness check, are shown graphically in figure 5.3. We consider power values for detecting a lower MTTF and b, as these are relevant for the risk check.

Based on the graphs in figure 5.3, we conclude that the probability of finding a deviation is higher in the risk check than in the correctness check. This is due to the stricter upper bound. This conclusion applies to both the detection of a deviation in the MTTF and in the shape parameter b. The hypothesis test has a high power after a few years for this example. Thus, this test is very useful when checking whether the risk is not higher than expected.

Moreover, the power values of scenarios 2 (example 4.1) and 3 (example 5.1) are compared. In the examples considered, the same failure behaviour is assumed. The relevant power values are shown in table 5.4. For t_e > τ^(3) the probability of finding a deviating MTTF


Figure 5.3 – Power values for the correctness versus the risk check of example 5.1 (MTTF = 10, b = 2, τ^(3) = 5): power versus evaluation moment t_e for MTTF ∈ {5, 8} with b = 2 and α = 0.10 (left) and for MTTF = 10 with b = 1 and α = 0.10 (right), each for both checks.

is lower in scenario 3 than in scenario 2, while considering the same deviations. This is explained by the fact that the Weibull distributed lifetimes are interrupted by the block replacement.

Table 5.4: Power values correctness check scenarios 2 and 3 (MTTF = 10, b = 2, τ^(3) = 5, α = 0.05)

Evaluation moment (t_e)            5        10       15       20
Example 4.1  MTTF = 8, b = 2       0.3586   0.8727   0.9776   0.9981
Example 5.1  MTTF = 8, b = 2       0.3564   0.6601   0.8471   0.9478
Example 4.1  MTTF = 12, b = 2      0.0934   0.4784   0.8171   0.9288
Example 5.1  MTTF = 12, b = 2      0.0945   0.2820   0.4758   0.6123


6 Hypothesis test for scenario 4

In this chapter, a model is introduced for determining a lower and upper bound for the hypothesis test introduced in chapter 2 for scenario 4. Scenario 4 is characterised by components having a Weibull lifetime distribution and an age replacement maintenance rule. In section 6.1, the model will be introduced with the necessary notation. Section 6.2 will give an overview of the assumptions made in the model and will compare those with the practice at NedTrain. In section 6.3, the model is evaluated. The model allows us to determine a lower and upper bound for the hypothesis test, which is presented in sections 6.4 and 6.5. In section 6.6, an illustrative example will be given. The performance of the hypothesis test for this scenario will be discussed in section 6.7.

6.1 Model

Consider a train series consisting of trains of the same type. Trains within this train series start running services at several moments in time. A group of trains which started running services at the same moment, is called a batch. We consider one component in such a train. The physical location of the component in the train is called its position. Identical components may be located at more than one position per train. In the model we consider all those identical components positioned in a train series.

The set of batches is denoted by C, and the number of batches is denoted by |C|. The batches are numbered c = 1, 2, ..., |C|. In each train of such a batch one or more identical components of interest are positioned. The total number of positions of the component of interest in batch c ∈ C is denoted by l_c. The specific positions are numbered i = 1, 2, ..., l_c. An example is shown graphically in figure 3.1.

The lifetime of the component follows a Weibull distribution. If a component reaches the end of its lifetime, the failed part is replaced correctively by a spare part. This spare part is identical to the failed part, but as good as new. The independently and identically distributed lifetime of a component is denoted by random variable X, with common probability distribution f_X(x) = (b/a)(x/a)^(b−1) e^(−(x/a)^b), which is the Weibull distribution with scale parameter a and shape parameter b. Moreover, an individual part is replaced preventively, if the part reaches an age of τ^(4).
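For reference, the density above and the relation between the scale parameter a and the mean time to failure used in the examples of this thesis can be written down directly; the Matlab sketch below uses illustrative parameter values.

% Weibull density, distribution function, and mean time to failure (illustrative a, b).
a = 11.28;  b = 2;
fX   = @(x) (b/a).*(x./a).^(b-1).*exp(-(x./a).^b);   % density f_X(x)
FX   = @(x) 1 - exp(-(x./a).^b);                     % distribution function F_X(x)
mttf = a * gamma(1/b + 1);                           % MTTF = a*Gamma(1/b + 1), about 10 here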

We consider the cumulative number of corrective replacements up to several time instants for all trains within a train series, i.e. the total number of corrective replacements from t = 0 up to different moments of the time horizon. These moments of the time horizon are


called evaluation moments, and are denoted by t_e. The time at which a batch c ∈ C starts running services is denoted by t_c^0. For notational convenience, we introduce the variable t_c, which is the time a batch c ∈ C is in the field up to evaluation moment t_e, i.e. t_c = t_e − t_c^0 > 0.

To determine a lower and upper bound for the hypothesis test, we have to determine the expected number of corrective replacements and its variance for all trains within a train series up to given time instants. This number is based on the arrival process of corrective replacements up to time t for a single position i in batch c ∈ C, which is denoted by N_{i,c}^(m)(t), where m denotes the number of the scenario; thus, in this chapter m = 4. If specifying the specific position is not relevant for the expression, we delete the index i. Given the independently and identically distributed time between corrective replacements, N_{i,c}^(4)(t) is a renewal process. For notational convenience, we introduce notation for the second moment of N_c^(4)(t_e), namely V_c^(4)(t_e). The arrival process of replacements up to time t of all l_c positions with the component of interest within batch c ∈ C is:

N_c^(4)(t) = Σ_{i=1}^{l_c} N_{i,c}^(4)(t)    (6.1)

The arrival process of replacements for all positions in a train series, i.e. in all |C| batches, up to time t is:

N^(4)(t) = Σ_{c∈C} N_c^(4)(t)    (6.2)

An example is shown graphically in figure 6.1. Based on the expected value and variance of N^(4)(t), a lower and/or upper bound for the hypothesis test for scenario 4 is determined. These bounds are derived in sections 6.3 and 6.4.

Figure 6.1 – Timeline scenario 4: for each batch c, the per-position renewal processes N_{i,c}^(4) are summed over the l_c positions and over the batches to obtain N^(4)(t_e) = N_1^(4)(t_e) + N_2^(4)(t_e).

6.2 Overview of assumptions

Compared to scenario 3, an extra note is made in assumption 2, and assumption 3 is altered. For the explanations of the other assumptions, the reader is referred to sections 3.1 and 4.1.


1. Every position has an identical component having an identical, independent, stationary Weibull lifetime distribution, with scale parameter a and shape parameter b

2. At the moment of inflow, and after replacement, the position holds an as good as new component:
This assumption is justified as long as a failed part is replaced by an as new spare part. The lifetime is assumed to follow a Weibull distribution. Therefore, the condition of a component influences its time to failure. At NedTrain, this is a valid assumption for repairables and consumables. For main components this depends on the time until the next maintenance activity is planned. If little time is left until a big planned maintenance activity, sometimes only a minimal repair is executed.

3. A component is replaced preventively if it reaches an age of exactly τ^(4):
This assumption is justified as long as the maintenance rule is not changed. In practice, the exact moment of a preventive replacement depends for example on planning restrictions at the depot, but will be close to the prescribed maintenance term. The maintenance term, τ^(4), mentioned in the maintenance program will be a maximum.

4. The time to replace a part is negligibly small
5. The trains of one batch start running services at the exact same moment

6.3 Analysis

6.3.1 Single position

Our first objective is to evaluate N_c^(4)(t_e). The moment of inflow of a single position is equal to the moment the batch of the train it belongs to starts running services. Thus, the time in the field of this position is denoted by t_c.

Lemma 6.1. The expected number of corrective replacements up to time t_e for a single position in batch c ∈ C is:

E[N_c^(4)(t_e)] =
   F_X(t_c) + ∫_0^{t_c} E[N_c^(4)(t_c − x)] f_X(x) dx,                            if t_c < τ^(4)
   F_X(τ^(4)) + ∫_0^{τ^(4)} E[N_c^(4)(t_c − x)] f_X(x) dx
      + E[N_c^(4)(t_c − τ^(4))] (1 − F_X(τ^(4))),                                 if t_c ≥ τ^(4)
(6.3)

The second moment is given by:

V_c^(4)(t_e) =
   F_X(t_c) + 2 ∫_0^{t_c} E[N_c^(4)(t_c − x)] f_X(x) dx + ∫_0^{t_c} V_c^(4)(t_c − x) f_X(x) dx,          if t_c < τ^(4)
   F_X(τ^(4)) + 2 ∫_0^{τ^(4)} E[N_c^(4)(t_c − x)] f_X(x) dx + ∫_0^{τ^(4)} V_c^(4)(t_c − x) f_X(x) dx
      + V_c^(4)(t_c − τ^(4)) (1 − F_X(τ^(4))),                                                            if t_c ≥ τ^(4)
(6.4)

The variance of the number of corrective replacements up to time t_e for a single position in batch c ∈ C is:

Var[N_c^(4)(t_e)] = V_c^(4)(t_e) − (E[N_c^(4)(t_e)])^2    (6.5)

Proof. Consider N_{i,c}^(4)(t), which is defined as a renewal process. If t_c < τ^(4), the time between corrective replacements follows a Weibull distribution with scale parameter a and shape parameter b. As a result, the expected number of corrective replacements and its variance are


equal to those of scenario 2, see equations (4.3) and (4.5) in subsection 4.3.1. However, if t_c ≥ τ^(4), the component may have reached an age of τ^(4). Consequently, the component at the position was removed and replaced by an as new one. Therefore, we add this case in determining the first moment E[N_c^(4)(t_e)] and second moment V_c^(4)(t_e). For the interested reader, the derivations of these equations can be found in appendix F.2.

6.3.2 Trains of one batch

Our second objective is to evaluate the batch-level process N_c^(4)(t_e) of equation (6.1).

Lemma 6.2. The expected number of replacements up to time t_e of all l_c positions with the component of interest within batch c ∈ C, and its variance, are:

E[N_c^(4)(t_e)] = l_c · E[N_{i,c}^(4)(t_e)]
Var[N_c^(4)(t_e)] = l_c · Var[N_{i,c}^(4)(t_e)]    (6.6)

Proof. As N_{i,c}^(4)(t_e) is identical and independent for each position i, the sum of l_c of these processes has an expected value of E[N_c^(4)(t_e)] = l_c · E[N_{i,c}^(4)(t_e)], and variance Var[N_c^(4)(t_e)] = l_c · Var[N_{i,c}^(4)(t_e)], cf. Theorem 5, appendix E.3.

6.3.3 Train series

Our third objective is to evaluate N^(4)(t_e).

Lemma 6.3. The expected number of replacements for all positions in a train series, with the component of interest up to time t_e, and its variance, are:

E[N^(4)(t_e)] = Σ_{c∈C} E[N_c^(4)(t_e)]
Var[N^(4)(t_e)] = Σ_{c∈C} Var[N_c^(4)(t_e)]    (6.7)

Proof. As N_c^(4)(t_e) is identical and independent for each batch, the sum of |C| of these processes has an expected value of E[N^(4)(t_e)] = Σ_{c∈C} E[N_c^(4)(t_e)], and variance Var[N^(4)(t_e)] = Σ_{c∈C} Var[N_c^(4)(t_e)], cf. Theorem 5, appendix E.3.

6.4 Lower and upper bound

In this section, the lower and/or upper bound for the hypothesis test are determined. These bounds are based on the expected number of replacements and its variance for the train series, as introduced in subsection 6.3.3. To determine the bounds, the critical value of the standard normal distribution is used, which is denoted by z_α, and has a value such that P(Z ≤ z_α) = 1 − α, with Z ∼ Norm(0, 1).

Theorem 4. The approximate lower and upper bound for the correctness check are:

( E[N^(4)(t_e)] − z_{α/2} √(Var[N^(4)(t_e)]) , E[N^(4)(t_e)] + z_{α/2} √(Var[N^(4)(t_e)]) )    (6.8)


and, the approximate upper bound for the risk check is:

E[N^(4)(t_e)] + z_α √(Var[N^(4)(t_e)])    (6.9)

Proof. Consider N^(4)(t_e), the arrival process of replacements for all positions in a train series, with the component of interest up to time t_e, as introduced in Lemma 6.3 in subsection 6.3.3. To determine a lower and upper bound for the number of replacements for the train series, we use the fact that the number of replacements up to t_e can be approximated by a normal distribution:

N^(4)(t_e) ∼ Norm(E[N^(4)(t_e)], Var[N^(4)(t_e)])    (6.10)

Firstly, this normalisation is justified by the asymptotic result of a renewal process, cf. Theorem 9, appendix E.3. By this result we normalise the renewal processes per position, i.e. N_c^(4)(t_e) ∼ Norm(E[N_c^(4)(t_e)], Var[N_c^(4)(t_e)]). Therefore, the total number of replacements is normally distributed, cf. Theorem 7, appendix E.3. Secondly, we typically consider 30 components or more per batch. As a result, the total number of corrective replacements per batch approximately follows a normal distribution by the regular central limit theorem.

The approximation of the number of replacements by a normal distribution becomes better when E[N^(4)(t_e)] becomes bigger, i.e. |C|, l_c, 1/MTTF, and/or (t_e − t_c^0) become bigger. No general criterion exists for checking whether the approximation is good. To ensure that the lower bound does not become negative, one can consider the following rule of thumb: E[N^(4)(t_e)] > 5, in accordance with what we saw in chapter 3.

Based on this, we can determine a confidence interval for a level of significance, α, using the critical value z_α of the standard normal distribution.

6.5 Numerical evaluation

After having established the equations for determining the lower and upper bound, these equations can be numerically evaluated. The equations are again numerically evaluated using the theory of Riemann-Stieltjes integration, enhanced by applying the trapezoid rule. Note that the fact that a component is replaced preventively if it reaches an age of τ^(4) is taken into account in the calculation of p_j^(4).

Therefore, the expressions to be evaluated numerically are as follows:

E[N_c^(4)(t)] ≈ F_X(min(t, τ^(4))) + Σ_{j=δ}^{t−δ} E[N_c^(4)(t − j)] p_j^(4) + E[N_c^(4)(t − τ^(4))] (1 − F_X(τ^(4)))
V_c^(4)(t) ≈ F_X(min(t, τ^(4))) + Σ_{j=δ}^{t−δ} ( 2 E[N_c^(4)(t − j)] p_j^(4) + V_c^(4)(t − j) p_j^(4) ) + V_c^(4)(t − τ^(4)) (1 − F_X(τ^(4)))
(6.11)


p_j^(4) = P(j − δ < X < j) = (δ/2)(f_X(j) + f_X(j − δ)) if j < τ^(4), and p_j^(4) = 0 if j ≥ τ^(4)    (6.12)

j = δ, 2δ, 3δ, ..., t − δ

These equations are implemented in Matlab, to be able to determine the bounds for specific instances. The verification of this implementation in Matlab can be found in appendix G.5. The sums are evaluated by vector multiplications instead of recursively, which reduces the computation time and thus allows a higher accuracy. As we are discretising the Weibull distribution, the chosen δ is again 3.906e−3, based on the evaluation presented in section 4.7.
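As a minimal illustration of equations (6.11) and (6.12), the Matlab sketch below evaluates the age-replacement recursion per position with a simple loop rather than the vectorised implementation mentioned above; the parameter values (those of example 6.1) and the step δ are illustrative.

% Discretised age replacement recursion, equations (6.11)-(6.12), per position.
a = 11.28;  b = 2;             % Weibull parameters (illustrative)
tau = 3;                       % age replacement threshold tau^(4)
tc = 6;                        % time in the field of the batch up to te
delta = 1/256;                 % discretisation step (illustrative)

fX = @(x) (b/a).*(x./a).^(b-1).*exp(-(x./a).^b);
FX = @(x) 1 - exp(-(x./a).^b);

tgrid = 0:delta:tc;
n  = numel(tgrid);
EN = zeros(1, n);              % E[N_c^(4)(t)] on the grid
V  = zeros(1, n);              % second moment V_c^(4)(t) on the grid
kTau = round(tau/delta) + 1;   % grid index of t = tau
for k = 2:n
    j  = tgrid(2:k-1);
    pj = (delta/2) * (fX(j) + fX(j - delta));
    pj(j >= tau) = 0;                                  % equation (6.12): no failure beyond age tau
    EN(k) = FX(min(tgrid(k), tau)) + sum(EN(k-1:-1:2) .* pj);
    V(k)  = FX(min(tgrid(k), tau)) + sum((2*EN(k-1:-1:2) + V(k-1:-1:2)) .* pj);
    if tgrid(k) >= tau                                 % preventive replacement at age tau
        EN(k) = EN(k) + EN(k - kTau + 1) * (1 - FX(tau));
        V(k)  = V(k)  + V(k - kTau + 1)  * (1 - FX(tau));
    end
end
VarN = V - EN.^2;              % variance per position, equation (6.5)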

6.6 Example

In this section, an example is described that is used to illustrate the usage of the model in the hypothesis test.

Example 6.1. Consider a train series which consists of two batches. We consider a component in such a train with an assumed mean time to failure of 10 years and b = 2 (a = 10/[Γ(1/2 + 1)] = 11.28). The first batch consists of 20 trains, the second batch consists of 15 trains. These trains all have two positions at which the component is located. The first batch starts running services at t = 0 and the second batch starts running services two years later. Failed parts are replaced by spare parts. Moreover, an age replacement maintenance rule applies to these parts. We consider τ^(4) = 3. We evaluate the validity of the assumption after 6 years with a significance level α = 0.05.

This description results in the following evaluation:

Example 6.1   l_c   t_c^0   t_e   Single position                       Trains of one batch            Train series
c = 1         40    0       6     E[N_{i,1}^(4)(6)] = 0.1382            E[N_1^(4)(6)] = 5.528          E[N^(4)(6)] = 7.862
                                  V_{i,1}^(4)(6) = 0.1512
                                  Var[N_{i,1}^(4)(6)] = 0.1321          Var[N_1^(4)(6)] = 5.284
c = 2         30    2       6     E[N_{i,2}^(4)(6)] = 0.0778            E[N_2^(4)(6)] = 2.334          Var[N^(4)(6)] = 7.576
                                  V_{i,2}^(4)(6) = 0.0825
                                  Var[N_{i,2}^(4)(6)] = 0.0764          Var[N_2^(4)(6)] = 2.292

The bounds are given in table 6.1. These bounds are calculated in the same way as explained in section 3.6. If the realised number of corrective replacements after 6 years is outside the bounds, it is unlikely that the assumed mean time to failure (MTTF) and b are valid.

Table 6.1: Hypothesis test example 6.1

Correctness check              Risk check
H_0: µ = 7.862                 H_0: µ = 7.862
H_1: µ ≠ 7.862                 H_1: µ > 7.862
Lower bound: 2.467             Upper bound: 12.389
Upper bound: 13.256
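As a check on table 6.1, √(Var[N^(4)(6)]) = √7.576 ≈ 2.753, so the correctness check gives 7.862 ∓ 1.960 · 2.753 ≈ (2.47, 13.26), and the risk check gives 7.862 + 1.645 · 2.753 ≈ 12.39.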


6.7 Discussion on performance

As explained in subsection 2.4.3, we determine the performance of the test by considering a numerical example at several evaluation moments (t_e). Example 6.1 is the numerical example considered. The most relevant information of the performance tables is highlighted in this section. Discussed are the correctness of the expected value and variance of the number of corrective replacements, the realised significance levels, and the power of the hypothesis test. The complete performance tables can be found in appendix H.4.

6.7.1 Expected value and variance

The analytically determined expected value and variance and the simulated mean and variance for example 6.1 are given in table 6.2.

Table 6.2: Mean and variance example 6.1 (MTTF = 10, b = 2, τ^(4) = 3)

Evaluation moment (t_e)    2                 5                 10                 15                 20
E[N^(4)(t_e)]              1.2431            6.1237            13.8590            22.4660            30.5330
Var[N^(4)(t_e)]            1.2175            5.9648            13.6090            21.7400            29.7230
Simulated mean             1.2315 ± 0.0125   6.1171 ± 0.0257   13.8540 ± 0.0169   22.4930 ± 0.0528   30.5530 ± 0.0495
Simulated variance         1.1926 ± 0.0279   5.9308 ± 0.0783   13.5720 ± 0.2310   21.8740 ± 0.3711   30.0480 ± 0.5866

We conclude that the analytically determined values are correct, because they lie within the 95% confidence intervals of the simulated values.

6.7.2 Significance level

The realised significance levels of example 6.1 are given in table 6.3. In appendix H.4, the confidence intervals of the values are given. The maximum half-width of the confidence interval is 0.0025.

Table 6.3: Realised significance levels example 6.1 (MTTF = 10, b = 2, τ^(4) = 3)

Evaluation moment (t_e)    2        5        10       15       20
E[N^(4)(t_e)]              1.2431   6.1237   13.8590  22.4660  30.5330
Correctness check
  Chosen α = 0.05          0.0343   0.0590   0.0401   0.0540   0.0444
  Chosen α = 0.10          0.0359   0.0991   0.1006   0.0842   0.0988
Risk check
  Chosen α = 0.05          0.0360   0.0472   0.0690   0.0488   0.0542
  Chosen α = 0.10          0.1272   0.0905   0.1073   0.1024   0.1038

We conclude that the normal approximation is effective for this purpose, and is better for higher E[N^(4)(t_e)]. Why the realised significance levels are not exactly equal to the chosen α values has been explained in section 3.7.2. In this scenario, more time is necessary for the realised significance levels to lie closer to the chosen α values. This is due to (I) the lower expected values, and (II) the shape of the true probability distribution, which is approximated by the symmetric normal distribution.


6.7.3 Power

In this subsection, the power values of the correctness check are evaluated. Next, the power of the correctness check is compared with the power of the risk check. Lastly, the difference between the power of scenarios 2 and 4 is discussed. In appendix H.4, the confidence intervals of the values are given. The maximum half-width of the confidence interval is 0.0025. In general, we conclude that the power of the hypothesis test is high when considering deviations in the mean time to failure and shape parameter as presented, especially for lower values of the mean time to failure and shape parameter. As these lower values result in more corrective replacements, this is beneficial from a risk point of view.

A selection of the power values of the correctness check of example 6.1 is shown graphically in figure 6.2. In appendix H.4, the confidence intervals of the values are given.

Figure 6.2 – Selection of power values for the correctness check of example 6.1 (MTTF = 10, b = 2, τ^(4) = 3): power versus evaluation moment t_e for MTTF ∈ {5, 8, 12, 15} with b = 2 and α = 0.05 (left), for b ∈ {1, 3} with MTTF = 10 and α = 0.05 (middle), and for MTTF ∈ {5, 12} with b = 2 and α ∈ {0.05, 0.10} (right).

Based on the first and second graph, we conclude that the probability of finding a deviation in the MTTF increases with time, greater deviations, and greater α. This conclusion is in accordance with what we saw in section 3.7.3.

Based on the third graph, we conclude that the probability of finding a deviation in the shape parameter b increases with time. As opposed to scenario 2, where deviating shape parameters are unlikely to be found, in scenario 4 deviating shape parameters b are likely to be found. This difference with scenario 2 is due to the fact that we do not consider the long term behaviour of the Weibull distribution, because of the age replacement in this scenario.

In addition to the power of the correctness check, the power of the risk check is evaluated for example 6.1. The power values for α = 0.10, compared to the correctness check, are shown graphically in figure 6.3. We consider power values for detecting a lower MTTF and b, as these are relevant for the risk check.

Based on the graphs in figure 6.3, we conclude that the probability of finding a deviation is higher in the risk check than in the correctness check. This is due to the stricter upper


Figure 6.3 – Power values for the correctness versus the risk check of example 6.1 (MTTF = 10, b = 2, τ^(4) = 3): power versus evaluation moment t_e for MTTF ∈ {5, 8} with b = 2 and α = 0.10 (left) and for MTTF = 10 with b = 1 and α = 0.10 (right), each for both checks.

bound. This conclusion applies to both the detection of a deviation in the MTTF and in the shape parameter b. The hypothesis test has a high power after a few years for this example. Thus, this test is very useful when checking whether the risk is not higher than expected.

Moreover, the power values of scenarios 2 (example 4.1) and 4 (example 6.1) are compared. In the examples considered, the same failure behaviour is assumed. The relevant power values are shown in table 6.4. For t_e > τ^(4) the probability of finding a deviating MTTF is lower in scenario 4 than in scenario 2, while considering the same expected MTTF and deviations. This is explained by the fact that the Weibull distributed lifetimes are interrupted by the age replacement.

Table 6.4: Power values correctness check scenarios 2 and 4 (MTTF = 10, b = 2, τ^(4) = 3, α = 0.05)

Evaluation moment (t_e)            2        5        10       15       20
Example 4.1  MTTF = 8, b = 2       0.1272   0.3586   0.8727   0.9776   0.9981
Example 6.1  MTTF = 8, b = 2       0.1286   0.3531   0.4907   0.7101   0.8046
Example 4.1  MTTF = 12, b = 2      0.0109   0.0934   0.4784   0.8171   0.9288
Example 6.1  MTTF = 12, b = 2      0.0113   0.0770   0.1537   0.2972   0.3618


7 Case study: Cab control panel

In this chapter, we demonstrate how the hypothesis test can be executed at NedTrain. Firstly, the component of interest will be described in section 7.1 to introduce the case study. Secondly, in section 7.2 the data necessary for applying the methodology will be summarised. Thirdly, in section 7.3 the hypothesis test will be applied for several evaluation moments. Fourthly, the impact of the hypothesis test on the practice at NedTrain will be discussed briefly in section 7.4. Finally, a conclusion will be drawn in section 7.5.

7.1 Description of component

The component under consideration is the cab control panel (CCP) of the passenger information system in the SGM train series. This panel allows the driver to control the passenger information system. The CCP is located at two positions per train. The maintenance rule applied is failure based renewing maintenance. A failure of this component is noncritical. The supplier of the CCP gave an expected constant failure rate. The aim is to check the performance as promised by the supplier at several moments in time.

7.2 Data

The supplier promised the failure rate of a single component to be 3.0121e−1 per million kilometres. Moreover, the supplier assumes a usage of 190,000 kilometres per year, which gives a failure rate per year of λ = (3.0121e−1 / 1,000,000) × 190,000 = 5.7231e−2.

To determine the lower and upper bound for the number of corrective replacements, we need to know the inflow moments of batches and the number of components under consideration per batch. These are given in table 7.1. January 2003 is taken as t = 0, and the unit of measurement is years. How the division of batches and moments of inflow are determined is explained in appendix I. Moreover, the number of corrective replacements from 2003 to 2015 is shown in table 7.2. These are obtained from information systems R5 and Maximo at NedTrain.

Table 7.1: Batch sizes and moments of inflow

Batch   Batch size (trains)   Quantity per train   Batch size (l_c)   Moment of inflow (t_c^0)   Month of inflow
1       15                    2                    30                 1.03                       Jan-04
2       15                    2                    30                 1.87                       Nov-04
3       15                    2                    30                 2.63                       Aug-05
4       15                    2                    30                 3.56                       Jun-06
5       15                    2                    30                 5.62                       Aug-08
6       15                    2                    30                 6.25                       Apr-09


Table 7.2: Corrective replacements 2003 - 2015

Year                       2003  2004  2005  2006  2007  2008  2009  2010  2011  2012  2013  2014  2015
Corrective replacements    0     2     0     11    7     13    53    49    40    59    49    31    28
Cumulative sum             0     2     2     13    20    33    86    135   175   234   283   314   342

7.3 Hypothesis test

Because of the constant failure rate given by the supplier, scenario 1 is considered. The failure rate, inflow moments, batch sizes, and level of significance α = 0.05 are the input values for the test. Based on this, the lower and upper bound per evaluation moment is determined. For the end of every year we assess whether the number of corrective replacements realised lies within these bounds or not. This assessment is shown graphically in figure 7.1.
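A minimal sketch of this evaluation is given below, assuming the scenario 1 (Poisson) bounds are constructed, as in chapter 3, from the mean of the process (which for a Poisson process equals its variance) together with the normal approximation of section 3.6; the variable names are illustrative and the data come from section 7.2 and table 7.1.

% Scenario 1 evaluation for the cab control panel, per end-of-year evaluation moment.
lambda = 3.0121e-1/1e6 * 190000;            % failures per component per year (= 5.7231e-2)
lc  = [30 30 30 30 30 30];                  % components per batch (table 7.1)
t0  = [1.03 1.87 2.63 3.56 5.62 6.25];      % inflow moments per batch (table 7.1)
z   = 1.960;                                % z_{alpha/2} for alpha = 0.05

te = 2:13;                                  % end of 2004 ... end of 2015 (t = 0 is January 2003)
for k = 1:numel(te)
    exposure = sum(lc .* max(te(k) - t0, 0));   % component-years in the field up to te
    EN = lambda * exposure;                     % expected cumulative corrective replacements
    lb = EN - z*sqrt(EN);                       % Poisson variance equals the mean
    ub = EN + z*sqrt(EN);
    fprintf('te = %2d years: expected %6.1f, bounds [%6.1f, %6.1f]\n', te(k), EN, lb, ub);
end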

Figure 7.1 – Visualisation of the hypothesis test: the cumulative number of corrective replacements of the cab control panel at the end of each year from 2004 to 2015, plotted together with the expected value and the bounds.

7.4 Impact

Suppose we had been able to use the hypothesis test at evaluation moments in the past. As can be seen in figure 7.1, the cumulative number of corrective replacements did not deviate up to 2008. From 2009 onwards a deviation is detected.

This signifies that no action would have been taken up to 2008. As the deviation is not within the warranty period, the supplier could not have been held financially responsible for this deviation. However, investigation would have been initiated, and NedTrain might have chosen to alter the maintenance rule or to take this into account when doing business with the specific supplier.


At NedTrain, investigation was not initiated by the high number of corrective replacements. An initiation by the asset owner at the end of 2010 led to an investigation of the process of updating the passenger information in the system. By this means, attention was also paid to the number of corrective replacements. NedTrain did not consult the agreements made with the supplier, but the cause of the high number of corrective replacements was found in the lack of experience of the mechanics in detecting whether the CCP is defective. Therefore, CCPs were replaced unnecessarily. In order to solve this, training sessions for the mechanics were organised. In the near future, a change will be made in the construction of the component.

The hypothesis test would have been of added value in this example. Firstly, the developed hypothesis test would have noticed the deviation at the end of 2009, which is one year earlier and not by coincidence. Secondly, the comparison with the agreements made with the supplier is new to the company.

7.5 Conclusion

From this chapter it is concluded that the monitoring methodology can be used at NedTrain by using the available data. The application of the methodology would be the same for scenarios 2, 3, and 4. Moreover, the methodology can be of added value.


8 Implementation

The hypothesis test designed in the previous chapters can be used to monitor the validity of assumptions about failure behaviour at NedTrain. Implementation of the methodology is not considered part of this project, but in this chapter the ideal situation for applying the monitoring methodology is described. In section 8.1, the ideal situation will be described. Some specific implementation steps will be suggested in section 8.2.

8.1 Ideal situation

In the ideal situation a dashboard is used, which shows whether the realised cumulative number of corrective replacements of components is as expected. One monitor dashboard per train series should be in place, because the inflow moments and components are specific to a train series. This dashboard provides the following input values for Matlab:

1. Inflow moments of trains;
2. Data per component:
   • Assumed failure behaviour (mean time to failure and shape parameter b);
   • Number of positions of the specific component per train (note that the number of components per train may differ among batches of trains);
   • Maintenance rule, including maintenance term if applicable;
   • Cumulative number of corrective replacements up to a specified moment in time;
3. Chosen α value (a sketch of how these inputs could be structured is given after the output list below).

Based on these input values, Matlab is run, and the output is generated. The output is shown in the same dashboard, and consists of the following elements:

1. Bounds of the correctness check and the risk check;
2. Indication of whether the number of corrective replacements is outside the bounds.
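Purely as an illustration of how the dashboard input listed above could be passed to the Matlab routines, a possible structure is sketched below; all field names are hypothetical and the values are those of the case study of chapter 7.

% Hypothetical input structure for one component (field names are illustrative).
component.name              = 'cab control panel';
component.mttf              = 1/5.7231e-2;      % assumed mean time to failure (years)
component.shape             = 1;                % shape parameter b (b = 1: exponential/Poisson)
component.positionsPerTrain = 2;
component.maintenanceRule   = 'failure based';  % or 'block'/'age', with a term tau
component.inflowMoments     = [1.03 1.87 2.63 3.56 5.62 6.25];   % per batch (years)
component.batchSizes        = [15 15 15 15 15 15];               % trains per batch
component.observed          = 342;              % cumulative corrective replacements so far
alpha = 0.05;                                   % chosen significance level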

This dashboard should be available at any time. Ideally, the Maintenance Engineer and Reliability Engineer of the train series are notified automatically as soon as the number of corrective replacements lies outside the bounds. At that moment, they should (I) identify the cause of the deviation, and (II) evaluate the possibilities of making a warranty claim or taking a mitigating action. Possible causes for the deviation can be found in figure 1.1.

In addition to the evaluation at a specific time instant, in-depth analysis is relevant, especially when a deviation is found. A click on the specific component should reveal a graph of the expected and realised number of corrective replacements and the bounds for any component at several time instants in the past. An example of such a graph is shown in figure 7.1.


Prototypes for the dashboard and an in-depth analysis tool are made in Excel. Based on these prototypes, requirements for future data structures and tooling can be made. The tool for in-depth analysis also allows NedTrain to generate the tables as constructed in this thesis. In appendix I.3, a filled out sheet of the tool is shown for obtaining figure 7.1.

8.2 Implementation steps

In the previous section, the ‘to be’ situation was presented. In this section, specific implementation steps are suggested. The existence and possibilities of the methodology have been presented to users. In order to actually apply the methodology on a large scale, the data structures and tooling should be further developed. Therefore, the following next steps are necessary:

Use a unique identifier for each component. The data necessary for the dashboard are found in different documents and information systems at NedTrain. It should be possible to structure the input data per component automatically in the desired format, as thousands of components are in place. Currently, the assumed failure behaviour, number of positions of a component per train, maintenance rule, and cumulative number of corrective replacements cannot be linked automatically. The unique identifier per component should be used in each relevant document or information system.

Further develop the tooling. The tooling can be further developed for the data structure of NedTrain, such that minimal effort is required to apply the methodology.

Organise a training with explanation of the tool. Before the tool can be used, it is important that users understand how to read the dashboard and interpret the results. A training should make sure that the dashboard is used correctly.

Improve usability of the tooling. When users are going to use the methodology, they should have the opportunity to ask questions and to suggest improvements. These questions and suggestions should lead to correct usage and an improved tool.

Moreover, it is recommended to further embed the usage of Weibull lifetime distributions, instead of Poisson processes, in the process of designing a maintenance program, because usage of Weibull lifetime estimations enables making Performance Centered Maintenance decisions. Ideally, initial estimations are obtained from the supplier. Currently, NedTrain is enhancing its methods for estimating Weibull lifetime distributions. The current lifetime estimations at NedTrain can be translated to a Weibull distribution. If data about the (censored) lifetimes are available, the parameters can be estimated accurately.


9 Conclusions & Recommendations

In this chapter, conclusions and recommendations based on this research are given. The conclusion of the research will be given in section 9.1. In section 9.2, the contribution of this research for both NedTrain and literature will be presented. In section 9.3, recommendations for NedTrain and future research will be discussed.

9.1 Conclusion

This research aimed to design a monitoring methodology on component level, to address the uncertainty about whether the failure behaviour of components in the field is as expected during the phase in which maintenance programs are designed.

The methodology developed is a hypothesis test, considering the cumulative number of corrective replacements of the component of interest. The choice for this parameter is based on: (I) risk focus of the project, (II) relation with all assumptions considered, (III) data availability, and (IV) scarcity of failures. The hypothesis test determines whether the observed cumulative number of corrective replacements is as expected based on (I) the assumption about the failure behaviour of the component, and (II) the maintenance rule applied. An accurate and fast way to determine the expected value and variance of the cumulative number of corrective replacements is developed for four scenarios. These scenarios represent situations at NedTrain and are defined as follows:

1. Assumption that the arrival process of failures of a component is a Poisson process;
2. Assumption that the lifetime of a component follows a Weibull distribution, and failure based maintenance is applied;
3. Assumption that the lifetime of a component follows a Weibull distribution, and block replacement is applied;
4. Assumption that the lifetime of a component follows a Weibull distribution, and age replacement is applied.

By using this expected value and variance, and a normal approximation, a lower and/or upper bound can be determined for the hypothesis test. If the observed cumulative number of corrective replacements lies outside these bounds, the validity of the assumption is questionable and investigation is required.


9.2 Contribution

9.2.1 NedTrain

The designed methodology supports NedTrain in monitoring the validity of assumptions about failure behaviour of components. This is desirable in order to meet requirements of national and international governments, and meet performance agreements made with the asset owner. The appropriateness of the hypothesis test for data at NedTrain is demonstrated by a case study. NedTrain can use this method to monitor the validity of assumptions on a large scale, as (I) the method is easy and fast to execute and to interpret, and (II) applicable to many components.

9.2.2 Scientific

In this research, methods from the research fields of stochastic operations research and statistics are combined. In particular, we develop a hypothesis test for the mean-value function, i.e. the time-dependent mean of a renewal process. Thus, on the one hand we use notions from renewal theory for the determination of the mean and the variance of the number of corrective replacements of a certain component in a pre-specified time interval. On the other hand, these are the parameters of interest for which we would like to create a statistical hypothesis test.

To this purpose, we approximate the underlying distribution of the number of corrective replacements by appropriately using a variation of the central limit theorem for renewal processes, see Ross [18], together with the regular central limit theorem. The approximation by the regular central limit theorem is validated by the rather large number of trains with the specific component that are taken into account.

In conclusion, this research is a contribution to the literature for three reasons. To the best of our knowledge, the aforementioned combination of the two types of central limit theorems for the derivation of a hypothesis test on a time varying parameter is novel. Moreover, the theoretical framework created in this report can be easily and directly generalised to other components as well as other lifetime distributions, besides the exponential and the Weibull distribution. Lastly, we would like to note that this work makes an interesting contribution in the field of maintenance, by solving a practical and very applied problem.

9.3 Recommendations

9.3.1 NedTrain

For NedTrain, two suggestions are given:

A practical overview of different methodologies can catalyse their usage. NedTrain is changing its approach to designing maintenance programs to Performance Centered Maintenance. In line with this vision, methodologies are designed for, amongst others, designing optimal maintenance programs based on Weibull distributed lifetime estimations, and for monitoring component behaviour based on inspection research. The methodologies, including the one designed in this thesis, apply to different problem


situations. A decision tree or overall process description could catalyse the (correct) usage of these methodologies.

The methodology designed can be applied in more situations. We explicitly chose to monitor corrective replacements, because these data are always available. However, if the time between relevant events is found to follow an identical and independent exponential or Weibull distribution, the methodology applies to monitoring these events as well. Examples could be corrective disruptions, repairs of components, and replacements of components under condition based maintenance. The occurrence of these events has to be registered to enable monitoring them.

9.3.2 Future research

Two future research directions are identified. For these research directions, examples of the content are given.

Monitor failure behaviour under condition based maintenance. We considered monitoring failure behaviour under usage based maintenance rules. A research project is proposed on monitoring the validity of assumptions under condition based maintenance rules. This is relevant, because condition based maintenance is an often applied maintenance strategy. This maintenance strategy is also applied on a large scale at NedTrain.

It should be determined whether the number of corrective replacements is a relevant performance indicator for monitoring the failure behaviour of components under condition based maintenance rules. Another event may be found to be a relevant performance indicator. Thereafter, it should be researched whether the time between the chosen relevant events follows an independent and identically distributed exponential or Weibull distribution. If so, the methodology designed in this thesis is applicable. If the designed methodology is not applicable, a methodology for monitoring the chosen performance indicator should be designed.

Take into account influencing factors. We considered monitoring the validity of assumptions about failure behaviour based on assumed failure rates and Weibull lifetime distributions. A research project is proposed on monitoring the validity of assumptions about the influence of other factors. This is relevant, because these factors influence the expected and observed behaviour. Moreover, if changes can be attributed to influencing factors, targeted modifications can be made to influence the causes.

A first example is taking into account different failure modes, which are the result of several degradation mechanisms. Different degradation mechanisms per component may be in place. Therefore, a corrective replacement can be the result of more than one mechanism. Just one probability distribution and stationary model may then not reflect the failure behaviour of the component. A second example is taking into account different loads on components. Some trains of a series may be used more than other trains. Therefore, the expected number of corrective replacements per time unit can differ per train.


A

List of definitions

Age replacement: a usage based maintenance strategy under which an individual part is replaced preventively after it has been used for a fixed amount of usage (e.g. time or kilometres), or correctively if it fails before this time.

Block replacement: a usage based maintenance strategy under which an individual part is replaced preventively after the traceable unit it belongs to has been used for a fixed amount of usage (e.g. time or kilometres), independent of the age of the individual part, and correctively if the individual part fails before this time.

Batch of trains: a group of trains which started running services at the same moment.

Component: an exchangeable element of a subsystem that fulfils specific sub functions, as a contribution to realising the functions of one or more subsystems.

Condition based maintenance strategy: a maintenance strategy under which the condition of a part is measured and maintenance is conducted when a certain threshold level has been reached.

Consumables: type of parts which are discarded when they fail.

Corrective maintenance: maintenance activity which is a result of a failed part.

Evaluation moment: the pre-specified point in time at which the hypothesis is tested.

Failure: the unforeseen transition from available to non-available state.

Failure based maintenance: maintenance strategy under which parts are only replaced after they have failed.

Failure behaviour: lifetime of a component or number of failures of a component in a period.

Field data: observations of the failure behaviour when a system is in operation.

Field reliability: the reliability of a system in operation.

Fleet: a collection of train series.

Hypothesis test: statistical test which sets up a specific statement regarding a parameter of interest, and assesses the plausibility of the specific hypothesis by seeing whether the observed data support or refute that hypothesis, see Garthwaite [12].


Line replaceable units: parts which are removed as a whole from their positions in the train.

Main components: type of parts which are traceable by a serial number and are revised in a repair shop or a maintenance depot when they fail.

Maintenance: all those activities intended to keep or return a capital good to a condition required to fulfil its function, conform Smit [20].

Maintenance program: a document that specifies when and which maintenance activities have to be carried out during the life cycle of a train.

Maintenance rule: a specific guideline for setting the moment of execution of a maintenance activity.

Moment of inflow: the moment at which a train starts running services.

Monitoring: viewing whether the subject at hand works as assumed.

Overmaintenance: maintenance execution under which the service life of parts is only partially utilised, see Tinga [22], and the downtime of the train is unnecessarily high.

Part: the physical realisation of a component.

Performance: at NedTrain the term for the combination of safety, reliability, availability, costs of repair, quality and image, and environment.

Position: The physical place in the train at which a component is located.

Preventive maintenance: maintenance activity which is executed to prevent a failure.

Reliability: the likelihood of a technical failure of an element in a defined operating period, conform Smit [20].

Renewal process: a counting process in which the times between successive events are independent and identically distributed, with an arbitrary distribution, see Ross [18].

Repairables: type of parts which are revised at a repair shop when they fail.

Replacement: maintenance activity in which a failed part is taken out of the system, and substituted by a spare part.

System: a combination of mutually-dependent subsystems, capable of fulfilling specified functions in a given environment.

Subsystem: a combination of elements that fulfils specified functions, as part of the realisation of one or more functions at system level.

Time in the field: the period a train or component is running services.

Traceable component: component which can be tracked by its serial number, or by the serial number of a traceable unit it belongs to, if only one position for a component per serial number is in place.


Traceable unit: train or main component which can be tracked by its unique serial number.

Train: a series of connected railway carriages moved by a locomotive or by integral motors, i.e. 'treinstel'.

Train series: a collection of trains of the same type, i.e. trains of the same construction.

Usage based maintenance strategy: a maintenance strategy under which the total usage of a part is measured and maintenance is conducted when a certain threshold level has been reached.


B

Overview of notation

fX(·)  Probability distribution function of random variable X, fX(t) = dFX(t)/dt
FX(·)  Cumulative distribution function of random variable X, FX(t) = P(X ≤ t)
P(·)  Probability of the argument occurring
E[·]  Expected value of the argument
Var[·]  Variance of the argument
µ  Cumulative number of corrective replacements
∼ Weibull(a, b)  Weibull distributed with scale parameter a and shape parameter b, a = MTTF/Γ(1/b + 1)
MTTF  Mean time to failure
∼ Norm(µ, σ²)  Normal distributed with mean µ and variance σ²
∼ Poisson(λ)  Poisson distributed with rate λ
Γ(·)  Gamma function
a  Scale parameter of the Weibull distribution
α  Level of significance, probability of incorrectly rejecting a valid H0 hypothesis
b  Shape parameter of the Weibull distribution
β  Probability of incorrectly accepting a H0 hypothesis
1 − β  Power, probability of correctly rejecting a H0 hypothesis
c  Batch, c ∈ C
C  Set of batches, i.e. train series, C = {1, 2, ...}
Ci,c(t)  Arrival process of replacements of a component at a specific single position i of batch c ∈ C without maintenance execution in scenario 3
Cc(t)  Arrival process of replacements of a component at one single position of batch c ∈ C without maintenance execution in scenario 3
Dc(t)  Second moment of the number of replacements of a component up to t at one single position of batch c ∈ C without maintenance execution in scenario 3, i.e. E[(Cc(t))²]
δ  Step size for discretisation of the Weibull distribution
i  Number of a specific position
lc  Number of positions with the component under consideration in batch c
λ  Arrival rate of replacements for one single position
N(m)i,c(t)  Arrival process of replacements of a component at a specific single position i of batch c ∈ C up to time t, in scenario m
N(m)c(t)  Arrival process of replacements of a component at one single position of batch c ∈ C, up to time t, in scenario m
N(m)c(t)  Arrival process of replacements of a component at lc positions of batch c ∈ C, up to time t, in scenario m
N(m)(t)  Arrival process of replacements of a component at all positions of one train series up to time t, in scenario m
p(m)j  P(j − δ < X < j) for scenario m
t0c  The time at which batch c ∈ C starts running services
tc  Time in the field of batch c ∈ C
tac  Age of the process after the last preventive replacement under a block replacement maintenance rule
te  Evaluation moment
τ(3)  Block replacement preventive maintenance term, in terms of time
τ(4)  Age replacement preventive maintenance term, in terms of time
V(m)c(t)  Second moment of the number of replacements of a component up to time t at one single position of batch c ∈ C, in scenario m, i.e. E[(N(m)c(t))²]
Xn  Random variable denoting the lifetime in scenarios 2, 3, and 4
zα  Critical value: P(Z ≤ zα) = 1 − α, with Z ∼ Norm(0, 1)


C

Analysis fleet of NedTrain

In this appendix, the analysis of the size of the fleet of NedTrain and the number of components per train is given, in order to give insight into the context of the problem. In section C.1 the fleet is decomposed into the different train series and their batches. In section C.2 the number of components per train for one train series is shown.

C.1 Fleet

Table C.1 gives the size of the different train series and their batches. This table is based on an overview of the fleet of Configuration Management of NedTrain ('Overzicht Materieel') of January 2016.

Table C.1: Number of trains in the fleet of NedTrain

Train series | Number of trains | Batches ('deelseries') | Number of trains
SLT          | 131              | SLT-1                  | 35
             |                  | SLT-2                  | 64
             |                  | SLT-3                  | 32
MAT64        | 31               | V4, V5, V6             | 28
             |                  | V12, V13               | 3
SGM          | 90               | SGMm-0                 | 15
             |                  | SGMm-1 - II            | 15
             |                  | SGMm-1 - III           | 45
             |                  | SGMm-2                 | 15
ICMm         | 137              | ICMm - 1               | 40
             |                  | ICMm - 2               | 47
             |                  | ICMm - 3               | 30
             |                  | ICMm - 4               | 20
GTW          | 10               | GTW2-6                 | 2
             |                  | GTW2-8                 | 8
Thalys       | 2                |                        |
ICE          | 3                |                        |
VIRM         | 176              | VIRM-1                 | 81
             |                  | VIRM-2/3               | 44
             |                  | VIRM-4                 | 51
DDM          | 68               | DD-AR                  | 18
             |                  | DDZ                    | 50


C.2 Number of components per train

Table C.2 gives insight into the proportion of components that is found with a certain number per train. This analysis is done for the SGM train series. The first column shows the number per train, the second column the absolute frequency, and the third column the relative frequency. This table is based on an overview of the number of components per train of SGM ('IV lijst'). The usage of this overview implies that main components are not taken into account.

Table C.2: Number of components per train (SGM)

#   | Frequency | Percentage
1   | 421       | 19.29%
2   | 1013      | 46.43%
3   | 44        | 2.02%
4   | 249       | 11.41%
5   | 3         | 0.14%
6   | 92        | 4.22%
7   | 8         | 0.37%
8   | 58        | 2.66%
9   | 10        | 0.46%
10  | 6         | 0.27%
12  | 96        | 4.40%
13  | 2         | 0.09%
14  | 4         | 0.18%
16  | 9         | 0.41%
18  | 75        | 3.44%
20  | 3         | 0.14%
24  | 22        | 1.01%
26  | 6         | 0.27%
28  | 5         | 0.23%
32  | 2         | 0.09%
36  | 28        | 1.28%
52  | 2         | 0.09%
58  | 10        | 0.46%
80  | 6         | 0.27%
82  | 2         | 0.09%
128 | 6         | 0.27%


D

Degradation Model

In this appendix, the degradation model given in section 1.1.2 is discussed in more detail. The field reliability of a train is influenced by numerous factors. These factors are abstractly drawn in figure D.1. Firstly, the relation between replacements and system performance is explained in section D.1. Secondly, the relation between the lifetime distribution and degradation mechanisms is explained in section D.2. Thirdly, the relation between degradation mechanisms and influencing factors is explained in section D.3. Finally, a conclusion is drawn in section D.4.

D.1 Replacements and system performance

Optimal system performance is accomplished by ensuring a required safety level, and balancing reliability, availability, costs of repair, quality and image, and environment. If a failure influences the system performance more than desired, a preventive maintenance activity is prescribed, e.g. because of losses due to repair costs or safety. Corrective and preventive replacements influence the system performance for example through costs of repair, or time to repair.

D.2 Lifetime distribution and degradation mechanisms

A failure is equivalent to a component reaching the end of its lifetime. The moment at which the component reaches the end of its lifetime is often unknown in advance. This may be described by a probability distribution, giving the probability of a component reaching a certain age in a unit of measurement.

The lifetime distribution is a result of the degradation mechanisms in place. The decline from full condition to failed condition is called a degradation mechanism. When the condition reaches the technical limit for the component, a failure occurs. The full condition is also called the as-new condition. The condition is a performance measure for a component that assesses the degree to which a component meets quality requirements.

A degradation mechanism is mapped by relating the condition in a certain unit of measurement with a cause of a failure in another unit of measurement. We thus relate a dependent variable in a certain unit of measurement with an independent variable in another unit of measurement, e.g. we relate thickness in millimetres with the distance a component has been in service in kilometres. The unit of measurement of the independent variable, representative for a cause of a failure mechanism, is the Maintenance Relevant Unit (MRU). The MRU is a measurement for the use of a system element. Typically used are calendar time and kilometrage. As a result the degradation mechanism can be described by a distribution with certain characteristics. Often more than one degradation mechanism is in place.

Figure D.1 – Factors influencing failure behaviour of a component

D.3 Degradation mechanisms and influencers

Degradation mechanisms are influenced by:

Properties of the component. Properties of a component are for example its material, geometry, and initial condition. These are determined by the supplier of the component and its manufacturing process.

Effectivity of the maintenance executed during the life cycle of the component. The maintenance executed during the life cycle tries to slow down degradation mechanisms, e.g. greasing a component, which diminishes friction. The effectivity of the maintenance depends on the task and the quality of execution.

The load on the component. The load which a component faces influences the degradation mechanisms. A difference can be made between single load and aggregated load. Following Fan et al. [10]: overstress failure arises as a result of a single load condition, which exceeds the threshold of a strength property of a component. Wear-out failure occurs as a result of cumulative damage related to loads applied over an extended period of time, e.g. loads are kilometres in service, and weight of passengers.

Moreover, environmental characteristics influence all those processes and relationships, conform Ebeling [9]. Factors are for example the weather and the system in which the component operates. These factors can influence for example the relation between the loads faced by the component and the degradation mechanisms, e.g. the load on a component has an increasing influence if a redundant component in the subsystem has failed.

D.4 Conclusion

From this appendix we conclude that the lifetime distribution is a result of numerous interacting factors. For the interpretation of the data and the result of the hypothesis test, it is important to realise that there is not only one mechanism causing the observed replacements.


E

Theoretical background

In this appendix, we give relevant theoretical background knowledge that we assume to be known by the reader of this master thesis. Firstly, in section E.1 background knowledge concerning hypothesis testing is summarised. Secondly, in section E.2 a formal definition of a renewal process is given. Finally, in section E.3 general theorems from the field of statistics are given.

E.1 Hypothesis testing

E.1.1 Hypothesis formulation

Hypothesis testing sets up specific hypotheses regarding the parameters of interest and assesses the plausibility of any specific hypothesis by seeing whether the observed data support or refute that hypothesis, see Garthwaite et al. [12].

According to Montgomery & Runger [16], a statistical hypothesis is a statement about the parameters of one or more populations. A procedure leading to a decision about a particular hypothesis is called a test of a hypothesis. Hypothesis testing relies on using the information in a random sample from the population of interest. If this information is consistent with the hypothesis, we will not reject the hypothesis. However, if this information is not consistent with the hypothesis, we will conclude that the hypothesis is false. It is important to remember that hypotheses are always statements about the population or distribution under study, not statements about the sample.

In principle, hypothesis testing considers a null hypothesis and an alternative hypothesis. A hypothesis is a statement about a parameter of a population, for example the number of failures in a certain period. The null hypothesis, denoted H0, is the statement about the parameter of interest that is assumed to be true unless there is convincing evidence to the contrary. The alternative hypothesis, denoted H1, is a statement about the parameter of interest that is contradictory to the null hypothesis. A null hypothesis can be accepted or rejected, indicating the likeliness of it being a true or false statement. Rejection only takes place if there is convincing evidence in favour of H1. Before explaining when to accept or reject a hypothesis, the formulation of hypotheses is explained.

The formal formulation of H0 and H1 consists of several parts, which have to be determined. First, an example is evaluated in order to explain which parts are to be determined. Thereafter, the considerations per part are explained.

• H0: λ = 20
• H1: λ ≠ 20


This formulation consists of the following parts:
• The parameter of interest. Example: λ;
• The tested value of the parameter of interest. Example: 20;
• The sign of H1. Example: ≠.

The hypothesis will usually involve one or more parameters of the hypothesised distribution. The parameter of interest depends on the characteristic we want to know about the population.

The tested value of the parameter of interest is usually based on one of the following sources, cf. Montgomery & Runger [16]:

• Past experience or knowledge of the process, or even previous tests or experiments: the aim is often to determine whether the parameter value has changed;

• Some theory or model regarding the process under study: the aim is to verify the theory or model;

• Results from external considerations, such as design or engineering specifications, or from contractual obligations: the aim is conformance testing.

The sign of H1 depends on the choice between a two-sided alternative hypothesis and a one-sided alternative hypothesis. This choice depends on the aim of the test. If the objective is to make a claim involving statements such as greater than, less than, superior to, exceeds, at least, and so forth, a one-sided alternative is appropriate. If no direction is implied by the claim, or if the claim 'not equal to' is to be made, a two-sided alternative should be used.

In case of a two-sided alternative hypothesis, the sign will be the not-equal sign (≠). In the other case, the sign will be either > or <. This choice also depends on the aim of the test. For reasons to be explained later in this section, rejecting H0 is a strong conclusion. We thus should put the statement about which it is important to make a strong conclusion in the alternative hypothesis. Now an example is given.

Imagine a maximum of 10 replacements to be allowed in a certain period. If we are quite convinced about our knowledge of 10 replacements in a certain period and no severe consequences are in place, H1 may be formulated as: H1: λ > 10. Consequently, there should be strong evidence to conclude that the number of replacements is larger than we think, and action is required. It would be satisfactory to accept H0, which is not a strong conclusion, but no evidence is found to conclude the contrary. If we want to test that this maximum is very unlikely to be exceeded, H1 is formulated as: H1: λ < 10. We then want to reject H0 and have strong evidence that the number of replacements is less than the stated 10.

E.1.2 General hypothesis testing

Given the formulation of a hypothesis, we have to test it based on a sample. Testing a hypothesis involves taking a random sample, computing a test statistic from the sample data, and then using the test statistic to make a decision about the null hypothesis. A sample is said to be random if it is selected in such a way that every possible sample has the same probability of being selected.


The test statistic is the value the parameter of interest has in the sample. This value provides the basis for testing a statistical hypothesis. The value is part of the sample space, which is the set of all possible outcomes of a random experiment, cf. [16]. It is very unlikely that the test statistic has exactly the specifically chosen value of the population parameter, even if H0 is true. Thus, we have to make a decision about the likelihood of H0 being true based on this deviating test statistic. The difference between the value of the test statistic and the chosen value of the population parameter may or may not reflect a true difference. Assessing the likelihood requires specification of the probability of occurrence of an event, see Blischke et al. [5].

Making a decision about the likelihood of the null hypothesis is done by accepting or rejecting it. This should be done with the probability of reaching a wrong conclusion about the population, i.e. making an error, in mind. We may pick a sample which does not represent the population. Two types of errors are explained, cf. Montgomery & Runger [16]:

• Type I error: rejecting the null hypothesis when it is a true statement about the population. The probability of this event happening is denoted by α, and is also called the significance level, alpha error, or size of the test;

• Type II error: failing to reject the null hypothesis when it is a false statement about the population. The probability of this event happening is denoted by β.

Note that these probabilities are conditional probabilities: α is the chance of rejecting the null hypothesis, if H0 is true; β is the chance of accepting the null hypothesis, if H0 is not true.

Typically, in the hypothesis-testing procedure a certain maximum probability of making a type I error is chosen. This choice determines the critical region of a test. The critical region is the portion of the sample space of a test statistic that will lead to rejection of the null hypothesis, also called the rejection region. This region is bounded by a critical value, which depends on the chosen level of significance. In order to control this specific level of significance, and thus the type I error, the sign of H0 is always an equality sign (=). Controlling this is done by choosing a value for α.

The probability of a type II error is not a constant, but depends on the true value of the population parameter and the sample size, see [16]. However, this true value is unknown. Thus, in order to calculate β we have to assume a specific value for the population parameter, which is different from the value mentioned in H0. Consequently, the condition 'H0 is not true' is explicit, and probabilities can be calculated.

E.1.3 Power of a test

An important concept is the power of a statistical test, which uses β. This is the probability of rejecting the null hypothesis when the alternative hypothesis is a true statement about the population. It is computed as 1 − β, and can be interpreted as the probability of correctly rejecting a false null hypothesis. We often compare statistical tests by comparing their power properties. It is a very descriptive and concise measure of the sensitivity of a statistical test: the ability of the test to detect differences.


Since the probability of wrongly rejecting H0 can be controlled, we think of rejection of the null hypothesis as a strong conclusion. The probability of wrongly accepting H0, β, cannot be controlled. That is why we think of accepting H0 as a weak conclusion, unless β is acceptably small and thus the power high. Instead of saying accepting H0 we prefer to say failing to reject H0. Failing to reject does not necessarily mean that there is a high probability that H0 is true, but may simply mean that more data are required to reach a strong conclusion.

As we can control the probability of wrongly rejecting H0 by setting an α, we may choose to set this value as low as possible. Choosing a smaller α will lead to a smaller critical region, so more evidence is required to reject H0 and the probability of wrongly rejecting H0 is smaller. However, setting a smaller alpha level brings a higher likelihood of making a type II error. Think of it as setting a stricter criterion for rejecting H0. Consequently, it is more likely to not reject H0 when you should, making a type II error.

E.1.4 Conclusion of a test

Given a critical region and a test statistic, we can make a decision about the null hypothesis. If the test statistic lies in the critical region, H0 is rejected and H1 accepted. Otherwise, we fail to reject H0 and thus H0 is accepted. When rejecting H0 we are convinced that the difference between the test statistic and the value of the population parameter indicates a true deviation from the value of the population parameter in the population.

E.1.5 Confidence interval

Another way of describing the test is by specifying an appropriate confidence interval. This is an interval estimate of the population parameter. We cannot be certain that the interval contains the true, unknown parameter, but the interval is constructed so that we have high confidence that it does contain the unknown parameter of interest.

The end-points of the confidence interval, L and U, are based on the following probability statement: P(L ≤ λ ≤ U) = 1 − α. This indicates that there is a probability of 1 − α of selecting a sample for which the confidence interval will contain the true value of λ. It is tempting to assume that the true value of the population parameter is within the interval with probability 1 − α. However, when basing the confidence interval on a sample, the end-points of the interval are random variables depending on the sample, as indicated by the probability statement. The correct interpretation is that 100(1 − α)% of the samples will include the true value of the unknown population parameter.

There is a close relationship between the test of a hypothesis about a parameter and the confidence interval of the parameter. H0 will be rejected if the test statistic is not in the 100(1 − α)% confidence interval.

The insights, however, are slightly different. Confidence intervals provide a range of likely values for the parameter at a stated confidence level, whereas hypothesis testing is an easy framework for displaying the risk levels, such as the P-value, associated with a specific decision [16].


E.2 Definition of a renewal process

For this definition we follow Ross [18]. Let {N(t), t ≥ 0} be a counting process and let Xn denote the time between the (n − 1)st and nth event of this process, n ≥ 1. If the sequence of nonnegative random variables {X1, X2, ...} is independent and identically distributed, the counting process {N(t), t ≥ 0} is said to be a renewal process.

E.3 List of theorems

Theorem 5. Let Q and R be independent random variables, then:

E(Q + R) = E(Q) + E(R)
Var(Q + R) = Var(Q) + Var(R)     (E.1)

Theorem 6. Let A be an event, and let {Bi}, 1 ≤ i ≤ n, be a partition of the sample space. Then:

P(A) = Σ_{i=1}^{n} P(A|Bi) P(Bi)     (E.2)

Theorem 7. Let Q and R be continuous random variables following a Normal distribution:

Q ∼ Norm(µQ, σ²Q)
R ∼ Norm(µR, σ²R)     (E.3)

Let Q and R be independent, then:

Q + R ∼ Norm(µQ + µR, σ²Q + σ²R)     (E.4)

Theorem 8. Let Q and R be discrete random variables following a Poisson distribution:

Q ∼ Poisson(λQ)
R ∼ Poisson(λR)     (E.5)

Let Q and R be independent, then:

Q + R ∼ Poisson(λQ + λR)     (E.6)

Theorem 9. For large t, N(t) is approximately normally distributed with mean t/µ and variance tσ²/µ³, where µ and σ² are respectively the mean and the variance of the inter-arrival distribution, cf. Ross [18]:

lim(t→∞) P( (N(t) − t/µ) / √(tσ²/µ³) < x ) = (1/√(2π)) ∫_{−∞}^{x} e^{−y²/2} dy

lim(t→∞) Var(N(t)) / t = σ²/µ³     (E.7)
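A quick Monte Carlo check of this approximation can be done along the following lines (a minimal Matlab sketch with hypothetical Weibull parameters, assuming the Statistics Toolbox):

% Minimal sketch: compare the simulated mean and variance of N(t) with the
% asymptotic values t/mu and t*sigma^2/mu^3 from Theorem 9.
a = 8.86;  b = 2;  t = 100;  nRuns = 2000;            % hypothetical values
mu     = a * gamma(1/b + 1);                          % mean of the Weibull lifetime
sigma2 = a^2 * (gamma(2/b + 1) - gamma(1/b + 1)^2);   % variance of the lifetime
N = zeros(nRuns, 1);
for r = 1:nRuns
    s = 0;
    while true
        s = s + wblrnd(a, b);                         % next i.i.d. inter-arrival time
        if s > t, break; end
        N(r) = N(r) + 1;                              % one more renewal before t
    end
end
fprintf('Simulated mean %.2f vs t/mu = %.2f\n', mean(N), t/mu);
fprintf('Simulated variance %.2f vs t*sigma^2/mu^3 = %.2f\n', var(N), t*sigma2/mu^3);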


F

Mathematical derivations

In this appendix, the mathematical derivations of the first and second moments of the renewal processes used in the models are demonstrated. In section F.1 the derivations for a Weibull distributed time between renewals are presented. In section F.2 the derivations for a renewal process under age replacement are given.

F.1 First and second moment renewal process

Let N(t) be a renewal process and let V(t) = E[N²(t)]. The renewals considered are corrective replacements.

E[N(t)] = ∫_0^∞ E[N(t) | X1 = x] fX(x) dx
        = ∫_0^t E[N(t) | X1 = x] fX(x) dx
        = ∫_0^t (1 + E[N(t − x)]) fX(x) dx
        = FX(t) + ∫_0^t E[N(t − x)] fX(x) dx     (F.1)

V(t) = ∫_0^t E[N²(t) | X1 = x] fX(x) dx
     = ∫_0^t E[(1 + N(t − x))²] fX(x) dx
     = ∫_0^t (E[N²(t − x)] + 2E[N(t − x)] + 1) fX(x) dx
     = FX(t) + 2 ∫_0^t E[N(t − x)] fX(x) dx + ∫_0^t V(t − x) fX(x) dx     (F.2)

For equation (F.1) we follow Arts [4]. This equation is the renewal equation. For the second moment we use the law of total probability (Theorem 6, appendix E.3) and the definition of a renewal process.
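As an illustration, the recursions (F.1) and (F.2) can be evaluated numerically on a discretised grid, along the lines of the following minimal Matlab sketch (the step size and Weibull parameters are hypothetical, and the discretisation below is a simplification rather than the exact scheme of section 4.5):

% Minimal sketch: numerically evaluate E[N(t)] and V(t) = E[N^2(t)] from the
% recursions (F.1) and (F.2), discretising the Weibull lifetime distribution.
a = 8.86;  b = 2;                       % hypothetical Weibull scale and shape
delta = 0.1;  tmax = 20;                % hypothetical step size and horizon
t  = 0:delta:tmax;   K = numel(t);
F  = wblcdf(t, a, b);                   % cumulative lifetime distribution on the grid
p  = diff(F);                           % probability of a lifetime in each grid interval
EN = zeros(1, K);  V = zeros(1, K);     % EN(1) = V(1) = 0 at t = 0
for k = 2:K
    % convolution sums over the discretised lifetime probabilities
    EN(k) = F(k) + sum(p(1:k-1) .* EN(k-1:-1:1));
    V(k)  = F(k) + sum(p(1:k-1) .* (2*EN(k-1:-1:1) + V(k-1:-1:1)));
end
fprintf('E[N(%g)] = %.4f, Var[N(%g)] = %.4f\n', tmax, EN(end), tmax, V(end) - EN(end)^2);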


F.2 First and second moment age replacement

Let N(t) be a renewal process counting the corrective replacements under age replacement and let V(t) = E[N²(t)]. Both corrective and preventive replacements result in an 'as new' part. However, we only count the corrective replacements. In these derivations the law of total probability (Theorem 6, appendix E.3) and the definition of a renewal process are used.

If t ≥ τ(4):

E[N(t)] = ∫_0^∞ E[N(t) | X1 = x] fX(x) dx
        = ∫_0^τ(4) E[N(t) | X1 = x] fX(x) dx + ∫_τ(4)^∞ E[N(t) | X1 = x] fX(x) dx
        = ∫_0^τ(4) (1 + E[N(t − x)]) fX(x) dx + ∫_τ(4)^∞ (0 + E[N(t − τ(4))]) fX(x) dx
        = FX(τ(4)) + ∫_0^τ(4) E[N(t − x)] fX(x) dx + E[N(t − τ(4))] (1 − FX(τ(4)))     (F.3)

If t < τ(4), E[N(t − τ(4))] is zero by definition of a renewal process. In this case, the expected number of corrective replacements is equal to equation (F.1). Consequently, the recursive equations for E[N(t)] are as follows:

E[N(t)] = FX(t) + ∫_0^t E[N(t − x)] fX(x) dx,   if t < τ(4)
E[N(t)] = FX(τ(4)) + ∫_0^τ(4) E[N(t − x)] fX(x) dx + E[N(t − τ(4))] (1 − FX(τ(4))),   if t ≥ τ(4)     (F.4)

If t ≥ τ(4):

V(t) = ∫_0^∞ E[N²(t) | X1 = x] fX(x) dx
     = ∫_0^τ(4) E[(1 + N(t − x))²] fX(x) dx + ∫_τ(4)^∞ E[(0 + N(t − τ(4)))²] fX(x) dx
     = ∫_0^τ(4) E[N²(t − x) + 2N(t − x) + 1] fX(x) dx + ∫_τ(4)^∞ E[N²(t − τ(4))] fX(x) dx
     = FX(τ(4)) + 2 ∫_0^τ(4) E[N(t − x)] fX(x) dx + ∫_0^τ(4) V(t − x) fX(x) dx + V(t − τ(4)) (1 − FX(τ(4)))     (F.5)

If t < τ(4), V(t − τ(4)) is zero by definition of a renewal process. In this case, the second moment is equal to equation (F.2). Consequently, the recursive equations for V(t) are as follows:

V(t) = FX(t) + 2 ∫_0^t E[N(t − x)] fX(x) dx + ∫_0^t V(t − x) fX(x) dx,   if t < τ(4)
V(t) = FX(τ(4)) + 2 ∫_0^τ(4) E[N(t − x)] fX(x) dx + ∫_0^τ(4) V(t − x) fX(x) dx + V(t − τ(4)) (1 − FX(τ(4))),   if t ≥ τ(4)     (F.6)
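A minimal Matlab sketch of the recursion (F.4) for the expected number of corrective replacements under age replacement, using the same hypothetical discretisation as in the sketch for (F.1) and (F.2) (parameters are illustrative only):

% Minimal sketch: E[N(t)] under age replacement with preventive term tau,
% following recursion (F.4) on a discretised grid.
a = 8.86;  b = 2;  tau = 5;             % hypothetical Weibull parameters and term
delta = 0.1;  tmax = 20;
t  = 0:delta:tmax;   K = numel(t);
F  = wblcdf(t, a, b);
p  = diff(F);
kt = round(tau/delta);                  % grid index such that t(kt+1) = tau
EN = zeros(1, K);
for k = 2:K
    if k-1 < kt                         % t < tau: ordinary renewal equation (F.1)
        EN(k) = F(k) + sum(p(1:k-1) .* EN(k-1:-1:1));
    else                                % t >= tau: corrective failures occur before age tau only
        EN(k) = F(kt+1) + sum(p(1:kt) .* EN(k-1:-1:k-kt)) ...
                + EN(k-kt) * (1 - F(kt+1));
    end
end
fprintf('E[N(%g)] under age replacement = %.4f\n', tmax, EN(end));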


G

Matlab implementation

In this appendix, the implementation in Matlab is shown to give reliable results. Firstly, the steps of the simulation are given in section G.1. Secondly, the method for determining the number of replications is introduced in section G.2. Thirdly, the verification of the simulation is given in section G.3. Fourthly, in section G.4 an explanation is given of the method to determine the significance level and power of the hypothesis tests. Finally, the implementation of the hypothesis test is verified in section G.5.

G.1 Steps

To determine the accuracy of the analytically determined mean, variance, and lower and/or upper bounds, a Monte Carlo simulation of the arrival processes in the different scenarios is done. In figures G.1, G.2, and G.3 the steps of the simulations are shown graphically. For scenarios 1 and 2 the same simulation model applies, where for scenario 1 we set shape parameter b = 1. In every scenario, scale parameter a is calculated for a given mean time to failure X and shape parameter b, using the following formula: a = X/Γ(1/b + 1).
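For reference, this conversion from a given mean time to failure and shape parameter to the Weibull scale parameter, as used when drawing lifetimes in the simulation, may look as follows (a minimal Matlab sketch with hypothetical values, assuming the Statistics Toolbox):

% Minimal sketch: Weibull scale parameter from a given mean time to failure
% (here denoted mttf) and shape parameter b, followed by drawing lifetimes.
mttf = 10;  b = 2;                  % hypothetical values
a = mttf / gamma(1/b + 1);          % scale parameter, a = MTTF / Gamma(1/b + 1)
lifetimes = wblrnd(a, b, 1, 5);     % five i.i.d. Weibull distributed lifetimes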

G.2 Number of replications

A replication is a repeated execution of a simulation run. A decision has to be made concerning the number of replications, to make sure that the outcome variables are reliable. This decision has to be made because we use random numbers. Given the nature of random numbers, conclusions cannot be drawn based on a single replication. In this section, we introduce the general method of determining the number of replications necessary. This method is used in the Monte Carlo simulation to determine the number of replications.

The number of replications is determined by drawing a confidence interval of the outcome variable for a number of replications, and deciding whether the width of this interval is small enough. As we do not know the distribution of the outcome variable, the confidence interval is based on the t-distribution.

The half-width of the confidence interval considering outcome variable Y is determined as follows:

t(1 − α/2, n − 1) √(s²/n)     (G.1)


Figure G.1 – Steps in simulation scenarios 1 and 2

where
t(1 − α/2, n − 1) = the t distribution's (1 − α/2)th percentile for n − 1 degrees of freedom
s² = variance of the mean of outcome variable Y for n replications
n = number of replications used to calculate the summary statistic


This value is first calculated for the means of the outcome variable for 10 replications, as proposed by Chung [6]. The chosen maximum half-width depends on the outcome variable to be determined. The half-width of the confidence interval is mentioned for simulated values, and shown by the ± sign in the performance tables in appendix H.
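A minimal Matlab sketch of this stopping rule (the outcome variable and the maximum half-width below are purely illustrative; tinv from the Statistics Toolbox is assumed):

% Minimal sketch: keep adding replications until the half-width of the
% confidence interval of the mean of outcome variable Y is small enough.
alpha   = 0.05;
maxHalf = 0.025;                    % illustrative maximum half-width
Y       = poissrnd(3, 10, 1);       % hypothetical outcomes of 10 initial replications
halfWidth = tinv(1 - alpha/2, numel(Y) - 1) * sqrt(var(Y) / numel(Y));
while halfWidth > maxHalf
    Y         = [Y; poissrnd(3)];   % one extra replication
    halfWidth = tinv(1 - alpha/2, numel(Y) - 1) * sqrt(var(Y) / numel(Y));
end
fprintf('Stopped after %d replications, half-width %.4f\n', numel(Y), halfWidth);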

To determine the α and 1 − β values, a specific methodology is followed. The simulation of the arrival process is used to determine whether the simulated observed value lies in the critical region. The output is a boolean. We are interested in the mean of these booleans, which gives the probability of an observed value not lying in the critical region. If the values of these booleans are all the same, the mean will be either one or zero. The probability of this happening follows a binomial distribution. The probability of a boolean being one or zero depends on the test we are doing, and this probability is in fact what we are determining.


Figure G.2 – Steps in simulation scenario 3: block replacement


Figure G.3 – Steps in simulation scenario 4: age replacement


To determine the significance level, the probability of lying within the bounds should be (1 − α). This is an extreme case. Based on this, we determine that 5,000 runs to determine one probability value should be enough. The probability of all booleans having the same value then reduces to 1e−6. Thus, 10 runs with 5,000 replications of the arrival process are initially done to determine the half-width of the confidence interval of the summary statistic. If the desired half-width is not reached yet, another run of 5,000 replications is done to determine an extra probability value and a new summary statistic. This procedure is repeated until the desired half-width is reached.

G.3 Verification simulation

1. For scenarios 1 and 2, we check whether the mean and variance of the simulated number of corrective replacements for b = 1 are equal to those of a Poisson distribution. Let us simulate 30 trains with one single component, starting running services at t = 0. The failure rate of the component is λ = 1/10, i.e. MTTF = 10 and b = 1. Hence, the mean number of corrective replacements up to moment t should be 30 · (1/10) · t. The variance is equal to this mean. The number of replications is based on the 95% confidence interval of the mean. If the half-width is ≤ 0.025, we stop replicating. We obtain the following results:

te:      | 2      | 5       | 10      | 20
Mean     | 5.9925 | 14.9877 | 30.0080 | 59.9902
Variance | 6.0349 | 14.9296 | 29.8422 | 60.0604

As can be seen, the values 6, 15, 30 and 60 lie in the 95% confidence interval.

2. For scenarios 3 and 4, we choose b = 1 and check whether the mean and variance of the simulated number of corrective replacements are equal to those of a Poisson distribution. Let us simulate 30 trains with one single component, starting running services at t = 0. The lifetime of the component follows a Weibull distribution with MTTF = 10 and b = 1, and the maintenance term is τ(3) = τ(4) = 2. As a result, the number of corrective replacements up to moment t follows a Poisson distribution with mean 30 · (1/10) · t. The variance is equal to this mean. The number of replications is based on the 95% confidence interval of the mean. If the half-width is ≤ 0.025, we stop replicating. We obtain the following results:

te:            | 2      | 5       | 10      | 20
Block Mean     | 6.0168 | 14.9865 | 29.9938 | 59.9995
Block Variance | 5.8109 | 15.0246 | 29.644  | 59.9969
Age Mean       | 5.9917 | 14.9874 | 30.0197 | 59.816
Age Variance   | 6.1415 | 15.0307 | 29.5322 | 59.9647

As can be seen, the values expected under a Poisson distribution, 6, 15, 30 and 60, lie in the 95% confidence interval. Moreover, no difference is seen in the values for the two scenarios, as expected with a constant failure rate.


3. For scenarios 3 and 4, we choose b = 2 (increasing failure rate) and check whether the mean and variance of the simulated number of corrective replacements are lower than for b = 1, with equal MTTF. Let us simulate 30 trains with one single component, starting running services at t = 0. The lifetime of the component follows a Weibull distribution with MTTF = 10 and b = 2, and the maintenance term is τ(3) = τ(4) = 2. The number of replications is based on the 95% confidence interval of the mean. If the half-width is ≤ 0.025, we stop replicating. We obtain the following results:

te:        | 2      | 5      | 10     | 20
Block Mean | 0.9191 | 2.1211 | 4.6397 | 9.3171
Age Mean   | 0.9510 | 2.0989 | 4.6619 | 9.3295

As can be seen, the values are lower than for b = 1.

4. For scenarios 3 and 4, we choose b > 1 (increasing failure rate) and two values for τ(m). We check whether the number of corrective replacements is higher for a higher τ(m). Let us simulate 30 trains with one single component, starting running services at t = 0. The lifetime of the component follows a Weibull distribution with MTTF = 10 and b = 2, and the maintenance terms chosen are τ(3) = τ(4) = 2 and τ(3) = τ(4) = 5. The number of replications is based on the 95% confidence interval of the mean. If the half-width is ≤ 0.025, we stop replicating. We obtain the following results:

te:                 | 2      | 5      | 10      | 20
Block τ(3) = 2 Mean | 0.9191 | 2.1211 | 4.6397  | 9.3171
Block τ(3) = 5 Mean | 0.9243 | 5.5413 | 11.0500 | 22.1041
Age τ(4) = 2 Mean   | 0.9510 | 2.0989 | 4.6619  | 9.3295
Age τ(4) = 5 Mean   | 0.9212 | 5.5234 | 11.1066 | 22.2425

As can be seen, the number of corrective replacements becomes larger for a larger value of τ(m). At the evaluation moment of 2 no difference is found, which is due to the high mean time to failure of 10, compared to the evaluation moment of 2.

5. For scenarios 3 and 4, we choose b > 1 (increasing failure rate) and τ(m) = 5. We check whether the number of corrective replacements is higher for scenario 4. As can be seen in the table of verification step 4, at evaluation moment 10 a difference can be seen in the mean number of corrective replacements. We see that the mean number of corrective replacements under age replacement is indeed higher.

G.4 Determination of significance level and power

The significance level and power of the tests are determined by Monte Carlo simulation. We replicate the 'true' arrival process of the corrective replacements based on the lifetime distribution and the maintenance rule applied, as explained in section G.1. This simulation provides an observed value of the number of corrective replacements for the numerical example. By repeating this simulation, we determine the significance level and power.


The significance level is the probability of incorrectly rejecting H0. The significance level is obtained by simulating the 'true' arrival of corrective replacements, with the lifetime parameters being equal to the assumption about the failure behaviour under consideration. This provides an observed value. Subsequently, a subtest assesses whether this value lies outside the bounds. The proportion that lies outside the bounds is expected to be about equal to the chosen α level.

Not only the significance level but also the power, 1 − β, is determined. The power is the probability of finding a certain deviation, i.e. the probability of correctly rejecting H0 if we assume a certain true behaviour. The power is determined by simulating a 'true' arrival process with parameters not equal to the assumption under consideration. This provides an observed value. Subsequently, a subtest assesses whether this value lies outside the bounds. The proportion that lies outside the bounds is 1 − β. As the simulated arrival process is different from the arrival process which results from the assumption about the failure behaviour, an observed value is desired to lie outside the bounds as soon as possible. The desired power is typically 0.80.
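A minimal Matlab sketch of this procedure (the 'true' parameters and the rounded bounds below are hypothetical, and a Poisson arrival process stands in for the simulated arrival process of scenario 1):

% Minimal sketch: estimate the significance level or power by Monte Carlo.
% Observed counts are simulated under a chosen 'true' arrival process and
% compared against the acceptance bounds of the hypothesis test.
nRuns    = 5000;
nPos     = 30;   te = 10;                    % positions and evaluation moment
lambdaTr = 1/8;                              % hypothetical 'true' failure rate
lb = 20;  ub = 40;                           % hypothetical rounded bounds
obs      = poissrnd(lambdaTr * nPos * te, nRuns, 1);   % simulated observed values
rejectH0 = (obs < lb) | (obs > ub);
fprintf('Estimated power (1 - beta): %.4f\n', mean(rejectH0));
% With the 'true' rate equal to the assumed rate (here 1/10), the same
% proportion estimates the realised significance level instead.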

G.5 Verification of Matlab implementation of hypothesistests

In this section, the correctness of the Matlab determination of the mean, variance, lower and/or upper bounds, significance level, and power values is verified. To accomplish this we use the fact that a Weibull distribution with shape parameter b = 1 is equal to an exponential distribution. The arrival process of corrective replacements of components having an exponentially distributed lifetime follows a Poisson distribution. Thus, if shape parameter b = 1:

1. The mean and variance should be equal for every scenario;
2. The significance level and power should be similar to the probability that a Poisson distributed value lies within the determined bounds.

We evaluate a numerical example of 30 trains with one single component starting running services at t = 0. The lifetime of the component follows a Weibull distribution with MTTF = 10 and b = 1. The mean E[N(m)(te)], variance Var[N(m)(te)], lower and/or upper bounds, significance level, and power values can be found in table G.1.

Table G.1 shows that the expected means and variances of the number of corrective replacements are very similar. As explained in section 4.5, we slightly underestimate the number of corrective replacements by discretising the Weibull distribution. This underestimation can be seen in the values of the means and variances for scenarios 2 to 4.

The rounded lower and upper bounds for the correctness check and the risk check are shown. The lower bound is rounded towards the smallest integer greater than or equal to the calculated bound, i.e. the ceiling. The upper bound is rounded towards the largest integer less than or equal to the calculated bound, i.e. the floor. By this means, the rounding does not influence the results compared to the unrounded bounds, and we do not have to show all bounds for the different scenarios.


For the correctness check, a lower and an upper bound are calculated, in this section denoted by L and U respectively. For the risk check, only an upper bound is calculated. The implied significance level is calculated as follows: 1 − (FY(⌊U⌋) − FY(max(0, ⌈L⌉ − 1))), with Y ∼ Poisson(1/10 · 30). In the same way the implied power values are calculated: 1 − (FY(⌊U⌋) − FY(max(0, ⌈L⌉ − 1))), with Y ∼ Poisson(1/8 · 30). These values are compared with the values for the power determined by simulation. As can be seen, the Poisson value lies in the simulated 95% confidence intervals of the power. Note that we present the rounded bounds, as these are the same for all scenarios based on the almost equal mean and variance.
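A minimal Matlab sketch of this calculation (here the Poisson rate includes the evaluation moment te, and the rounded bounds are those of the two-sided check at te = 10; poisscdf from the Statistics Toolbox is assumed):

% Minimal sketch: implied significance level and power from the Poisson
% distribution, given rounded lower and upper bounds L and U.
nPos = 30;   te = 10;
L = 20;  U = 40;                             % rounded two-sided bounds at te = 10
lam0 = (1/10) * nPos * te;                   % rate under H0 (MTTF = 10)
lam1 = (1/8)  * nPos * te;                   % rate under the alternative (MTTF = 8)
sig = 1 - (poisscdf(U, lam0) - poisscdf(max(0, L - 1), lam0));
pow = 1 - (poisscdf(U, lam1) - poisscdf(max(0, L - 1), lam1));
fprintf('Implied significance level %.4f, implied power %.4f\n', sig, pow);

The resulting values should be close to the implied significance level and power reported for te = 10 in table G.1.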

Table G.1: Verification implementation Matlab (MTTF = 10, b = 1, α = 0.05)

Evaluation moment (te): 2, 5, 10, 20 (for each moment: E[N(m)(te)], Var[N(m)(te)])
Scenario 1: 6, 6 | 15, 15 | 30, 30 | 60, 60
Scenario 2: 5.9897, 5.9878 | 14.9880, 14.9840 | 29.9860, 29.9770 | 59.9810, 59.9630
Scenario 3: 5.9897, 5.9878 | 14.9700, 14.9650 | 29.9480, 29.9390 | 59.8970, 59.8780
Scenario 4: 5.9897, 5.9878 | 14.9850, 14.9754 | 29.9691, 29.9396 | 59.9053, 59.8466

Two sided
Lower bound (round): 2 | 8 | 20 | 45
Upper bound (round): 10 | 22 | 40 | 75
Significance level Poisson: 0.0600 | 0.0508 | 0.0542 | 0.0449
Power MTTF = 8, Poisson: 0.1425 | 0.1921 | 0.3055 | 0.4694
Scenario 1: 0.1431 ± 0.0012 | 0.1923 ± 0.0012 | 0.3053 ± 0.0012 | 0.4698 ± 0.0012
Scenario 2: 0.1429 ± 0.0012 | 0.1914 ± 0.0012 | 0.3037 ± 0.0012 | 0.4686 ± 0.0012
Scenario 3: 0.1428 ± 0.0012 | 0.1927 ± 0.0012 | 0.3049 ± 0.0012 | 0.4699 ± 0.0012
Scenario 4: 0.1416 ± 0.0012 | 0.1914 ± 0.0012 | 0.3050 ± 0.0012 | 0.4689 ± 0.0012

Evaluation moment (te): 2, 5, 10, 15
Single sided
Upper bound (round): 10 | 21 | 39 | 72
Significance level Poisson: 0.0451 | 0.0531 | 0.0648 | 0.0567
Power MTTF = 8, Poisson: 0.1383 | 0.2552 | 0.3629 | 0.6068
Scenario 1: 0.1389 ± 0.0012 | 0.2556 ± 0.0012 | 0.3626 ± 0.0012 | 0.6070 ± 0.0012
Scenario 2: 0.1377 ± 0.0012 | 0.2542 ± 0.0012 | 0.3644 ± 0.0012 | 0.6071 ± 0.0012
Scenario 3: 0.1388 ± 0.0012 | 0.2534 ± 0.0012 | 0.3627 ± 0.0012 | 0.6076 ± 0.0012
Scenario 4: 0.1387 ± 0.0012 | 0.2548 ± 0.0012 | 0.3622 ± 0.0012 | 0.6064 ± 0.0012


H

Performance tables

To present the performance of the hypothesis test, performance tables are constructed. These performance tables are given in this appendix. In subsection 2.4.4, an explanation of their structure is given.

H.1 Scenario 1

H.1.1 Definition of examples

Table H.1: Examples scenario 1

            | Component | Batches            | Related performance tables
            | λ         | c     | lc | tc0   | α = 0.05  | α = 0.10
Example 3.1 | 1/10      | c = 1 | 40 | 0     | Table H.2 | Table H.3
            |           | c = 2 | 30 | 2     |           |
Example 3.2 | 1/10      | c = 1 | 30 | 0     | Table H.4 | Table H.5

H.1.2 Example 3.1

Table H.2: Performance table Example 3.1 (λ = 1/10, α = 0.05)

Evaluation moment (te): 2 | 5 | 10 | 15 | 20
E[N(1)(te)]: 8 | 29 | 64 | 99 | 134
Var[N(1)(te)]: 8 | 29 | 64 | 99 | 134
Simulated mean: 8.0001 ± 0.0410 | 28.9820 ± 0.0483 | 63.9670 ± 0.0832 | 98.9910 ± 0.0712 | 133.9800 ± 0.1132
Simulated variance: 8.0119 ± 0.1036 | 29.0680 ± 0.4320 | 64.1180 ± 0.7819 | 98.6300 ± 1.7071 | 134.3300 ± 1.8680
Correctness check
Lower bound: 2.4564 | 18.4450 | 48.3200 | 79.4990 | 111.3100
Upper bound: 13.5440 | 39.5550 | 79.6800 | 118.5000 | 156.6900
Realised significance level: 0.0471 ± 0.0010 | 0.0507 ± 0.0022 | 0.0527 ± 0.0023 | 0.0499 ± 0.0023 | 0.0517 ± 0.0020
Power λ = 1/5: 0.7270 ± 0.0024 | 0.9946 ± 0.0010 | 1.0000 ± 0.0000 | 1.0000 ± 0.0000 | 1.0000 ± 0.0000
Power λ = 1/8: 0.1392 ± 0.0025 | 0.2901 ± 0.0025 | 0.5146 ± 0.0025 | 0.6767 ± 0.0024 | 0.8001 ± 0.0024
Power λ = 1/12: 0.0466 ± 0.0024 | 0.1224 ± 0.0025 | 0.2588 ± 0.0024 | 0.3777 ± 0.0025 | 0.5017 ± 0.0025
Power λ = 1/15: 0.1029 ± 0.0024 | 0.4395 ± 0.0024 | 0.8151 ± 0.0024 | 0.9476 ± 0.0024 | 0.9879 ± 0.0008
Risk check
Upper bound: 12.6520 | 37.8580 | 77.1590 | 115.3700 | 153.0400
Realised significance level: 0.0636 ± 0.0022 | 0.0604 ± 0.0025 | 0.0498 ± 0.0024 | 0.0534 ± 0.0016 | 0.0487 ± 0.0017
Power λ = 1/5: 0.8075 ± 0.0025 | 0.9978 ± 0.0003 | 1.0000 ± 0.0000 | 1.0000 ± 0.0000 | 1.0000 ± 0.0000
Power λ = 1/8: 0.2093 ± 0.0024 | 0.4085 ± 0.0025 | 0.6014 ± 0.0025 | 0.7688 ± 0.0024 | 0.8625 ± 0.0025


Table H.3: Performance table Example 3.1 (λ = 1/10, α = 0.10)

Evaluation moment (te): 2 | 5 | 10 | 15 | 20
E[N(1)(te)]: 8 | 29 | 64 | 99 | 134
Var[N(1)(te)]: 8 | 29 | 64 | 99 | 134
Simulated mean: 7.9850 ± 0.0209 | 29.0440 ± 0.0491 | 64.0540 ± 0.0460 | 98.9230 ± 0.0716 | 133.9900 ± 0.1006
Simulated variance: 7.9420 ± 0.0597 | 29.0770 ± 0.2799 | 64.5430 ± 0.6123 | 98.3950 ± 1.1482 | 134.2300 ± 2.1017
Correctness check
Lower bound: 3.3480 | 20.1420 | 50.8410 | 82.6340 | 114.9600
Upper bound: 12.6520 | 37.8580 | 77.1590 | 115.3700 | 153.0400
Realised significance level: 0.1040 ± 0.0024 | 0.1140 ± 0.0025 | 0.0920 ± 0.0024 | 0.0950 ± 0.0024 | 0.0920 ± 0.0023
Power λ = 1/5: 0.8080 ± 0.0024 | 0.9980 ± 0.0004 | 1.0000 ± 0.0000 | 1.0000 ± 0.0000 | 1.0000 ± 0.0000
Power λ = 1/8: 0.2210 ± 0.0025 | 0.4095 ± 0.0024 | 0.6039 ± 0.0024 | 0.7704 ± 0.0025 | 0.8619 ± 0.0023
Power λ = 1/12: 0.1210 ± 0.0025 | 0.2368 ± 0.0025 | 0.3562 ± 0.0024 | 0.5083 ± 0.0025 | 0.6123 ± 0.0025
Power λ = 1/15: 0.2250 ± 0.0024 | 0.6182 ± 0.0025 | 0.8839 ± 0.0025 | 0.9754 ± 0.0017 | 0.9940 ± 0.0005
Risk check
Upper bound: 11.6250 | 35.9010 | 74.2520 | 111.7500 | 148.8400
Realised significance level: 0.1120 ± 0.0025 | 0.1150 ± 0.0024 | 0.0980 ± 0.0024 | 0.1040 ± 0.0025 | 0.1070 ± 0.0025
Power λ = 1/5: 0.8730 ± 0.0024 | 0.9992 ± 0.0004 | 1.0000 ± 0.0000 | 1.0000 ± 0.0000 | 1.0000 ± 0.0000
Power λ = 1/8: 0.3030 ± 0.0025 | 0.5379 ± 0.0025 | 0.7288 ± 0.0025 | 0.8649 ± 0.0025 | 0.9299 ± 0.0024

H.1.3 Example 3.2

Table H.4: Performance table Example 3.2 (λ = 1/10, α = 0.05)

Evaluation moment (te): 2 | 5 | 10 | 15 | 20
E[N(1)(te)]: 6 | 15 | 30 | 45 | 60
Var[N(1)(te)]: 6 | 15 | 30 | 45 | 60
Simulated mean: 6.0059 ± 0.0271 | 14.9900 ± 0.0247 | 30.0090 ± 0.0406 | 45.0720 ± 0.0748 | 60.0030 ± 0.0744
Simulated variance: 6.0583 ± 0.0687 | 15.1320 ± 0.1866 | 29.6860 ± 0.3478 | 45.2440 ± 0.4922 | 59.5870 ± 0.6111
Correctness check
Lower bound: 1.1991 | 7.4091 | 19.2650 | 31.8520 | 44.8180
Upper bound: 10.8010 | 22.5910 | 40.7350 | 58.1480 | 75.1820
Realised significance level: 0.0604 ± 0.0014 | 0.0516 ± 0.0023 | 0.0538 ± 0.0024 | 0.0443 ± 0.0017 | 0.0448 ± 0.0012
Power λ = 1/5: 0.6536 ± 0.0025 | 0.9193 ± 0.0018 | 0.9959 ± 0.0004 | 0.9998 ± 0.0001 | 1.0000 ± 0.0000
Power λ = 1/8: 0.1428 ± 0.0024 | 0.1940 ± 0.0024 | 0.3045 ± 0.0025 | 0.3731 ± 0.0025 | 0.4687 ± 0.0025
Power λ = 1/12: 0.0529 ± 0.0024 | 0.0732 ± 0.0018 | 0.1357 ± 0.0025 | 0.1646 ± 0.0025 | 0.2199 ± 0.0025
Power λ = 1/15: 0.0944 ± 0.0024 | 0.2189 ± 0.0024 | 0.4711 ± 0.0024 | 0.6206 ± 0.0021 | 0.7671 ± 0.0025
Risk check
Upper bound: 10.0290 | 21.3700 | 39.0090 | 56.0340 | 72.7410
Realised significance level: 0.0421 ± 0.0012 | 0.0552 ± 0.0022 | 0.0454 ± 0.0024 | 0.0468 ± 0.0019 | 0.0580 ± 0.0024
Power λ = 1/5: 0.6531 ± 0.0025 | 0.9458 ± 0.0025 | 0.9977 ± 0.0003 | 1.0000 ± 0.0001 | 1.0000 ± 0.0000
Power λ = 1/8: 0.1395 ± 0.0025 | 0.2559 ± 0.0024 | 0.3632 ± 0.0025 | 0.4787 ± 0.0025 | 0.6056 ± 0.0024

Table H.5: Performance table Example 3.2 (λ = 1/10, α = 0.10)

Evaluation moment (t_e)          2   5   10   15   20
E[N^(1)(t_e)]                    6   15   30   45   60
Var[N^(1)(t_e)]                  6   15   30   45   60
Simulated mean                   5.9904 ± 0.0174   14.9980 ± 0.0349   30.0130 ± 0.0544   45.0030 ± 0.0538   59.9520 ± 0.0784
Simulated variance               5.9413 ± 0.0785   15.0320 ± 0.1845   29.8690 ± 0.3324   44.3520 ± 0.5191   59.4160 ± 0.5705
Correctness check
  Lower bound                    1.9709   8.6295   20.9910   33.9660   47.2590
  Upper bound                    10.0290   21.3700   39.0090   56.0340   72.7410
  Realised significance level    0.0587 ± 0.0024   0.0908 ± 0.0025   0.0814 ± 0.0023   0.0858 ± 0.0024   0.1043 ± 0.0017
  Power λ = 1/5                  0.6534 ± 0.0025   0.9467 ± 0.0022   0.9975 ± 0.0004   0.9999 ± 0.0001   1.0000 ± 0.0000
  Power λ = 1/8                  0.1424 ± 0.0025   0.2592 ± 0.0025   0.3641 ± 0.0024   0.4801 ± 0.0025   0.6072 ± 0.0025
  Power λ = 1/12                 0.0555 ± 0.0023   0.1351 ± 0.0025   0.1900 ± 0.0024   0.2649 ± 0.0024   0.3689 ± 0.0024
  Power λ = 1/15                 0.0952 ± 0.0023   0.3350 ± 0.0025   0.5568 ± 0.0024   0.7450 ± 0.0024   0.8813 ± 0.0021
Risk check
  Upper bound                    9.1391   19.9630   37.0190   53.5970   69.9270
  Realised significance level    0.0847 ± 0.0025   0.1219 ± 0.0024   0.0882 ± 0.0024   0.1037 ± 0.0024   0.1111 ± 0.0025
  Power λ = 1/5                  0.7559 ± 0.0025   0.9784 ± 0.0013   0.9989 ± 0.0004   1.0000 ± 0.0000   1.0000 ± 0.0000
  Power λ = 1/8                  0.2216 ± 0.0025   0.4154 ± 0.0025   0.4902 ± 0.0024   0.6363 ± 0.0024   0.7325 ± 0.0020


H.2 Scenario 2

H.2.1 Definition of examples

Table H.6: Examples scenario 2

             Component   Batches              Related performance tables
             MTTF   b    c       l_c   t_c0   α = 0.05    α = 0.10
Example 4.1  10     2    c = 1   40    0      Table H.7   Table H.8
                         c = 2   30    2
Example 4.2  10     2    c = 1   30    0      Table H.9   Table H.10

H.2.2 Example 4.1

Table H.7: Performance table Example 4.1 (MTTF = 10, b = 2, α = 0.05)

Evaluation moment (t_e)          2   5   10   15   20
E[N^(2)(t_e)]                    1.2431   9.4420   37.8854   73.2456   108.6410
Var[N^(2)(t_e)]                  1.2175   8.4778   25.7169   36.3332   44.6157
Simulated mean                   1.2411 ± 0.0104   9.4377 ± 0.0312   37.8637 ± 0.0473   73.2358 ± 0.0790   108.6619 ± 0.0690
Simulated variance               1.2261 ± 0.0202   8.4910 ± 0.1218   25.8403 ± 0.3415   36.6416 ± 0.4856   44.6972 ± 0.7352
Correctness check
  Lower bound                    -0.9195   3.7353   27.9461   61.4315   95.5494
  Upper bound                    3.4058   15.1488   47.8248   85.0597   121.7326
  Realised significance level    0.0359 ± 0.0016   0.0377 ± 0.0014   0.0488 ± 0.0018   0.0472 ± 0.0021   0.0518 ± 0.0024
  Power MTTF = 5, b = 2          0.7201 ± 0.0025   0.9999 ± 0.0001   1.0000 ± 0.0000   1.0000 ± 0.0000   1.0000 ± 0.0000
  Power MTTF = 8, b = 2          0.1272 ± 0.0024   0.3586 ± 0.0025   0.8727 ± 0.0024   0.9776 ± 0.0008   0.9981 ± 0.0005
  Power MTTF = 12, b = 2         0.0109 ± 0.0007   0.0934 ± 0.0025   0.4784 ± 0.0025   0.8171 ± 0.0025   0.9288 ± 0.0023
  Power MTTF = 15, b = 2         0.0021 ± 0.0002   0.3683 ± 0.0025   0.9839 ± 0.0014   1.0000 ± 0.0000   1.0000 ± 0.0000
  Power MTTF = 10, b = 1         0.9588 ± 0.0012   0.9970 ± 0.0005   0.9827 ± 0.0011   0.9165 ± 0.0025   0.8586 ± 0.0024
  Power MTTF = 10, b = 3         0.0001 ± 0.0001   0.4288 ± 0.0025   0.2387 ± 0.0025   0.0289 ± 0.0010   0.0520 ± 0.0019
Risk check
  Upper bound                    3.0581   14.2313   46.2268   83.1603   119.6278
  Realised significance level    0.0355 ± 0.0021   0.0480 ± 0.0017   0.0471 ± 0.0025   0.0440 ± 0.0024   0.0530 ± 0.0024
  Power MTTF = 5, b = 2          0.7194 ± 0.0025   1.0000 ± 0.0000   1.0000 ± 0.0000   1.0000 ± 0.0000   1.0000 ± 0.0000
  Power MTTF = 8, b = 2          0.1279 ± 0.0024   0.4655 ± 0.0025   0.9073 ± 0.0024   0.9902 ± 0.0008   0.9994 ± 0.0002
  Power MTTF = 10, b = 1         0.9577 ± 0.0024   0.9984 ± 0.0004   0.9882 ± 0.0011   0.9423 ± 0.0013   0.8957 ± 0.0024
  Power MTTF = 10, b = 3         0.0001 ± 0.0001   0.0000 ± 0.0000   0.0001 ± 0.0001   0.0008 ± 0.0003   0.0007 ± 0.0003

Table H.8: Performance table Example 4.1 (MTTF = 10, b = 2, α = 0.10)

Evaluation moment (t_e)          2   5   10   15   20
E[N^(2)(t_e)]                    1.2431   9.4420   37.8854   73.2456   108.6410
Var[N^(2)(t_e)]                  1.2175   8.4778   25.7169   36.3332   44.6157
Simulated mean                   1.2377 ± 0.0104   9.4528 ± 0.0271   37.9139 ± 0.0276   73.3039 ± 0.0472   108.6621 ± 0.0754
Simulated variance               1.2314 ± 0.0197   8.4869 ± 0.1088   25.7380 ± 0.3188   37.1069 ± 0.4492   44.8326 ± 0.6469
Correctness check
  Lower bound                    -0.5718   4.6528   29.5441   63.3309   97.6542
  Upper bound                    3.0581   14.2313   46.2268   83.1603   119.6278
  Realised significance level    0.0377 ± 0.0021   0.0819 ± 0.0023   0.0935 ± 0.0024   0.0997 ± 0.0024   0.0998 ± 0.0024
  Power MTTF = 5, b = 2          0.7200 ± 0.0024   1.0000 ± 0.0000   1.0000 ± 0.0000   1.0000 ± 0.0000   1.0000 ± 0.0000
  Power MTTF = 8, b = 2          0.1261 ± 0.0024   0.4669 ± 0.0025   0.9069 ± 0.0024   0.9903 ± 0.0010   0.9994 ± 0.0003
  Power MTTF = 12, b = 2         0.0097 ± 0.0013   0.1964 ± 0.0025   0.6472 ± 0.0025   0.8927 ± 0.0024   0.9634 ± 0.0014
  Power MTTF = 15, b = 2         0.0020 ± 0.0005   0.5652 ± 0.0025   0.9952 ± 0.0005   1.0000 ± 0.0000   1.0000 ± 0.0000
  Power MTTF = 10, b = 1         0.9561 ± 0.0020   0.9983 ± 0.0004   0.9894 ± 0.0008   0.9417 ± 0.0025   0.8964 ± 0.0025
  Power MTTF = 10, b = 3         0.0001 ± 0.0001   0.6290 ± 0.0025   0.4025 ± 0.0025   0.0791 ± 0.0023   0.1146 ± 0.0025
Risk check
  Upper bound                    2.6572   13.1735   44.3844   80.9704   117.2011
  Realised significance level    0.1284 ± 0.0025   0.0865 ± 0.0024   0.0987 ± 0.0025   0.1131 ± 0.0024   0.0959 ± 0.0025
  Power MTTF = 5, b = 2          0.8695 ± 0.0024   1.0000 ± 0.0000   1.0000 ± 0.0000   1.0000 ± 0.0000   1.0000 ± 0.0000
  Power MTTF = 8, b = 2          0.3039 ± 0.0024   0.5820 ± 0.0024   0.9544 ± 0.0019   0.9976 ± 0.0005   0.9999 ± 0.0001
  Power MTTF = 10, b = 1         0.9865 ± 0.0016   0.9992 ± 0.0002   0.9942 ± 0.0006   0.9719 ± 0.0019   0.9246 ± 0.0022
  Power MTTF = 10, b = 3         0.0019 ± 0.0005   0.0000 ± 0.0001   0.0010 ± 0.0005   0.0073 ± 0.0009   0.0025 ± 0.0006


H.2.3 Example 4.2

Table H.9: Performance table Example 4.2 (MTTF = 10, b = 2, α = 0.05)

Evaluation moment (t_e)          2   5   10   15   20
E[N^(2)(t_e)]                    0.9324   5.5281   18.7210   34.0230   49.1350
Var[N^(2)(t_e)]                  0.9132   4.8754   12.1050   16.1650   19.7790
Simulated mean                   0.9382 ± 0.0063   5.5195 ± 0.0270   18.7370 ± 0.0498   34.0220 ± 0.0358   49.1640 ± 0.0496
Simulated variance               0.9187 ± 0.0161   4.9134 ± 0.0527   12.0980 ± 0.2160   16.0660 ± 0.1976   19.7970 ± 0.2440
Correctness check
  Lower bound                    -0.9406   1.2004   11.9020   26.1430   40.4180
  Upper bound                    2.8053   9.8558   25.5400   41.9030   57.8520
  Realised significance level    0.0677 ± 0.0024   0.0654 ± 0.0016   0.0431 ± 0.0022   0.0615 ± 0.0024   0.0549 ± 0.0018
  Power MTTF = 5, b = 2          0.7129 ± 0.0025   0.9974 ± 0.0005   1.0000 ± 0.0000   1.0000 ± 0.0000   1.0000 ± 0.0000
  Power MTTF = 8, b = 2          0.1735 ± 0.0024   0.3190 ± 0.0024   0.5835 ± 0.0025   0.8131 ± 0.0024   0.9133 ± 0.0017
  Power MTTF = 12, b = 2         0.0275 ± 0.0018   0.0948 ± 0.0024   0.2333 ± 0.0024   0.5217 ± 0.0024   0.6370 ± 0.0025
  Power MTTF = 15, b = 2         0.0086 ± 0.0007   0.2691 ± 0.0025   0.7828 ± 0.0025   0.9857 ± 0.0011   0.9983 ± 0.0005
  Power MTTF = 10, b = 1         0.9379 ± 0.0024   0.9299 ± 0.0023   0.7897 ± 0.0025   0.6935 ± 0.0025   0.6220 ± 0.0024
  Power MTTF = 10, b = 3         0.0006 ± 0.0003   0.2654 ± 0.0025   0.0660 ± 0.0025   0.0157 ± 0.0014   0.0276 ± 0.0009
Risk check
  Upper bound                    2.5042   9.1600   24.4440   40.6360   56.4500
  Realised significance level    0.0674 ± 0.0023   0.0436 ± 0.0023   0.0512 ± 0.0015   0.0551 ± 0.0023   0.0503 ± 0.0024
  Power MTTF = 5, b = 2          0.7131 ± 0.0025   0.9971 ± 0.0006   1.0000 ± 0.0000   1.0000 ± 0.0000   1.0000 ± 0.0000
  Power MTTF = 8, b = 2          0.1751 ± 0.0024   0.3185 ± 0.0025   0.6831 ± 0.0025   0.8704 ± 0.0023   0.9431 ± 0.0023
  Power MTTF = 10, b = 1         0.9378 ± 0.0017   0.9295 ± 0.0023   0.8442 ± 0.0024   0.7444 ± 0.0024   0.6665 ± 0.0025
  Power MTTF = 10, b = 3         0.0007 ± 0.0002   0.0001 ± 0.0001   0.0017 ± 0.0004   0.0033 ± 0.0006   0.0022 ± 0.0004

Table H.10: Performance table Example 4.2 (MTTF = 10, b = 2, α = 0.10)

Evaluation moment (t_e)          2   5   10   15   20
E[N^(2)(t_e)]                    0.9323   5.5281   18.7207   34.0231   49.1350
Var[N^(2)(t_e)]                  0.9131   4.8754   12.1048   16.1646   19.7788
Simulated mean                   0.9275 ± 0.0049   5.5161 ± 0.0225   18.7080 ± 0.0270   33.9872 ± 0.0490   49.1238 ± 0.0457
Simulated variance               0.9059 ± 0.0145   4.8745 ± 0.0661   12.0800 ± 0.1327   16.1032 ± 0.2199   19.7776 ± 0.2388
Correctness check
  Lower bound                    -0.6395   1.8962   12.9980   27.4100   41.8198
  Upper bound                    2.5042   9.1600   24.4435   40.6363   56.4503
  Realised significance level    0.0653 ± 0.0025   0.0638 ± 0.0023   0.0833 ± 0.0022   0.1040 ± 0.0024   0.0904 ± 0.0024
  Power MTTF = 5, b = 2          0.7109 ± 0.0025   0.9971 ± 0.0005   1.0000 ± 0.0000   1.0000 ± 0.0000   1.0000 ± 0.0000
  Power MTTF = 8, b = 2          0.1742 ± 0.0018   0.3210 ± 0.0025   0.6793 ± 0.0024   0.8699 ± 0.0020   0.9426 ± 0.0022
  Power MTTF = 12, b = 2         0.0283 ± 0.0017   0.0924 ± 0.0023   0.3448 ± 0.0025   0.6235 ± 0.0025   0.7237 ± 0.0024
  Power MTTF = 15, b = 2         0.0084 ± 0.0013   0.2687 ± 0.0024   0.8706 ± 0.0024   0.9926 ± 0.0011   0.9992 ± 0.0004
  Power MTTF = 10, b = 1         0.9394 ± 0.0024   0.9297 ± 0.0025   0.8440 ± 0.0025   0.7484 ± 0.0024   0.6721 ± 0.0025
  Power MTTF = 10, b = 3         0.0007 ± 0.0003   0.2608 ± 0.0025   0.1267 ± 0.0025   0.0383 ± 0.0013   0.0547 ± 0.0020
Risk check
  Upper bound                    2.1570   8.3578   23.1795   39.1756   54.8345
  Realised significance level    0.0670 ± 0.0013   0.0936 ± 0.0024   0.0858 ± 0.0015   0.0872 ± 0.0024   0.1148 ± 0.0023
  Power MTTF = 5, b = 2          0.7174 ± 0.0022   0.9993 ± 0.0004   1.0000 ± 0.0000   1.0000 ± 0.0000   1.0000 ± 0.0000
  Power MTTF = 8, b = 2          0.1784 ± 0.0024   0.4607 ± 0.0024   0.7750 ± 0.0025   0.9170 ± 0.0024   0.9786 ± 0.0020
  Power MTTF = 10, b = 1         0.9382 ± 0.0019   0.9633 ± 0.0020   0.8859 ± 0.0024   0.7912 ± 0.0024   0.7581 ± 0.0024
  Power MTTF = 10, b = 3         0.0007 ± 0.0002   0.0007 ± 0.0004   0.0057 ± 0.0008   0.0081 ± 0.0009   0.0109 ± 0.0009


H.3 Scenario 3

H.3.1 Definition of examples

Table H.11: Examples scenario 3

             Component          Batches              Related performance tables
             MTTF   b   τ^(3)   c       l_c   t_c0   α = 0.05     α = 0.10
Example 5.1  10     2   5       c = 1   40    0      Table H.12   Table H.13
                                c = 2   30    2
Example 5.2  10     2   2       c = 1   30    0      Table H.14   Table H.15

H.3.2 Example 5.1

Table H.12: Performance table Example 5.1 (MTTF = 10, b = 2, τ^(3) = 5, α = 0.05)

Evaluation moment (t_e)          2   5   10   15   20
E[N^(3)(t_e)]                    1.2431   9.4420   22.3410   35.2400   48.1390
Var[N^(3)(t_e)]                  1.2175   8.4778   19.8540   31.2300   42.6060
Simulated mean                   1.2411 ± 0.0104   9.4417 ± 0.0284   22.3690 ± 0.0488   35.2870 ± 0.0233   48.0710 ± 0.0682
Simulated variance               1.2261 ± 0.0202   8.3527 ± 0.1134   19.8670 ± 0.2921   31.5140 ± 0.3408   42.6320 ± 0.6968
Correctness check
  Lower bound                    -0.9195   3.7353   13.6080   24.2870   35.3450
  Upper bound                    3.4058   15.1490   31.0740   46.1930   60.9320
  Realised significance level    0.0359 ± 0.0016   0.0353 ± 0.0014   0.0430 ± 0.0024   0.0491 ± 0.0024   0.0551 ± 0.0019
  Power MTTF = 5, b = 2          0.7214 ± 0.0025   1.0000 ± 0.0000   1.0000 ± 0.0000   1.0000 ± 0.0000   1.0000 ± 0.0000
  Power MTTF = 8, b = 2          0.1281 ± 0.0024   0.3564 ± 0.0024   0.6601 ± 0.0025   0.8471 ± 0.0025   0.9478 ± 0.0024
  Power MTTF = 12, b = 2         0.0111 ± 0.0009   0.0945 ± 0.0025   0.2820 ± 0.0024   0.4758 ± 0.0025   0.6123 ± 0.0024
  Power MTTF = 15, b = 2         0.0025 ± 0.0005   0.3682 ± 0.0025   0.8507 ± 0.0024   0.9778 ± 0.0014   0.9969 ± 0.0005
  Power MTTF = 10, b = 1         0.9583 ± 0.0023   0.9971 ± 0.0006   1.0000 ± 0.0000   1.0000 ± 0.0000   1.0000 ± 0.0000
  Power MTTF = 10, b = 3         0.0001 ± 0.0001   0.4287 ± 0.0025   0.8753 ± 0.0023   0.9822 ± 0.0009   0.9978 ± 0.0005
Risk check
  Upper bound                    3.0581   14.2310   29.6700   44.4320   58.8750
  Realised significance level    0.0370 ± 0.0015   0.0483 ± 0.0021   0.0595 ± 0.0021   0.0501 ± 0.0021   0.0595 ± 0.0024
  Power MTTF = 5, b = 2          0.7222 ± 0.0024   1.0000 ± 0.0000   1.0000 ± 0.0000   1.0000 ± 0.0000   1.0000 ± 0.0000
  Power MTTF = 8, b = 2          0.1306 ± 0.0024   0.4681 ± 0.0025   0.7904 ± 0.0024   0.9096 ± 0.0018   0.9689 ± 0.0016
  Power MTTF = 10, b = 1         0.9560 ± 0.0017   0.9983 ± 0.0003   1.0000 ± 0.0000   1.0000 ± 0.0000   1.0000 ± 0.0000
  Power MTTF = 10, b = 3         0.0001 ± 0.0001   0.0000 ± 0.0000   0.0000 ± 0.0000   0.0000 ± 0.0000   0.0000 ± 0.0000

Table H.13: Performance table Example 5.1 (MTTF = 10, b = 2, τ^(3) = 5, α = 0.10)

Evaluation moment (t_e)          2   5   10   15   20
E[N^(3)(t_e)]                    1.2431   9.4420   22.3409   35.2398   48.1387
Var[N^(3)(t_e)]                  1.2175   8.4778   19.8538   31.2298   42.6058
Simulated mean                   1.2415 ± 0.0084   9.4606 ± 0.0293   22.3477 ± 0.0363   35.2403 ± 0.0450   48.1819 ± 0.0637
Simulated variance               1.2234 ± 0.0297   8.5162 ± 0.1328   19.8691 ± 0.2009   31.3314 ± 0.3417   42.3154 ± 0.4683
Correctness check
  Lower bound                    -0.5718   4.6528   15.0118   26.0477   37.4022
  Upper bound                    3.0581   14.2313   29.6700   44.4318   58.8751
  Realised significance level    0.0363 ± 0.0025   0.0828 ± 0.0024   0.1163 ± 0.0020   0.1064 ± 0.0024   0.1054 ± 0.0025
  Power MTTF = 5, b = 2          0.7214 ± 0.0024   1.0000 ± 0.0000   1.0000 ± 0.0000   1.0000 ± 0.0000   1.0000 ± 0.0000
  Power MTTF = 8, b = 2          0.1266 ± 0.0024   0.4671 ± 0.0025   0.7910 ± 0.0025   0.9087 ± 0.0024   0.9691 ± 0.0022
  Power MTTF = 12, b = 2         0.0110 ± 0.0007   0.1974 ± 0.0025   0.4826 ± 0.0024   0.6414 ± 0.0025   0.7366 ± 0.0025
  Power MTTF = 15, b = 2         0.0021 ± 0.0004   0.5645 ± 0.0025   0.9445 ± 0.0023   0.9930 ± 0.0011   0.9989 ± 0.0003
  Power MTTF = 10, b = 1         0.9571 ± 0.0018   0.9984 ± 0.0002   1.0000 ± 0.0000   1.0000 ± 0.0000   1.0000 ± 0.0000
  Power MTTF = 10, b = 3         0.0001 ± 0.0001   0.6309 ± 0.0025   0.9582 ± 0.0023   0.9941 ± 0.0008   0.9993 ± 0.0003
Risk check
  Upper bound                    2.6572   13.1735   28.0512   42.4016   56.5038
  Realised significance level    0.1300 ± 0.0024   0.0890 ± 0.0024   0.0877 ± 0.0024   0.0990 ± 0.0025   0.1010 ± 0.0023
  Power MTTF = 5, b = 2          0.8690 ± 0.0025   1.0000 ± 0.0001   1.0000 ± 0.0000   1.0000 ± 0.0000   1.0000 ± 0.0000
  Power MTTF = 8, b = 2          0.3054 ± 0.0025   0.5826 ± 0.0025   0.8423 ± 0.0024   0.9513 ± 0.0017   0.9837 ± 0.0015
  Power MTTF = 10, b = 1         0.9860 ± 0.0007   0.9994 ± 0.0003   1.0000 ± 0.0000   1.0000 ± 0.0000   1.0000 ± 0.0000
  Power MTTF = 10, b = 3         0.0015 ± 0.0005   0.0000 ± 0.0000   0.0000 ± 0.0000   0.0000 ± 0.0000   0.0000 ± 0.0000


H.3.3 Example 5.2

Table H.14: Performance table Example 5.2 (MTTF = 10, b = 2, τ^(3) = 2, α = 0.05)

Evaluation moment (t_e)          2   5   10   15   20
E[N^(3)(t_e)]                    0.9324   2.0995   4.6617   6.7613   9.3235
Var[N^(3)(t_e)]                  0.9132   2.0599   4.5657   6.6256   9.1315
Simulated mean                   0.9399 ± 0.0091   2.0984 ± 0.0161   4.6442 ± 0.0162   6.7776 ± 0.0182   9.3208 ± 0.0461
Simulated variance               0.9195 ± 0.0172   2.0776 ± 0.0267   4.5543 ± 0.0571   6.6255 ± 0.0737   9.1576 ± 0.1470
Correctness check
  Lower bound                    -0.9406   -0.7135   0.4738   1.7163   3.4008
  Upper bound                    2.8053   4.9125   8.8497   11.8060   15.2460
  Realised significance level    0.0673 ± 0.0025   0.0609 ± 0.0022   0.0547 ± 0.0018   0.0517 ± 0.0020   0.0437 ± 0.0023
  Power MTTF = 5, b = 2          0.7122 ± 0.0025   0.9176 ± 0.0015   0.9948 ± 0.0006   0.9997 ± 0.0002   1.0000 ± 0.0001
  Power MTTF = 8, b = 2          0.1764 ± 0.0018   0.2280 ± 0.0024   0.3023 ± 0.0024   0.3618 ± 0.0025   0.3810 ± 0.0025
  Power MTTF = 12, b = 2         0.0269 ± 0.0016   0.0155 ± 0.0017   0.0438 ± 0.0020   0.0532 ± 0.0025   0.1109 ± 0.0023
  Power MTTF = 15, b = 2         0.0088 ± 0.0009   0.0027 ± 0.0006   0.1259 ± 0.0024   0.1971 ± 0.0024   0.3994 ± 0.0025
  Power MTTF = 10, b = 1         0.9396 ± 0.0014   0.9994 ± 0.0002   1.0000 ± 0.0000   1.0000 ± 0.0000   1.0000 ± 0.0000
  Power MTTF = 10, b = 3         0.0008 ± 0.0004   0.0000 ± 0.0000   0.4271 ± 0.0025   0.6566 ± 0.0025   0.9075 ± 0.0024
Risk check
  Upper bound                    2.5042   4.4603   8.1764   10.9950   14.2940
  Realised significance level    0.0664 ± 0.0025   0.0599 ± 0.0024   0.0484 ± 0.0019   0.0800 ± 0.0024   0.0518 ± 0.0018
  Power MTTF = 5, b = 2          0.7114 ± 0.0025   0.9167 ± 0.0024   0.9945 ± 0.0004   0.9999 ± 0.0001   1.0000 ± 0.0000
  Power MTTF = 8, b = 2          0.1776 ± 0.0024   0.2293 ± 0.0024   0.3020 ± 0.0024   0.4836 ± 0.0025   0.4811 ± 0.0025
  Power MTTF = 10, b = 1         0.9367 ± 0.0023   0.9993 ± 0.0002   1.0000 ± 0.0000   1.0000 ± 0.0000   1.0000 ± 0.0000
  Power MTTF = 10, b = 3         0.0008 ± 0.0002   0.0001 ± 0.0001   0.0000 ± 0.0000   0.0000 ± 0.0000   0.0000 ± 0.0000

Table H.15: Performance table Example 5.2 (MTTF = 10, b = 2, τ^(3) = 2, α = 0.10)

Evaluation moment (t_e)          2   5   10   15   20
E[N^(3)(t_e)]                    0.9323   2.0995   4.6617   6.7613   9.3235
Var[N^(3)(t_e)]                  0.9131   2.0599   4.5657   6.6256   9.1315
Simulated mean                   0.9312 ± 0.0067   2.0912 ± 0.0147   4.6514 ± 0.0205   6.7693 ± 0.0156   9.3324 ± 0.0276
Simulated variance               0.9034 ± 0.0129   2.0759 ± 0.0251   4.5288 ± 0.0733   6.6232 ± 0.0736   9.1184 ± 0.1182
Correctness check
  Lower bound                    -0.6395   -0.2612   1.1471   2.5274   4.3530
  Upper bound                    2.5042   4.4603   8.1764   10.9952   14.2940
  Realised significance level    0.0651 ± 0.0015   0.0602 ± 0.0025   0.0959 ± 0.0025   0.1139 ± 0.0024   0.0935 ± 0.0024
  Power MTTF = 5, b = 2          0.7129 ± 0.0025   0.9185 ± 0.0024   0.9944 ± 0.0010   0.9998 ± 0.0001   1.0000 ± 0.0000
  Power MTTF = 8, b = 2          0.1769 ± 0.0025   0.2272 ± 0.0025   0.3067 ± 0.0024   0.4821 ± 0.0023   0.4832 ± 0.0025
  Power MTTF = 12, b = 2         0.0269 ± 0.0014   0.0163 ± 0.0015   0.1676 ± 0.0024   0.1602 ± 0.0025   0.2246 ± 0.0025
  Power MTTF = 15, b = 2         0.0087 ± 0.0009   0.0026 ± 0.0004   0.3810 ± 0.0025   0.4194 ± 0.0025   0.5943 ± 0.0025
  Power MTTF = 10, b = 1         0.9389 ± 0.0021   0.9991 ± 0.0004   1.0000 ± 0.0000   1.0000 ± 0.0000   1.0000 ± 0.0000
  Power MTTF = 10, b = 3         0.0006 ± 0.0001   0.0000 ± 0.0001   0.7913 ± 0.0025   0.8775 ± 0.0023   0.9688 ± 0.0024
Risk check
  Upper bound                    2.1570   3.9388   7.4001   10.0600   13.1961
  Realised significance level    0.0650 ± 0.0025   0.1584 ± 0.0025   0.0999 ± 0.0024   0.0813 ± 0.0025   0.0895 ± 0.0024
  Power MTTF = 5, b = 2          0.7136 ± 0.0025   0.9677 ± 0.0018   0.9977 ± 0.0005   0.9998 ± 0.0002   1.0000 ± 0.0000
  Power MTTF = 8, b = 2          0.1777 ± 0.0024   0.4122 ± 0.0025   0.4395 ± 0.0025   0.4808 ± 0.0024   0.5900 ± 0.0024
  Power MTTF = 10, b = 1         0.9374 ± 0.0023   0.9997 ± 0.0001   1.0000 ± 0.0000   1.0000 ± 0.0000   1.0000 ± 0.0000
  Power MTTF = 10, b = 3         0.0005 ± 0.0002   0.0006 ± 0.0002   0.0000 ± 0.0000   0.0000 ± 0.0000   0.0000 ± 0.0000


H.4 Scenario 4

H.4.1 Definition of examples

Table H.16: Examples scenario 4

             Component          Batches              Related performance tables
             MTTF   b   τ^(4)   c       l_c   t_c0   α = 0.05     α = 0.10
Example 6.1  10     2   3       c = 1   40    0      Table H.17   Table H.18
                                c = 2   30    2
Example 6.2  10     2   2       c = 1   30    0      Table H.19   Table H.20

H.4.2 Example 6.1

Table H.17: Performance table Example 6.1 (MTTF = 10, b = 2, τ^(4) = 3, α = 0.05)

Evaluation moment (t_e)          2   5   10   15   20
E[N^(4)(t_e)]                    1.2431   6.1237   13.8590   22.4660   30.5330
Var[N^(4)(t_e)]                  1.2175   5.9648   13.6090   21.7400   29.7230
Simulated mean                   1.2315 ± 0.0125   6.1171 ± 0.0257   13.8540 ± 0.0169   22.4930 ± 0.0528   30.5530 ± 0.0495
Simulated variance               1.1926 ± 0.0279   5.9308 ± 0.0783   13.5720 ± 0.2310   21.8740 ± 0.3711   30.0480 ± 0.5866
Correctness check
  Lower bound                    -0.9195   1.3369   6.6284   13.3270   19.8470
  Upper bound                    3.4058   10.9110   21.0890   31.6040   41.2190
  Realised significance level    0.0343 ± 0.0023   0.0590 ± 0.0013   0.0401 ± 0.0016   0.0540 ± 0.0022   0.0444 ± 0.0020
  Power MTTF = 5, b = 2          0.7192 ± 0.0025   0.9990 ± 0.0004   1.0000 ± 0.0000   1.0000 ± 0.0000   1.0000 ± 0.0000
  Power MTTF = 8, b = 2          0.1286 ± 0.0024   0.3531 ± 0.0024   0.4907 ± 0.0025   0.7101 ± 0.0024   0.8046 ± 0.0024
  Power MTTF = 12, b = 2         0.0113 ± 0.0009   0.0770 ± 0.0021   0.1537 ± 0.0025   0.2972 ± 0.0025   0.3618 ± 0.0025
  Power MTTF = 15, b = 2         0.0025 ± 0.0007   0.2408 ± 0.0025   0.5768 ± 0.0025   0.8600 ± 0.0025   0.9373 ± 0.0021
  Power MTTF = 10, b = 1         0.9595 ± 0.0021   1.0000 ± 0.0000   1.0000 ± 0.0000   1.0000 ± 0.0000   1.0000 ± 0.0000
  Power MTTF = 10, b = 3         0.0001 ± 0.0001   0.5332 ± 0.0025   0.9224 ± 0.0022   0.9961 ± 0.0004   0.9996 ± 0.0003
Risk check
  Upper bound                    3.0581   10.1410   19.9270   30.1350   39.5010
  Realised significance level    0.0360 ± 0.0017   0.0472 ± 0.0021   0.0690 ± 0.0024   0.0488 ± 0.0023   0.0542 ± 0.0020
  Power MTTF = 5, b = 2          0.7219 ± 0.0025   0.9993 ± 0.0003   1.0000 ± 0.0000   1.0000 ± 0.0000   1.0000 ± 0.0000
  Power MTTF = 8, b = 2          0.1278 ± 0.0025   0.3531 ± 0.0025   0.6619 ± 0.0025   0.7693 ± 0.0024   0.8822 ± 0.0024
  Power MTTF = 10, b = 1         0.9582 ± 0.0015   0.9999 ± 0.0001   1.0000 ± 0.0000   1.0000 ± 0.0000   1.0000 ± 0.0000
  Power MTTF = 10, b = 3         0.0001 ± 0.0001   0.0000 ± 0.0000   0.0000 ± 0.0000   0.0000 ± 0.0000   0.0000 ± 0.0000

Table H.18: Performance table Example 6.1 (MTTF = 10, b = 2, τ^(4) = 3, α = 0.10)

Evaluation moment (t_e)          2   5   10   15   20
E[N^(4)(t_e)]                    1.2431   6.1237   13.8590   22.4660   30.5330
Var[N^(4)(t_e)]                  1.2175   5.9648   13.6090   21.7400   29.7230
Simulated mean                   1.2450 ± 0.0055   6.1299 ± 0.0135   13.8480 ± 0.0207   22.4480 ± 0.0285   30.5520 ± 0.0412
Simulated variance               1.2159 ± 0.0160   5.9302 ± 0.0633   13.4940 ± 0.1369   21.7700 ± 0.2533   29.8020 ± 0.2755
Correctness check
  Lower bound                    -0.5718   2.1065   7.7908   14.7960   21.5650
  Upper bound                    3.0581   10.1410   19.9270   30.1350   39.5010
  Realised significance level    0.0359 ± 0.0020   0.0991 ± 0.0025   0.1006 ± 0.0025   0.0842 ± 0.0024   0.0988 ± 0.0024
  Power MTTF = 5, b = 2          0.7225 ± 0.0024   0.9993 ± 0.0002   1.0000 ± 0.0000   1.0000 ± 0.0000   1.0000 ± 0.0000
  Power MTTF = 8, b = 2          0.1275 ± 0.0024   0.3578 ± 0.0025   0.6620 ± 0.0025   0.7672 ± 0.0025   0.8825 ± 0.0024
  Power MTTF = 12, b = 2         0.0101 ± 0.0013   0.2015 ± 0.0024   0.2530 ± 0.0024   0.3949 ± 0.0025   0.5316 ± 0.0025
  Power MTTF = 15, b = 2         0.0024 ± 0.0004   0.4837 ± 0.0025   0.7166 ± 0.0024   0.9137 ± 0.0021   0.9769 ± 0.0018
  Power MTTF = 10, b = 1         0.9581 ± 0.0020   0.9999 ± 0.0001   1.0000 ± 0.0000   1.0000 ± 0.0000   1.0000 ± 0.0000
  Power MTTF = 10, b = 3         0.0001 ± 0.0001   0.7906 ± 0.0021   0.9686 ± 0.0012   0.9986 ± 0.0004   1.0000 ± 0.0001
Risk check
  Upper bound                    2.6572   9.2536   18.5870   28.4410   37.5200
  Realised significance level    0.1272 ± 0.0019   0.0905 ± 0.0024   0.1073 ± 0.0024   0.1024 ± 0.0025   0.1038 ± 0.0024
  Power MTTF = 5, b = 2          0.8699 ± 0.0025   0.9997 ± 0.0001   1.0000 ± 0.0000   1.0000 ± 0.0000   1.0000 ± 0.0000
  Power MTTF = 8, b = 2          0.3044 ± 0.0025   0.4789 ± 0.0025   0.7404 ± 0.0024   0.8632 ± 0.0024   0.9323 ± 0.0024
  Power MTTF = 10, b = 1         0.9862 ± 0.0010   1.0000 ± 0.0000   1.0000 ± 0.0000   1.0000 ± 0.0000   1.0000 ± 0.0000
  Power MTTF = 10, b = 3         0.0014 ± 0.0003   0.0000 ± 0.0000   0.0000 ± 0.0000   0.0000 ± 0.0000   0.0000 ± 0.0000


H.4.3 Example 6.2

Table H.19: Performance table Example 6.2 (MTTF = 10, b = 2, τ^(4) = 2, α = 0.05)

Evaluation moment (t_e)          2   5   10   15   20
E[N^(4)(t_e)]                    0.9324   2.1150   4.6648   6.8140   9.3327
Var[N^(4)(t_e)]                  0.9132   2.1030   4.5739   6.7707   9.1622
Simulated mean                   0.9346 ± 0.0081   2.1184 ± 0.0134   4.6713 ± 0.0199   6.8241 ± 0.0277   9.3334 ± 0.0307
Simulated variance               0.9097 ± 0.0098   2.1029 ± 0.0435   4.6166 ± 0.0425   6.8178 ± 0.0978   9.1540 ± 0.0588
Correctness check
  Lower bound                    -0.9406   -0.7273   0.4731   1.7141   3.4001
  Upper bound                    2.8053   4.9572   8.8566   11.9140   15.2650
  Realised significance level    0.0662 ± 0.0014   0.0633 ± 0.0024   0.0561 ± 0.0012   0.0550 ± 0.0022   0.0441 ± 0.0016
  Power MTTF = 5, b = 2          0.7122 ± 0.0025   0.9225 ± 0.0025   0.9952 ± 0.0006   0.9996 ± 0.0002   1.0000 ± 0.0000
  Power MTTF = 8, b = 2          0.1755 ± 0.0025   0.2354 ± 0.0024   0.3030 ± 0.0025   0.3761 ± 0.0025   0.3816 ± 0.0025
  Power MTTF = 12, b = 2         0.0265 ± 0.0018   0.0157 ± 0.0012   0.0428 ± 0.0023   0.0553 ± 0.0022   0.1093 ± 0.0023
  Power MTTF = 15, b = 2         0.0083 ± 0.0008   0.0029 ± 0.0008   0.1260 ± 0.0025   0.1926 ± 0.0025   0.3985 ± 0.0025
  Power MTTF = 10, b = 1         0.9376 ± 0.0020   0.9992 ± 0.0001   1.0000 ± 0.0000   1.0000 ± 0.0000   1.0000 ± 0.0000
  Power MTTF = 10, b = 3         0.0007 ± 0.0003   0.0000 ± 0.0001   0.4264 ± 0.0024   0.6567 ± 0.0025   0.9058 ± 0.0025
Risk check
  Upper bound                    2.5042   4.5003   8.1826   11.0940   14.3120
  Realised significance level    0.0653 ± 0.0020   0.0631 ± 0.0024   0.0462 ± 0.0023   0.0438 ± 0.0022   0.0516 ± 0.0012
  Power MTTF = 5, b = 2          0.7132 ± 0.0025   0.9229 ± 0.0023   0.9946 ± 0.0008   0.9996 ± 0.0002   1.0000 ± 0.0000
  Power MTTF = 8, b = 2          0.1733 ± 0.0025   0.2374 ± 0.0024   0.3015 ± 0.0025   0.3746 ± 0.0025   0.4825 ± 0.0025
  Power MTTF = 10, b = 1         0.9382 ± 0.0021   0.9990 ± 0.0003   1.0000 ± 0.0000   1.0000 ± 0.0000   1.0000 ± 0.0000
  Power MTTF = 10, b = 3         0.0006 ± 0.0003   0.0000 ± 0.0000   0.0000 ± 0.0000   0.0000 ± 0.0000   0.0000 ± 0.0000

Table H.20: Performance table Example 6.2 (MTTF = 10, b = 2, τ^(4) = 2, α = 0.10)

Evaluation moment (t_e)          2   5   10   15   20
E[N^(4)(t_e)]                    0.9324   2.1150   4.6648   6.8140   9.3327
Var[N^(4)(t_e)]                  0.9132   2.1030   4.5739   6.7707   9.1622
Simulated mean                   0.9370 ± 0.0136   2.1184 ± 0.0148   4.6826 ± 0.0110   6.8143 ± 0.0224   9.3222 ± 0.0167
Simulated variance               0.9224 ± 0.0126   2.0974 ± 0.0251   4.5781 ± 0.0509   6.7984 ± 0.0814   9.1414 ± 0.1028
Correctness check
  Lower bound                    -0.6395   -0.2703   1.1471   2.5340   4.3539
  Upper bound                    2.5042   4.5003   8.1826   11.0940   14.3120
  Realised significance level    0.0675 ± 0.0023   0.0626 ± 0.0020   0.0987 ± 0.0024   0.0782 ± 0.0024   0.0941 ± 0.0024
  Power MTTF = 5, b = 2          0.7118 ± 0.0024   0.9230 ± 0.0024   0.9942 ± 0.0008   0.9997 ± 0.0002   1.0000 ± 0.0000
  Power MTTF = 8, b = 2          0.1735 ± 0.0025   0.2354 ± 0.0024   0.3066 ± 0.0024   0.3775 ± 0.0025   0.4848 ± 0.0025
  Power MTTF = 12, b = 2         0.0267 ± 0.0016   0.0163 ± 0.0015   0.1704 ± 0.0024   0.1524 ± 0.0024   0.2228 ± 0.0024
  Power MTTF = 15, b = 2         0.0081 ± 0.0006   0.0028 ± 0.0006   0.3821 ± 0.0024   0.4150 ± 0.0025   0.5962 ± 0.0025
  Power MTTF = 10, b = 1         0.9390 ± 0.0014   0.9990 ± 0.0003   1.0000 ± 0.0000   1.0000 ± 0.0000   1.0000 ± 0.0000
  Power MTTF = 10, b = 3         0.0007 ± 0.0004   0.0000 ± 0.0000   0.7897 ± 0.0024   0.8764 ± 0.0025   0.9712 ± 0.0010
Risk check
  Upper bound                    2.1570   3.9734   7.4057   10.1490   13.2120
  Realised significance level    0.0669 ± 0.0023   0.1635 ± 0.0024   0.0982 ± 0.0025   0.0856 ± 0.0024   0.0916 ± 0.0025
  Power MTTF = 5, b = 2          0.7077 ± 0.0024   0.9682 ± 0.0019   0.9980 ± 0.0003   0.9999 ± 0.0001   1.0000 ± 0.0000
  Power MTTF = 8, b = 2          0.1762 ± 0.0024   0.4166 ± 0.0025   0.4391 ± 0.0024   0.4946 ± 0.0024   0.5912 ± 0.0025
  Power MTTF = 10, b = 1         0.9362 ± 0.0014   0.9998 ± 0.0001   1.0000 ± 0.0000   1.0000 ± 0.0000   1.0000 ± 0.0000
  Power MTTF = 10, b = 3         0.0008 ± 0.0002   0.0006 ± 0.0002   0.0000 ± 0.0000   0.0000 ± 0.0000   0.0000 ± 0.0000


Appendix I. Case study background

In this appendix, background information on the case study is presented; it provides the necessary input for chapter 7. Firstly, the determination of the batches is explained in section I.1. Secondly, the lower and upper bounds for the evaluation moments are given in section I.2. Thirdly, section I.3 shows how the tool is applied in this example.

I.1 Batches

The modernisation of the SGM train series started in 2003. In September 2003 the first modernised train started running services, and from that moment onwards all 90 trains were modernised and gradually put into service. The 90 trains are divided into 6 batches of 15 trains, based on the moment of inflow of each train. Per batch, the average moment of inflow is determined; the calculation is presented below.

In table I.2 below, the moment of inflow per train is given; the table is sorted in ascending order of inflow moment. Every moment of inflow is assigned a number: with 2003 as t = 0, January 2003 is month 0, February 2003 is month 1, and so on, up to number 79 for August 2009. Based on these numbers, the average moment of inflow per batch is calculated. A batch is composed of 30 components under consideration, i.e. 15 trains. This gives the batches with their moments of inflow as indicated in table 7.1 in chapter 7.
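For reference, this calculation can be reproduced with a short script. The sketch below is only illustrative (it is not the tool used in the thesis, and the function names are hypothetical); it applies the month numbering described above and averages the inflow moments, in years, per batch of 15 trains.

```python
# Illustrative sketch (not the thesis tool): month numbering with January 2003 = 0
# and the average inflow moment, in years, per batch of 15 trains.

def month_index(year: int, month: int) -> int:
    """Month number relative to January 2003 (e.g. September 2003 -> 8)."""
    return (year - 2003) * 12 + (month - 1)

def batch_averages(inflow_months, batch_size=15):
    """Average inflow moment in years for consecutive batches of `batch_size` trains."""
    months = sorted(inflow_months)
    return [
        sum(m / 12 for m in months[i:i + batch_size]) / batch_size
        for i in range(0, len(months), batch_size)
    ]

# The first trains of table I.2 flow in in September 2003 (month 8),
# which corresponds to 8/12 = 0.67 years, as listed in the table.
assert month_index(2003, 9) == 8
```

With the 90 month numbers from table I.2 as input, batch_averages yields the six batch averages that are reported in table 7.1.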

I.2 Lower and upper bounds

The lower and upper bounds for the cumulative number of corrective replacements at the evaluation moments are given in table I.1. These values are used to draw the graph in section 7.3. The label (2) indicates the lower and upper bound for the correctness check, which is two-sided; the label (1) indicates the upper bound for the risk check, which is one-sided.
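Such bounds follow from the mean and variance of the cumulative number of corrective replacements at each evaluation moment. As a hedged illustration (the exact procedure is the one described in the main text, not this sketch), the snippet below computes two-sided and one-sided bounds with a normal approximation; with mean and variance equal to 8 and α = 0.05 it reproduces the bounds 2.46 and 13.54 that appear in the first performance table of appendix H. Note that table I.1 reports the bounds as whole numbers of replacements.

```python
# Hedged sketch: correctness-check (two-sided) and risk-check (one-sided) bounds
# from a normal approximation of the cumulative number of replacements N(t_e),
# given its mean and variance. Illustration only, not the thesis implementation.
from statistics import NormalDist

def correctness_bounds(mean, variance, alpha):
    """Two-sided bounds at significance level alpha."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    half_width = z * variance ** 0.5
    return mean - half_width, mean + half_width

def risk_bound(mean, variance, alpha):
    """One-sided upper bound at significance level alpha."""
    return mean + NormalDist().inv_cdf(1 - alpha) * variance ** 0.5

print(correctness_bounds(8, 8, 0.05))  # approx (2.46, 13.54), cf. appendix H
print(risk_bound(8, 8, 0.05))          # approx 12.65
```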


Table I.1: Lower and upper bounds versus realised number of corrective replacements

(End of) year   Time   Expected   Lower (2)   Upper (2)   Upper (1)   Realised value
2004            2      1.8886     0           4           4           2
2005            3      5.9577     2           10          9           2
2006            4      11.864     6           18          17          13
2007            5      18.732     11          27          25          20
2008            6      26.252     17          36          34          33
2009            7      36.124     24          47          46          86
2010            8      46.426     34          59          57          135
2011            9      56.727     42          71          69          175
2012            10     67.029     51          83          80          234
2013            11     77.331     61          94          91          283
2014            12     87.632     70          105         103         314
2015            13     97.934     79          117         114         342

Table I.2: Inflow moments trains

ID  Train name   Serial number  Inflow moment  Month  Year      ID  Train name   Serial number  Inflow moment  Month  Year
 1  SGM 1 - III  2936           September-03    8     0.67      46  SGM 2 - III  2981           February-06    37     3.08
 2  SGM 1 - III  2937           September-03    8     0.67      47  SGM 2 - III  2983           March-06       38     3.17
 3  SGM 1 - III  2938           September-03    8     0.67      48  SGM 2 - III  2985           March-06       38     3.17
 4  SGM 1 - III  2941           October-03      9     0.75      49  SGM 2 - III  2984           April-06       39     3.25
 5  SGM 1 - III  2939           November-03    10     0.83      50  SGM 2 - III  2987           May-06         40     3.33
 6  SGM 1 - III  2942           December-03    11     0.92      51  SGM 2 - III  2988           May-06         40     3.33
 7  SGM 1 - III  2943           January-04     12     1.00      52  SGM 2 - III  2986           June-06        41     3.42
 8  SGM 1 - III  2944           February-04    13     1.08      53  SGM 2 - III  2990           June-06        41     3.42
 9  SGM 1 - III  2945           February-04    13     1.08      54  SGM 2 - III  2989           July-06        42     3.50
10  SGM 1 - III  2947           February-04    13     1.08      55  SGM 2 - III  2995           August-06      43     3.58
11  SGM 1 - III  2946           March-04       14     1.17      56  SGM 2 - III  2991           September-06   44     3.67
12  SGM 1 - III  2948           May-04         16     1.33      57  SGM 2 - III  2992           September-06   44     3.67
13  SGM 1 - III  2949           June-04        17     1.42      58  SGM 2 - III  2993           October-06     45     3.75
14  SGM 1 - III  2950           June-04        17     1.42      59  SGM 2 - III  2994           October-06     45     3.75
15  SGM 1 - III  2951           June-04        17     1.42      60  SGM 1 - II   2133           May-08         64     5.33
16  SGM 1 - III  2953           August-04      19     1.58      61  SGM 1 - II   2141           May-08         64     5.33
17  SGM 1 - III  2954           August-04      19     1.58      62  SGM 1 - II   2145           May-08         64     5.33
18  SGM 1 - III  2952           September-04   20     1.67      63  SGM 1 - II   2143           June-08        65     5.42
19  SGM 1 - III  2955           September-04   20     1.67      64  SGM 1 - II   2144           July-08        66     5.50
20  SGM 1 - III  2956           October-04     21     1.75      65  SGM 1 - II   2137           August-08      67     5.58
21  SGM 1 - III  2957           October-04     21     1.75      66  SGM 1 - III  2971           August-08      67     5.58
22  SGM 1 - III  2958           November-04    22     1.83      67  SGM 1 - II   2134           September-08   68     5.67
23  SGM 1 - III  2940           December-04    23     1.92      68  SGM 1 - II   2136           September-08   68     5.67
24  SGM 1 - III  2959           December-04    23     1.92      69  SGM 1 - II   2138           September-08   68     5.67
25  SGM 1 - III  2960           December-04    23     1.92      70  SGM 1 - II   2140           September-08   68     5.67
26  SGM 1 - III  2962           January-05     24     2.00      71  SGM 1 - II   2131           October-08     69     5.75
27  SGM 1 - III  2964           January-05     24     2.00      72  SGM 1 - II   2135           October-08     69     5.75
28  SGM 1 - III  2961           February-05    25     2.08      73  SGM 1 - II   2132           November-08    70     5.83
29  SGM 1 - III  2963           March-05       26     2.17      74  SGM 1 - II   2139           December-08    71     5.92
30  SGM 1 - III  2967           March-05       26     2.17      75  SGM 1 - II   2142           December-08    71     5.92
31  SGM 1 - III  2966           April-05       27     2.25      76  SGM 0 - II   2114           January-09     72     6.00
32  SGM 1 - III  2965           May-05         28     2.33      77  SGM 0 - II   2115           February-09    73     6.08
33  SGM 1 - III  2968           May-05         28     2.33      78  SGM 0 - II   2116           February-09    73     6.08
34  SGM 1 - III  2969           June-05        29     2.42      79  SGM 0 - II   2117           February-09    73     6.08
35  SGM 1 - III  2970           July-05        30     2.50      80  SGM 0 - II   2123           February-09    73     6.08
36  SGM 1 - III  2972           July-05        30     2.50      81  SGM 0 - II   2111           March-09       74     6.17
37  SGM 1 - III  2973           August-05      31     2.58      82  SGM 0 - II   2112           April-09       75     6.25
38  SGM 1 - III  2974           September-05   32     2.67      83  SGM 0 - II   2119           April-09       75     6.25
39  SGM 1 - III  2979           September-05   32     2.67      84  SGM 0 - II   2120           April-09       75     6.25
40  SGM 1 - III  2975           October-05     33     2.75      85  SGM 0 - II   2121           April-09       75     6.25
41  SGM 1 - III  2976           October-05     33     2.75      86  SGM 0 - II   2113           May-09         76     6.33
42  SGM 1 - III  2977           November-05    34     2.83      87  SGM 0 - II   2122           May-09         76     6.33
43  SGM 1 - III  2978           November-05    34     2.83      88  SGM 0 - II   2124           July-09        78     6.50
44  SGM 1 - III  2980           January-06     36     3.00      89  SGM 0 - II   2125           July-09        78     6.50
45  SGM 2 - III  2982           January-06     36     3.00      90  SGM 0 - II   2118           August-09      79     6.58


I.3 Application of in-depth analysis tool

In this section, the filled-out sheet for this example is shown.

Figure I.1: Screenshot of the filled-out sheet


Bibliography

[1] J. van Aken, J.J. Berends, and J.D. van der Bij. Problem solving in organizations: A Methodological Handbook for Business and Management students. 2012.

[2] J. van Aken, J.J. Berends, and J.D. van der Bij. Problem solving in organizations: A Methodological Handbook for Business students. 2007.

[3] J. Arts. Elementary Maintenance Models. Lecture notes, School of Industrial Engineering, Eindhoven University of Technology, 2014.

[4] J. Arts. Maintenance Modeling and Optimization. Lecture notes, School of Industrial Engineering, Eindhoven University of Technology, 2015.

[5] W. Blischke, M. Karim, and D. Murthy. Warranty Data Collection and Analysis. 2011.

[6] C. A. Chung. Simulation Modeling Handbook: A Practical Approach. 2004.

[7] A. Dasgupta and M. Pecht. Material failure mechanisms and damage models. IEEE Transactions on Reliability, 40(5):531–536, 1991.

[8] L. A. M. Van Dongen. Maintenance Engineering: instandhouding van verbindingen. Inaugural lecture, University of Twente, pages 1–60, 2011.

[9] C. E. Ebeling. An introduction to reliability and maintainability engineering. Waveland Press, Inc., Canada, second edition, 2010.

[10] J. Fan, K. C. Yung, and M. Pecht. Physics-of-Failure-Based Prognostics and Health Management for High-Power White Light-Emitting Diode Lighting. IEEE Transactions on Device and Materials Reliability, 11(3):407–416, 2011.

[11] M. Frisen. Surveillance. In International Encyclopedia of Statistical Science, pages 1577–1579. 2011.

[12] P. H. Garthwaite, I. T. Jolliffe, and B. Jones. Statistical inference, 2002.

[13] M. S. Hamada, A. G. Wilson, C. S. Reese, and H. F. Martz. Bayesian Reliability. Springer Science, 2008.

[14] A. M. Law and W. D. Kelton. Simulation modeling and analysis. McGraw-Hill, New York, third edition, 2000.


[15] D. M. Louit, R. Pascual, and A. K. S. Jardine. A practical procedure for the selection of time-to-failure models based on the assessment of trends in maintenance data. Reliability Engineering and System Safety, 94(10):1618–1628, 2009.

[16] D. C. Montgomery and G. C. Runger. Applied Statistics and Probability for Engineers.2007.

[17] C. Papadimitriou, J. Beck, and L. Katafygiotis. Updating robust reliability usingstructural test data. Probabilistic Engineering Mechanics, 16(2):103–113, 2001.

[18] S. Ross. Introduction to Probability Models, pages 409–426. Elsevier Inc., 11th edition, 2014.

[19] F. R. Sagasti and I. I. Mitroff. Operations research from the viewpoint of general systems theory. Omega, 1(6):695–709, 1973.

[20] K. Smit. Maintenance engineering and management. Delft Academic Press, Delft, first edition, 2014.

[21] H. C. Tijms. A first course in stochastic models, pages 311–312. Wiley, Chichester, 2003.

[22] T. Tinga. Application of physical failure models to enable usage and load based maintenance. Reliability Engineering and System Safety, 95(10):1061–1075, 2010.

[23] N. T. Thomopoulos. Essentials of Monte Carlo Simulation, pages 20–24. 2013.