
Research Article: An Efficient Approach for Real-Time Prediction of Rate of Penetration in Offshore Drilling

Xian Shi,1 Gang Liu,1 Xiaoling Gong,2 Jialin Zhang,1 Jian Wang,3 and Hongning Zhang1

1 College of Petroleum Engineering, China University of Petroleum (Huadong), Qingdao, Shandong 266580, China
2 College of Information and Control Engineering, China University of Petroleum (Huadong), Qingdao, Shandong 266580, China
3 College of Science, China University of Petroleum (Huadong), Qingdao, Shandong 266580, China

Correspondence should be addressed to Gang Liu; lg196009091@sina.com

Received 11 May 2016; Accepted 28 September 2016

Academic Editor: Cheng-Tang Wu

Copyright © 2016 Xian Shi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Predicting the rate of penetration (ROP) is critical for drilling optimization because maximization of ROP can greatly reduce expensive drilling costs. In this work, the typical extreme learning machine (ELM) and an efficient learning model, upper-layer-solution-aware (USA), have been used in ROP prediction. Because formation type, rock mechanical properties, hydraulics, bit type and properties (weight on the bit and rotary speed), and mud properties are the most important parameters that affect ROP, they have been considered to be the input parameters to predict ROP. The prediction model has been constructed using industrial reservoir data sets collected from an oil reservoir at the Bohai Bay, China. The prediction accuracy of the model has been evaluated and compared with the commonly used conventional artificial neural network (ANN). The results indicate that the ANN, ELM, and USA models are all competent for ROP prediction, while both the ELM and USA models have the advantage of faster learning speed and better generalization performance. The simulation results show a promising prospect for ELM and USA in the field of ROP prediction in new oil and gas exploration in general, as they outperform the ANN model. Meanwhile, this work provides drilling engineers with more choices for ROP prediction according to their computation and accuracy demands.

1. Introduction

Drilling is an expensive and necessary operation for petroleum and gas exploration. The ultimate aim in drilling operations is to increase drilling speed with less cost while maintaining safety. However, because of geological uncertainty and many uncontrolled operational factors, the optimization of drilling is still a large challenge for the oil and gas industry [1, 2]. In offshore drilling, rate of penetration (ROP) is the key parameter for optimizing total rig hours. Because it can be used for formation drillability estimation, reliable and fast prediction of ROP is highly desirable.

In the existing literature, some direct and indirect methods have been designed to evaluate ROP. ROP mathematical models can be used to outline the effects of changes in rock mechanical properties, drilling parameters, and bit type and design on ROP. Bourgoyne and Young (B&Y) proposed a ROP calculation model consisting of eight functions, but the relevant parameters need to be obtained by multiple regression for each data entry, and the equation is only adequate for roller-type rock bits [3]. In addition to B&Y's equation, many authors have tried to find a simple link between ROP and rock properties, because the majority of reservoir mechanical properties can be inferred from well logs. Howarth et al. carried out a laboratory test program for correlating the penetration rate with rock properties and reported that penetration rate correlated well with the uniaxial compressive and tensile strength of the rock [4]. Kahraman introduced a strong correlation between penetration rate and brittleness values obtained from uniaxial compressive strength and tensile strength [5]. However, the application of rock mechanical properties to ROP estimation cannot reveal real-time downhole conditions. Therefore, it is necessary to use drilling data, as many operational parameters of the drilling equipment are found to relate to ROP. Some authors performed statistical analyses using real-time drilling data, such as weight on bit, bit rotational speed, torque, and the effect of mud properties, and some of them obtained

Hindawi Publishing Corporation, Mathematical Problems in Engineering, Volume 2016, Article ID 3575380, 13 pages. http://dx.doi.org/10.1155/2016/3575380


Table 1: Summary of ROP prediction models with artificial intelligence.

Zhang et al. [22]: AHP and BPANN with 9 inputs (UCS, bit size, bit type, drillability coefficient, gross hours drilled, WOB, RPM, drilling mud density, and AV (apparent viscosity)); output: ROP.
Jahanbakhshi et al. [13]: ANN with 20 inputs (differential pressure, hydraulics, hole depth, pump pressure, density of the overlying rock, equivalent circulating density, hole size, formation drillability, permeability and porosity, drilling fluid type, plastic viscosity of mud, yield point of mud, initial gel strength of mud, 10-min gel strength of mud, bit type and its properties, weight on the bit and rotary speed, bit wear, and bit hydraulic power); output: ROP.
Bahari and Seyed [11]: Fuzzy model with 4 inputs (UCS, rock quality designation, bit load, and bit rotation); output: ROP.
Amar and Ibrahim [25]: Radial-basis function and ELM with 7 inputs (depth, bit weight, rotary speed, tooth wear, Reynolds number function, ECD, and pore gradient); output: ROP.
Bilgesu et al. [12]: ANN with 9 inputs (formation drillability, formation abrasiveness, bearing wear, tooth wear, pump rate, rotating time, rotary torque, WOB, and rotary speed); output: ROP.
Bataee and Mohseni [2]: ANN with 4 inputs (bit diameter, WOB, RPM, and mud weight); output: ROP.
Moran et al. [26]: ANN with 6 inputs (rock strength, rock type, abrasion, WOB, RPM, and mud weight); outputs: ROP and wear.
Monazami et al. [27]: ANN with 13 inputs (drill collar outside diameter, drill collar length, kick-off point, azimuth, inclination angle, weight on bit, flow rate, bit rotation speed, mud weight, solid percent, plastic viscosity, yield point, and measured depth); output: ROP.

a certain degree of success [6–9]. As a matter of fact, there is no exact relation between ROP, rock mass properties, and different drilling variables, because not only do a large number of uncertain drilling variables influence ROP, but the relationships between ROP and the affecting factors are also complex and highly nonlinear. Thus, some soft computing technologies, such as neural networks, have been applied to ROP prediction in recent years [10]. The flexibility of this method allows engineers to analyze a wide range of information and deliver high-quality ROP prediction. Bahari and Seyed applied a genetic algorithm (GA) to determine the constant coefficients of Bourgoyne and Young's ROP model [11]. Table 1 shows current ROP prediction models with artificial intelligence. The results revealed that ANN models exhibit better performance in predicting penetration rate than multiple regression models. In addition, neural networks can yield good correlation coefficients even when few data are available from field measurements [12–15]. However, traditional ANN still has some significant shortcomings, such as a slow training process and the possibility of becoming trapped in local extrema, which lead to unsatisfactory predictions. Therefore, it is necessary to introduce new algorithms that can potentially further improve ROP prediction accuracy.

The extreme learning machine (ELM), proposed by Huang et al., is a fast algorithm for single hidden-layer feedforward neural networks (SLFNs) [16, 17]. To train an SLFN, ELM first randomly generates the weights of the hidden layer and then calculates the weights of the output layer by solving a linear system using the least squares method. This learning algorithm is extremely fast and has good prediction accuracy. It has been proved in theory and in practice that the algorithm can achieve good generalization performance in most cases at speeds much faster than traditionally popular feedforward neural network learning algorithms. ELM has since been widely studied and applied by researchers, and it has demonstrated good generalization and prediction performance in many real-life applications [18, 19]. Many algorithms, such as evolutionary ELM and enhanced random search-based incremental ELM (EI-ELM), have been proposed to optimize the network structure. These algorithms choose all or part of the hidden-layer weights randomly and select the candidates with least squares estimation (LSE). However, the model parameters in these algorithms are not efficient enough, because only the value of the objective function is used in the search process. USA (the upper-layer-solution-aware algorithm), an improved version of the ELM algorithm, optimizes the number of hidden-layer nodes and the parameters modeling the problem [20]. Once the weights of the hidden layer are determined, those of the output layer can be determined as a function of them using a closed-form solution. Therefore, all that needs to be searched for at each epoch along the gradient direction is the weights of the hidden layer. However, studies of ELM and USA for drilling ROP prediction are rare.

This paper concentrates on ROP estimation using bit type and its properties; mud type and mud viscosity; formation parameters such as rock strength, formation drillability, and formation abrasiveness; and some critical drilling equipment operational parameters such as pump pressure, WOB, and rotary speed, based on data from previously drilled wells, with the ELM and USA models. The developed ELM and USA models are shown to be efficient with respect to accuracy and running time compared to traditional ANN models. They thus provide a more reliable and faster real-time tool for predicting ROP in new wells.

2. Artificial Neural Networks and Extreme Learning Machines

2.1. Methodology of Artificial Neural Networks. Artificial neural networks (ANNs) are efficient models for approximating unknown nonlinear functions, seeking to simulate human brain behavior by processing data on a trial-and-error basis. Because of their powerful approximation and generalization abilities, ANNs have been widely used in petroleum engineering, with applications to bit selection, reservoir characterization, EOR (enhanced oil recovery), hydraulic fracturing candidate selection, and so forth [21–24].

The network structures of ANNs are made up of a number of neurons, which are distributed in layers based on their different functions. Generally, a complete neural network consists of three different types of layers, namely, an input layer, one or more hidden layers, and an output layer, in which each layer includes a preset number of neurons. It has been rigorously proved that an ANN can approximate any continuous function with arbitrary precision; an ANN is thus a universal approximator. The direction of information transmission in feedforward neural networks is from the input neurons, through activation of the hidden neurons, to the outputs. For supervised learning, the connecting weights of the layers are updated in the training procedure by minimizing the objective function between the network's actual outputs and the desired values. There are many different ways to train an ANN model; the back-propagation (BP) algorithm is the most popular one. It is a typical supervised learning strategy in the machine learning field. In general, the BP method uses the following steps: for a given input sample, the actual output is obtained by transferring information through the layers one by one. If the error between the produced and desired outputs is acceptable, training stops. If the error is not acceptable, the weights on the interconnections that go into the output layer are changed. Next, an error value is calculated for all of the neurons in the hidden layer just below the output layer, and the weights are adjusted for all interconnections that go into that hidden layer. The process continues until the last layer of weights has been adjusted.
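As an illustration, the BP steps described above can be sketched in a few lines of NumPy for a single-hidden-layer regression network. The layer sizes, sigmoid activation, learning rate, and toy data here are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Toy regression task (sizes and learning rate are illustrative choices).
X = rng.normal(size=(200, 3))
t = np.sin(X[:, :1])

W1 = rng.normal(scale=0.5, size=(3, 8))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(8, 1))   # hidden -> output weights
lr = 0.1

mse0 = float(np.mean((sigmoid(X @ W1) @ W2 - t) ** 2))  # error before training

for _ in range(500):
    # Forward pass: propagate the input through the layers.
    h = sigmoid(X @ W1)
    y = h @ W2
    err = y - t
    # Backward pass: first the gradient for the output-layer weights ...
    grad_W2 = h.T @ err / len(X)
    # ... then the error is propagated to the hidden layer just below it.
    delta_h = (err @ W2.T) * h * (1.0 - h)
    grad_W1 = X.T @ delta_h / len(X)
    # Gradient-descent weight updates.
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

mse = float(np.mean((sigmoid(X @ W1) @ W2 - t) ** 2))
```

After 500 epochs, the training error should have dropped well below its value at the random initialization.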

Some essential shortcomings of the BP method are continually encountered in many real applications, for example, slow convergence and proneness to becoming stuck in a local minimum. Thus, the standard BP algorithm sometimes shows poor performance in practical applications, which mainly stems from employing the standard gradient descent method to adjust the weights. As a result, many variants have emerged in recent decades. The commonly used variants, based on different optimizing strategies, are Levenberg-Marquardt (LM); the conjugate gradient method with Powell-Beale, Fletcher-Reeves, and Polak-Ribiere updates; the gradient descent method with momentum term, penalty term, and adaptive learning rate; Bayesian regularization; and scaled conjugate gradient [28].

Figure 1: Structure and schematic diagram of the ELM method used in this paper.

2.2. Methodology of Extreme Learning Machines

2.2.1. Conventional Extreme Learning Machines. ELM was originally proposed to train a single hidden-layer feedforward neural network (SLFN) and was later extended to more generalized SLFNs, where the hidden layer may not be made of homogeneous neurons. Some obstacles that back-propagation neural networks face, such as slow training speed, sensitivity to the selection of parameters, and the high possibility of being trapped in local minima, can be avoided using ELM. Figure 1 illustrates the structure of the ELM method.

Neural networks are used universally for feature extraction, clustering, regression, and classification, and they require little human intervention in most cases. Inspired by biological learning characteristics, and to overcome the challenging problems encountered by BP algorithms, ELM fixes the hidden-layer neurons and randomly chooses their parameters at the initialization stage.

The learning efficiency and effectiveness of ELM were established in 2005, while its universal approximation capability was rigorously proved later. The correspondence between ELM and specific biological neural configurations subsequently appeared in the literature.

Unlike in other randomness-based training methods and network models, in ELM the hidden weights can be randomly determined before the learning process, and all of the hidden neurons are independent of the training samples and of one another. Once determined, both the hidden neurons and the hidden weights need not be tuned, although they are important and crucial for training. Moreover, ELM has good generalization ability when its architecture is robust enough and has enough hidden neurons for the given problem, which is not the case for conventional learning methods that are highly dependent on the data.

Because the connecting weights between the input and the hidden layer do not need to be tuned, we use a simple model with one output node as an example and give the output of the ELM as follows:

$f_L(x) = \sum_{i=1}^{L} \beta_i h_i(x) = h(x)\beta$, (1)

in which $\beta = [\beta_1, \ldots, \beta_L]^T$ denotes the output weights connecting the hidden layer and the output layer, and $h(x) = [h_1(x), \ldots, h_L(x)]$ is the hidden-layer output with respect to input $x$; $h(x)$ is a feature mapping. In fact, if the input lies in $d$-dimensional space, $h(x)$ maps it to the $L$-dimensional hidden-layer feature space. For binary classification applications using ELM, the output function is

$f_L(x) = \operatorname{sign}[h(x)\beta]$. (2)

ELM often leads to a smaller training error with a smaller norm of weights compared with traditional learning algorithms. According to Bartlett's theory, smaller weights make a great contribution to better generalization performance of neural networks with similar training errors. What makes ELM particularly noteworthy is that the values of the input weights need not be adjusted during training: they are randomly chosen at the beginning of the training procedure and stay fixed, while the weights of the output layer are calculated by searching for the least squares solution of the following objective function.

ELM minimizes both the training error and the norm of the output weights:

Minimize: $\|\mathbf{H}\beta - T\|^2$, $\|\beta\|$, (3)

where $\mathbf{H}$ is the output matrix of the hidden layer. The main purpose of ELM is to minimize the training error as well as the norm of the connecting output-layer weights. According to the theory of linear systems, the weights between the hidden layer and the output layer are calculated using the following equation [17]:

$\beta = \mathbf{H}^\dagger T$, (4)

where $\mathbf{H}^\dagger$ is the Moore-Penrose pseudoinverse of matrix $\mathbf{H}$ and $T = [t_1, t_2, \ldots, t_n]^T$.

We can summarize the ELM training algorithm as follows [17]:

(1) Set the hidden-node parameters randomly, for example, input weights $a_i$ and biases $b_i$ for the hidden nodes $i = 1, 2, \ldots, L$.

(2) Calculate the hidden-layer output matrix $\mathbf{H}$ from the input and the weights as well as the biases.

(3) Obtain the output weight matrix using (4).
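The three steps above can be sketched as a minimal NumPy implementation; the sigmoid activation, the uniform initialization range, and the toy data are assumed choices for illustration rather than the paper's setup.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def elm_fit(X, T, L=50, seed=0):
    """Train an ELM following steps (1)-(3) above."""
    rng = np.random.default_rng(seed)
    # (1) Random hidden-node parameters a_i, b_i; they are never tuned afterwards.
    A = rng.uniform(-1.0, 1.0, size=(X.shape[1], L))
    b = rng.uniform(-1.0, 1.0, size=L)
    # (2) Hidden-layer output matrix H (N x L).
    H = sigmoid(X @ A + b)
    # (3) Output weights via the Moore-Penrose pseudoinverse, beta = H† T, as in (4).
    beta = np.linalg.pinv(H) @ T
    return A, b, beta

def elm_predict(X, A, b, beta):
    return sigmoid(X @ A + b) @ beta

# Toy regression usage: a smooth target is fitted almost exactly.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(300, 2))
T = np.sin(X[:, :1]) + 0.5 * X[:, 1:]
A, b, beta = elm_fit(X, T, L=50)
mse = float(np.mean((elm_predict(X, A, b, beta) - T) ** 2))
```

Note that there is no iterative loop at all: the only "training" is one linear least squares solve, which is where ELM's speed advantage comes from.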

The orthogonalization method, the iterative method, singular value decomposition (SVD), and so forth can be used to evaluate the Moore-Penrose pseudoinverse of a matrix [24]. By the orthogonal projection method, we can obtain $\mathbf{H}^\dagger = (\mathbf{H}^T\mathbf{H})^{-1}\mathbf{H}^T$ when $\mathbf{H}^T\mathbf{H}$ is nonsingular and $\mathbf{H}^\dagger = \mathbf{H}^T(\mathbf{H}\mathbf{H}^T)^{-1}$ when $\mathbf{H}\mathbf{H}^T$ is nonsingular. In light of ridge regression theory, a positive value can be added to the diagonal of $\mathbf{H}^T\mathbf{H}$ or $\mathbf{H}\mathbf{H}^T$ to make the resultant solution more stable and tend to have better generalization performance.
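A quick numerical check of the two orthogonal-projection identities, and of the ridge-stabilized variant, might look like this (the matrix sizes are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# Tall H: H^T H (4x4) is almost surely nonsingular for random data.
H = rng.normal(size=(10, 4))
H_dag = np.linalg.inv(H.T @ H) @ H.T          # H† = (H^T H)^-1 H^T
ok_tall = np.allclose(H_dag, np.linalg.pinv(H))

# Wide G: use the other identity, G† = G^T (G G^T)^-1.
G = rng.normal(size=(4, 10))
G_dag = G.T @ np.linalg.inv(G @ G.T)
ok_wide = np.allclose(G_dag, np.linalg.pinv(G))

# Ridge-stabilized variant: a small positive value on the diagonal.
mu = 1e-3
H_ridge = np.linalg.inv(H.T @ H + mu * np.eye(4)) @ H.T
```

Both explicit formulas agree with `np.linalg.pinv` whenever the corresponding Gram matrix is invertible; the ridge form trades a tiny bias for stability when it is not.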

According to ELM theory, many feature mapping functions $h(x)$ may be designed, which gives ELM the ability to approximate any continuous target function. The actual activation functions of human brain systems seem to be more like nonlinear piecewise continuous stimulation, though the actual functions remain unknown. In fact, the ELM model is based on two major theorems: universal approximation and classification capability.

Theorem 1 (universal approximation theorem, see [16, 29]). Given any bounded nonconstant piecewise continuous function as the activation function in hidden neurons, if, by tuning the parameters of the hidden-neuron activation function, SLFNs can approximate any target continuous function, then for any continuous target function $f(x)$ and any randomly generated function sequence $\{h_i(x)\}_{i=1}^{L}$, $\lim_{L\to\infty} \|\sum_{i=1}^{L} \beta_i h_i(x) - f(x)\| = 0$ holds with probability one, given the appropriate output weights $\beta$.

By combining nonlinear activation functions with piecewise linear combinations, ELM can approximate any continuous mapping and divide arbitrary disjoint regions of any shape, given enough random hidden neurons based on nonlinear piecewise continuous activation functions and their linear combinations. In particular, the theoretical basis for the classification capability of ELM is as follows.

Theorem 2 (classification capability theorem, see [17]). Given feature mapping $h(x)$, if $h(x)$ is dense in $C(R^d)$ or in $C(M)$, where $M$ is a compact set of $R^d$, then ELM with random hidden-layer mapping $h(x)$ can separate arbitrary disjoint regions of any shape in $R^d$ or in $M$.

This theorem implies that ELM with bounded nonconstant piecewise continuous functions has universal approximation capability. While it does not require updating the weights of the hidden neurons during training, it is worth mentioning that ELM is more like the activation of the human brain, with its randomly generated hidden weights that are never tuned.

2.2.2. Upper-Layer-Solution-Aware (USA) Algorithm. $X = [x_1, \ldots, x_i, \ldots, x_N]$ is given as the set of input vectors, and each vector is denoted by $x_i = [x_{1i}, \ldots, x_{ji}, \ldots, x_{Di}]$, in which $D$ is the dimension of the input vector and $N$ is the total number of training samples. Define $L$ as the number of hidden units and $C$ as the dimension of the output vector. $y_i = U^T h_i$ is the output of the SHLNN, where $h_i = \sigma(W^T x_i)$ is the hidden-layer output, $U$ is the $L \times C$ weight matrix at the upper layer, $W$ is the $D \times L$ weight matrix at the lower layer, and $\sigma(\cdot)$ is the kernel function.


Note that if $x_i$ and $h_i$ are augmented with ones, the bias terms are implicitly represented in the above formulations.

$T = [t_1, \ldots, t_i, \ldots, t_N]$ is given as the set of target vectors, each target is denoted by $t_i = [t_{1i}, \ldots, t_{ji}, \ldots, t_{Ci}]^T$, and the parameters $U$ and $W$ are used to minimize the square error

$E = \|Y - T\|^2 = \operatorname{Tr}[(Y - T)(Y - T)^T]$, (5)

where $Y = [y_1, \ldots, y_i, \ldots, y_N]$. Note that the hidden-layer values $H = [h_1, \ldots, h_i, \ldots, h_N]$ are also determined uniquely once the lower-layer weights $W$ are fixed. After setting the gradient

$\dfrac{\partial E}{\partial U} = \dfrac{\partial \operatorname{Tr}[(U^T H - T)(U^T H - T)^T]}{\partial U} = 2H(U^T H - T)^T$ (6)

to zero, the upper-layer weights $U$ are given by the closed-form solution [20]

$U = (HH^T)^{-1} H T^T$. (7)

Note that (7) defines an implicit constraint between the set of weights $U$ and the set of weights $W$ through the hidden-layer output $H$ in the SHLNN. This leads to a structure that our new algorithms will use in optimizing the SHLNN.

Although solving (7) is relatively simple, regularization techniques need to be used in actual applications because the hidden-layer matrix $H$ is sometimes ill-conditioned (i.e., $HH^T$ is singular). The technique used in this study is based on ridge regression theory. More specifically, by adding the positive term $\mu I$ to the diagonal of $HH^T$, (7) is converted to

$U = (\mu I + HH^T)^{-1} H T^T$, (8)

where $I$ is the identity matrix and $\mu$ is a positive constant (the regularization coefficient) that controls the degree of regularization. The final solution (8) actually minimizes $\|U^T H - T\|^2 + \mu\|U\|^2$, in which $\mu\|U\|^2$ is the $L_2$ regularization term. The positive constant $\mu$ represents the relative importance of the complexity-penalty term $\|U\|^2$ with respect to the empirical risk $\|U^T H - T\|^2$. When $\mu$ approaches zero, the model tends to an unconstrained optimization problem, and the solution is completely determined by the training samples. When $\mu$ is made infinitely large, the training samples are treated as unreliable, forcing the weights $U$ to be sufficiently small. In real applications, the regularization parameter $\mu$ is set to a value somewhere between these two extremes; a reasonable choice of $\mu$ can markedly affect both the learning and prediction performance (generalization ability). Solution (8) shows better stability and generalization performance than (7), and it can be used throughout the paper whenever the pseudoinverse is involved.
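A small numerical sketch of why (8) helps: when $HH^T$ is singular, (7) cannot be evaluated, while the regularized form remains solvable. The sizes and values below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Deliberately rank-deficient H (two identical hidden rows), so HH^T is singular.
h = rng.normal(size=(1, 50))
H = np.vstack([h, h, rng.normal(size=(1, 50))])   # 3 x 50, rank 2
T = rng.normal(size=(1, 50))

# Plain (7) would fail here: HH^T (3x3) has rank 2 and no inverse.
assert np.linalg.matrix_rank(H @ H.T) < 3

# Regularized (8) is always solvable for mu > 0.
mu = 1e-2
U = np.linalg.inv(mu * np.eye(3) + H @ H.T) @ H @ T.T
```

Because $HH^T$ is positive semidefinite, $\mu I + HH^T$ is positive definite for any $\mu > 0$, so the inverse in (8) always exists.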

The principle of this algorithm is very simple. Once the lower-layer weights are determined, the upper-layer weights can be determined explicitly using the closed-form solution (8). Based on this solution, at each epoch all we need to adjust along the gradient direction are the lower-layer weights. Gradient $\partial E/\partial W$ is obtained by considering the upper-layer weights $U$ under the effect of $W$, and the training objective function is the square error. We can obtain the gradient by treating $U$ as a function of $W$ and plugging (7) into criterion (5):

$\dfrac{\partial E}{\partial W} = \dfrac{\partial \operatorname{Tr}[(U^T H - T)(U^T H - T)^T]}{\partial W} = 2X\left[H^T \circ (1 - H)^T \circ \left[H^\dagger (HT^T)(TH^\dagger) - T^T(TH^\dagger)\right]\right]$, (9)

in which $H^\dagger = H^T(HH^T)^{-1}$ is the pseudoinverse of $H$ and $\circ$ is the element-wise product.

In the derivation of (9), we used the fact that $HH^T$ and $(HH^T)^{-1}$ are symmetric. We also used the fact that

$\dfrac{\partial \operatorname{Tr}[(HH^T)^{-1} H T^T T H^T]}{\partial H^T} = -2H^T(HH^T)^{-1} H T^T T H^T (HH^T)^{-1} + 2T^T T H^T (HH^T)^{-1}$. (10)

This algorithm updates $W$ by descending along the gradient defined in (9):

$W_{k+1} = W_k - \rho \dfrac{\partial E}{\partial W}$, (11)

where $\rho$ is the learning rate. The learning rate here is the updating step size widely used in solving optimization problems. It determines how far the weights are moved in the direction of the given gradient $\partial E/\partial W$ to obtain the new weights that reduce the objective function. One simple approach is to set a positive constant value for the entire training process; in this paper, we select a suitable learning rate by means of many different simulations. This algorithm uses the closed-form solution (8) to calculate $U$ after updating $W$.

The algorithms proposed in this paper achieve significantly better classification accuracy than ELM when the same number of hidden units is used. To achieve the same classification accuracy, the algorithm requires only 1/16 of the model size, and therefore less test time is needed.
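Putting (8), (9), and (11) together, one possible reading of the USA training loop can be sketched as follows. The sigmoid kernel, layer sizes, learning rate, and toy data are assumptions for illustration, and the gradient uses a ridge-stabilized pseudoinverse for numerical safety; this is not the authors' exact implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def usa_train(X, T, L=20, rho=1e-4, mu=1e-3, epochs=200, seed=0):
    """Gradient steps on the lower-layer weights W, with the upper-layer
    weights U always given by the closed-form ridge solution (8)."""
    D, _ = X.shape
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(D, L))
    for _ in range(epochs):
        H = sigmoid(W.T @ X)                              # L x N hidden output
        S = np.linalg.inv(H @ H.T + mu * np.eye(L))
        H_dag = H.T @ S                                   # ridge-stabilized H†
        TH_dag = T @ H_dag                                # C x L
        # Gradient (9): 2 X [H^T o (1-H)^T o (H†(HT^T)(TH†) - T^T(TH†))].
        bracket = H_dag @ (H @ T.T) @ TH_dag - T.T @ TH_dag
        grad = 2.0 * X @ (H.T * (1.0 - H).T * bracket)
        W -= rho * grad                                   # update (11)
    H = sigmoid(W.T @ X)
    U = np.linalg.inv(mu * np.eye(L) + H @ H.T) @ H @ T.T  # closed-form (8)
    return W, U

def train_error(X, T, W, U):
    H = sigmoid(W.T @ X)
    return float(np.mean((U.T @ H - T) ** 2))

# Toy usage: columns of X are samples, as in the text (D=3, N=200, C=1).
rng = np.random.default_rng(2)
X = rng.normal(size=(3, 200))
T = np.sin(X[:1])
W0, U0 = usa_train(X, T, epochs=0)      # random W, closed-form U only
W1, U1 = usa_train(X, T, epochs=200)    # USA: W tuned by gradient steps
```

With the lower layer tuned, the training error should fall below that of the epochs-0 baseline, which corresponds to an ELM-style network with the same random hidden weights.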

3. Input Data Selection

The four target research wells were drilled in the Bohai Bay basin, which is the largest offshore oil and gas production base in China. The oilfield has experienced many production stages since 1963, and currently many projects are carried out by China National Offshore Oil Corporation (CNOOC) and ConocoPhillips China (COPC). To effectively increase production and decrease drilling cost, drilling speed control is necessary.

Due to drilling requirements and the similarity of wells located close to one another, collecting past data and utilizing the data in a useful manner have an important impact on drilling cost reduction; it helps ensure that optimum parameters are always in effect. As a matter of fact, there are various factors that can affect ROP. Previous studies show that ROP is largely dependent on some critical parameters, which can be classified into three types: rig/bit related parameters, mud related parameters, and formation parameters. The rig/bit and mud related parameters can be manipulated, but the formation parameters have to be handled differently. Table 2 gives a brief classification of some important drilling parameters that affect ROP. To predict and optimize ROP, both controllable and uncontrollable parameters were collected from daily drilling reports, lab investigations, well logs, and geological information.

Because it is impossible to treat all of the relevant drilling parameters in ROP prediction, it is necessary to choose essential data based on literature surveys and statistical analysis. It was concluded that the rock properties, the drilling machine parameters, and the mud properties relate to ROP. In fact, a total of 18 drilling parameters were gathered in this study: depth, torque, rotations per minute (RPM), weight on bit, flow rates, active PVT, pump pressure, hook load, differential pressure, mud weight, mud viscosity, formation drillability, formation abrasiveness, UCS, porosity, bit size, and bit type.

(1) The bit type/size can affect ROP in a given formation. Roller cone bits generally have three cones that rotate around their axes, with teeth milled out of the matrix or inserted. The teeth combine crushing and shearing to fracture the rock. Drag bits usually consist of a fixed cutter mechanism that can be cutting blades, diamond stones, or cutters. Today, the polycrystalline diamond compact (PDC) bit is the most commonly used drag bit, with PDC, diamond impregnated, or diamond hybrid cutters mounted on the bit blades/body. Drag bits disintegrate rock primarily through shearing. The difference in the design of roller cone and drag bits, and their different rock disintegration methods, requires different ROP models. Thus, the effects of the bit type are included in this model.

(2) Operational parameters such as pump pressure and differential pressure are the most important operational hydraulic parameters affecting ROP. Pump pressure is sometimes increased to achieve a higher penetration rate for a certain depth of advance while keeping rotation constant. In addition, the mechanical factors of weight on the bit and rotary speed have linear relationships with ROP. An increase in weight on the bit pushes the bit teeth or cutters further down into the formation and disintegrates more rock, resulting in a faster ROP. Bit rotation can be increased to obtain a higher penetration rate while keeping the bit load constant.

(3) Formation mechanical characteristics are uncontrollable variables for controlling ROP values. In general, the decrease in penetration rate is greatly attributed to the increase of depth, because with increasing depth the rock strength increases and the porosity decreases. Moreover, the larger indirect drilling resistance that results from larger UCS and hardness will also limit the depth of cut for a given set of bit design and applied operating parameters. There are two main reasons for selecting rock strength as the representative of rock properties: one is that the rock strength is calculated as a function of depth, density, and porosity; the other is that this parameter can be computed from logging data, or from triaxial compressive experiments and scratch tests in a lab if core samples are available.

(4) The aim of drilling mud is to remove the loose rock chips away from the bit face while cooling the bit. Mud weight, mud rheology, and solid content have accordingly been confirmed as important factors in ROP by some studies. ROP basically decreases with increasing mud viscosity, solid content, and mud weight.

In addition to the above parameters, some comprehensive parameters such as formation drillability and abrasiveness also need to be considered, because these parameters can provide important information about ROP prediction that cannot be described by conventional drilling data. More importantly, the model inputs should be easily acquired in real time, recorded from the respective measurement gauges on the rig. Using additional parameters as inputs can result in a large network size and consequently slow down the running speed and efficiency. Thus, bit size, bit type, bit wear, UCS, formation drillability, formation abrasiveness, rotational speed, WOB, pump pressure, mud viscosity, and mud density were selected as the 11 inputs for neural network simulation. The data set used in this paper consists of more than 5000 measurements taken from this area at different wells and with different drilling rigs. Figure 2 illustrates three types of PDC bits used in the drilling operations in this area.

It should be noted that data can very often be nonnumeric. Because neural networks cannot be trained with nonnumeric data, it is necessary to translate nonnumeric

Mathematical Problems in Engineering 7

Table 2: ROP-related drilling parameters classification.

Rig and bit related parameters | Formation parameters | Drilling fluid properties
Weight on bit (WOB) | Local stresses | Mud weight
Torque | Hardness | Viscosity
Rotary speed (RPM) | Mineralogy | Filtrate loss
Flow rates | Porosity and permeability | Solid content
Pump stroke speed (SPM) | Formation abrasiveness | Gel strength
Pump pressure | Drillability | Mud pH
Hook load | Depth | Yield point
Bit wear | Temperature |
Type of the bit | Unconfined compressive strength (UCS) |

Figure 2: Three types of PDC drilling bits.

data into numeric data. In this study, formation drillability, formation abrasiveness, bit size, and bit type are nonnumeric. Numbering classes are used in this study to perform the nonnumeric translation. The formation drillability ranged between 30 and 75; the highest number represents the highest drillability, and the lower drillability values correspond to hard formations such as shales. The formation abrasiveness ranged between 1 and 8, where 8 signifies the most abrasive formation. In addition, the bit type is assigned a number between 1 and 12, where 12 is related to the bit type with the highest average ROP value. The bit wear values are determined between 0 and 9, where 9 indicates that a tooth is completely worn out. Table 3 shows the recorded parameters and their ranges.
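The class-numbering translation described above amounts to a lookup plus range validation. The sketch below is illustrative only: the bit-type code table and the `encode_record` helper are assumptions for the example, not the authors' actual mapping; only the value ranges follow the text.

```python
# Hypothetical bit-type codes (1..12; 12 = highest average ROP, per the text).
BIT_TYPE_CODES = {"roller-cone": 1, "PDC-hybrid": 7, "PDC-premium": 12}

def encode_record(bit_type, drillability, abrasiveness, bit_wear):
    """Translate the nonnumeric fields of one drilling record into numbers."""
    if not 30 <= drillability <= 75:
        raise ValueError("formation drillability expected in [30, 75]")
    if not 1 <= abrasiveness <= 8:
        raise ValueError("formation abrasiveness expected in [1, 8]")
    if not 0 <= bit_wear <= 9:
        raise ValueError("bit wear expected in [0, 9]")
    return [BIT_TYPE_CODES[bit_type], float(drillability),
            float(abrasiveness), float(bit_wear)]

print(encode_record("PDC-premium", 55, 3, 2))  # [12, 55.0, 3.0, 2.0]
```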

4. Experimental Design

4.1. Model Architecture. Determining the optimal size of the neural network model is complex. An overly complex network tends to memorize the data without learning to generalize to new situations: it concentrates too much on the data presented for learning and tends toward modeling the noise. The total number of ROP measurements in the target wells sums to 5500. The training subset constitutes 75% of the total data, and the testing subset includes 25% of the total data. As mentioned above, the inputs of the neural networks include 10

Table 3: Summary of results for the recorded parameters.

Parameters | Ranges
Bit size/inch | 5.813~8.5
Bit type | 1–12
Bit wear | 0–9
UCS/psi | 2412~21487
Formation drillability | 30–75
Formation abrasiveness | 1–8
Rotational speed/rpm | 84~126
WOB/10^3 lbs | 24~69
Drilling mud type | 1~3
Mud viscosity/cp | 1~70
Pump pressure/psi | 1651~3765
ROP | 26.6~120.5

parameters, while the ROP is the only element in the output layer. Figure 3 shows the developed neural network architecture.
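The random 75%/25% subdivision described in Section 4.1 can be sketched as follows. The data here are synthetic stand-ins (the field data set is not public), and `split_75_25` is a hypothetical helper name.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_75_25(X, y):
    """Random 75%/25% subdivision into training and testing subsets."""
    idx = rng.permutation(len(X))           # shuffle record indices
    n_train = int(round(0.75 * len(X)))
    tr, te = idx[:n_train], idx[n_train:]
    return X[tr], y[tr], X[te], y[te]

# Synthetic stand-in records: 5500 rows, one column per input parameter.
X = rng.normal(size=(5500, 11))
y = rng.uniform(26.6, 120.5, size=5500)     # ROP targets, ft/h
Xtr, ytr, Xte, yte = split_75_25(X, y)
print(len(Xtr), len(Xte))  # 4125 1375
```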

4.2. Model Performance Indicators. The performances of the neural network models are assessed by means of the correlation


Figure 3: Schematic of the architecture of the ELM network model. The input layer nodes are bit size, bit type, UCS, formation drillability, formation abrasiveness, rotational speed, WOB, pump pressure, mud type, and mud viscosity; ROP is the only node at the output layer.

coefficient (R^2), mean absolute error (MAE), root mean square error (RMSE), and variance accounted for (VAF) performance indicators.

R^2 represents the proportion of the overall variance explained by the model, which can be calculated as

    R^2 = 1 - \frac{\sum_{i=1}^{n} (f(x_i) - y_i)^2}{\sum_{i=1}^{n} f(x_i)^2 - \sum_{i=1}^{n} y_i^2 / n}    (12)

These performance indicators give a sense of how good the performance of a predictive model is relative to the actual values.

The MSE can be described as

    MSE = \frac{1}{n} \sum_{i=1}^{n} (f(x_i) - y_i)^2    (13)

The RMSE is conventionally used as an error function for quality monitoring of the model; model performance increases as RMSE decreases. The RMSE can be calculated by the following equation:

    RMSE = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (f(x_i) - y_i)^2}    (14)

VAF is often used to evaluate the correctness of a model by comparing the measured values with the estimated output of the model. VAF is computed by the following equation:

    VAF = \left(1 - \frac{\operatorname{var}[f(x_i) - y_i]}{\operatorname{var}[f(x_i)]}\right) \times 100    (15)

where, for (12)–(15), y_i is the measured data, f(x_i) denotes the predicted data, x_1, ..., x_n are the input parameters, and n is the total number of data points.
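As a sketch, the four indicators can be computed directly from arrays of predicted and measured ROP; this is a minimal NumPy rendering of (12)–(15), with the function names being illustrative.

```python
import numpy as np

# pred holds f(x_i) (model output), y holds y_i (measured ROP).
def r2(pred, y):
    # Eq. (12) as printed, with denominator sum f(x_i)^2 - sum y_i^2 / n
    return 1.0 - np.sum((pred - y) ** 2) / (np.sum(pred ** 2) - np.sum(y ** 2) / len(y))

def mse(pred, y):
    return np.mean((pred - y) ** 2)            # Eq. (13)

def rmse(pred, y):
    return np.sqrt(mse(pred, y))               # Eq. (14)

def vaf(pred, y):
    return (1.0 - np.var(pred - y) / np.var(pred)) * 100.0   # Eq. (15)
```

For a perfect prediction (pred equal to y), MSE and RMSE are 0, R^2 is 1, and VAF is 100.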

4.3. Model Parameters Selection. For the ANN, ELM, and USA models, the kernel function and the number of hidden nodes are critical parameters for neural network accuracy.

The accuracy with different hidden-node numbers was compared in terms of the RMSE performance indicator. Once the kernel function is determined, the number of hidden nodes is increased incrementally until 100 is reached. Figure 4(a) illustrates the accuracy comparison with different node numbers for the ANN, ELM, and USA models, respectively. For the ELM model, the MSE between prediction results and measured values decreases as the node number of the hidden layer gradually increases beyond 65 nodes. For the USA model, the MSE between prediction results and measured values also decreases as the node number of the hidden layer gradually increases, except for individual volatile points. For the ANN model, the MSE between prediction results and measured values fluctuates as the node number of the hidden layer gradually increases, which indicates that the choice of hidden nodes can greatly affect the final accuracy.
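The hidden-node sweep for a sigmoid ELM can be sketched as below, assuming synthetic stand-in data (the Bohai Bay data set is not public): random hidden-layer weights, closed-form least-squares output weights, and a test RMSE recorded for each node count, in the spirit of Figure 4(a). The helper names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def elm_fit(X, y, n_hidden):
    """Sigmoid ELM: random hidden weights, least-squares output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))         # hidden-layer output matrix
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # closed-form output weights
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return (1.0 / (1.0 + np.exp(-(X @ W + b)))) @ beta

# Synthetic stand-in data with 11 input parameters.
X = rng.normal(size=(400, 11))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
Xtr, ytr, Xte, yte = X[:300], y[:300], X[300:], y[300:]

for n in (5, 25, 65, 100):                         # sweep the node count
    model = elm_fit(Xtr, ytr, n)
    err = np.sqrt(np.mean((elm_predict(model, Xte) - yte) ** 2))
    print(n, round(float(err), 3))
```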

Four types of kernels have been tested using the training data set for the ELM model: the sigmoid function, the radial-basis function, the hardlim function, and the triangular function. Figure 4(b) shows the accuracy comparison with different kernel functions for the ELM model. Among the four kernels for the ELM model, the sigmoid-based model comes to the threshold first, when the node number is set to 40, while an overfitting problem appears as the node number exceeds 60. The hardlim-based, triangular-based, and radial-basis-based models display a similar trend. However, it seems that the node number must exceed 60 before the minimum errors come close to the threshold for the hardlim-based model. For the triangular-based model, the RMSE reaches its lowest point of 3.11 when the node number is set to 85, while for the radial-basis-based model the RMSE reaches its lowest point of 3.12 when the node number is set to 95. Figure 4(c) shows the accuracy comparison with different kernel functions for the USA model. Observe that the radial-basis-based model comes to the threshold first, when the node number is set to 65. As for the hardlim-based model, it seems that the node number must exceed 60 before the minimum errors come close to the threshold, which is similar to the ELM model. The triangular-based and sigmoid-based models display a similar trend: the RMSE reaches its lowest point of 3.11 when the node number is set to 80 for the sigmoid model, while the RMSE reaches its lowest point of 3.48 when the node number is set to 94 for the triangular-based model.

Figure 4: (a) The RMSE change with the increase of hidden nodes; (b) comparison of RMSE with different kernel functions for the ELM network model; (c) comparison of RMSE with different kernel functions for the USA network model.
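For reference, the four candidate hidden-layer kernels compared above can be written out as follows. These are the standard forms (the names follow the common MATLAB activation functions); the paper does not spell the formulas out, so they are stated here as an assumption.

```python
import numpy as np

def sig(x):      return 1.0 / (1.0 + np.exp(-x))          # sigmoid
def hardlim(x):  return (x >= 0).astype(float)            # hard limit (step)
def radbas(x):   return np.exp(-x ** 2)                   # radial basis
def tribas(x):   return np.maximum(1.0 - np.abs(x), 0.0)  # triangular basis

x = np.array([-2.0, 0.0, 0.5])
print(hardlim(x))   # [0. 1. 1.]
print(tribas(x))    # [0.  1.  0.5]
```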

For comparison, the classical ANN model was also used in this paper, and the sigmoid kernel was selected for the ANN model. The applied ANN architecture is a feedforward neural network, a network structure in which the information propagates in one direction from input

to output. A back-propagation algorithm with the Levenberg-Marquardt training function has been used for training. This algorithm can approximate any nonlinear continuous function to an arbitrary accuracy. Based on the experimental tests, when the node number of the hidden layer is set to 36, the ANN network model obtains the best prediction accuracy.

5. Results and Discussion

When the parameters for the neural network models are finally settled, the next step is to validate the models using a testing


Table 4: The calculated performance indicators for the ANN, ELM, and USA models.

Data set | Performance indicator | ANN | ELM | USA
Training | RMSE | 1.51 | 0.95 | 0.78
Training | MAE | 7.5 | 4.2 | 2.1
Training | VAF | 96.88 | 100 | 100
Training | R^2 | 0.91 | 0.93 | 0.98
Testing | RMSE | 3.56 | 3.11 | 2.76
Testing | MAE | 12.7 | 9.7 | 7.65
Testing | VAF | 91.88 | 92.82 | 94.52
Testing | R^2 | 0.90 | 0.92 | 0.94

data set. In this section, the prediction performances of the ANN, ELM, and USA models are compared, and the training time is recorded. The model accuracy can be evaluated by comparing the calculated RMSE, VAF, MAE, and R^2 performance indicators for the training and testing data sets presented in Table 4. Due to the random subdivision process, the performances of the models for the training and validation data sets are slightly different. As seen from Table 4, the prediction performance in terms of the predefined indices is better for the training set than for the testing set. Overall, the R^2 values for the developed models were more than 0.95 in the training phase, while in the case of the conventional ANN in the testing phase, R^2 was found to be 0.9. The high performance of the models for the testing set can be considered an indication of the good generalization capabilities of the models: an R^2 value greater than 0.9 indicates a very satisfactory model performance, while an R^2 value between 0.8 and 0.9 indicates an acceptable model. Therefore, all of the developed neural networks are competent for ROP prediction. The R^2 of 0.99 for both the ELM and USA models in the training phase indicates higher accuracy compared to the ANN model (R^2 of 0.88). Moreover, it can also be observed that the highest VAF of 94.52, the lowest RMSE of 2.76, and the lowest MAE of 7.65 belong to the USA model, indicating that the USA-based model gives better prediction performance than the other models. In addition, the ELM model can also produce acceptable results, yielding slightly smaller residual errors than the ANN model. In other words, the deviation from the observed values of penetration rate predicted by ELM is less than the deviation of the ANN model's predictions. Figure 5 shows the regression analysis of the predicted and measured ROP by the different models in both the training and testing phases. According to the running speed with increasing hidden-node numbers in Figure 6, obvious differences are observed, leading to the conclusion that the time cost of the ELM model is less than that of the other two models. To visualize the quality of the prediction, the predicted penetration rates and residual errors of the different models are compared with the measured ones for the overall data set, as shown in Figures 7 and 8, respectively. The low deviation obtained by the three models also proves that ANN, ELM, and USA are competent for ROP prediction. Thus, the ELM and USA methods can both achieve high accuracy and maintain high running speeds. This study shows that ELM technology is a promising tool for ROP prediction, and this work can be incorporated into a software system that can be used for drilling optimization guidance.

6. Conclusions

(1) This study investigates the feasibility of the ELM and its variation, the USA model, for predicting the ROP of offshore drilling operations using recorded data. The ELM and USA models can successfully predict ROP in real time, and this prediction can be used as a reference range by drilling engineers when identifying drilling problems and making decisions.

(2) Choosing the relevant parameters is typically difficult, and previous literature informed the input selection. The input rock material properties are the uniaxial compressive strengths of the different rock types, formation drillability, and abrasiveness, which can be calculated from lab cores and well logging data. Moreover, the pump pressure and differential pressure are selected as important hydraulic parameters that affect ROP, while RPM and WOB are treated as the equipment operational parameters. In addition to these parameters, drilling mud properties such as mud viscosity and bit related parameters are also involved in developing the neural network models. However, for systems with data values beyond the range of this study, or for a new bit, it is necessary to develop new neural networks. Conversely, a wider range of input can enable the developed neural network models to have wider applications in select cases.

(3) The results of the USA model were compared to those of the classical ANN and ELM models. Performance of the models was checked by using the R^2, MAE, RMSE, and VAF performance indicators. According to the performance indicators, all of the neural networks are competent for ROP prediction, but the prediction performance of the USA model was found to be better than that of the other two models with respect to accuracy, while the ELM model had the lowest running time. Compared to conventional ANN models, the result of this work shows the potential of fast and flexible neural network approaches to model the ROP with high accuracy while maintaining running speed. Therefore, drilling engineers can make better

Figure 5: Regression analysis of the predicted and measured ROP. (a) ANN testing and training (R^2 = 0.9077 and 0.916, respectively); (b) ELM testing and training (R^2 = 0.9233 and 0.9319); (c) USA testing and training (R^2 = 0.9422 and 0.9829).

Figure 6: Time cost with the change of the number of hidden nodes for the ANN, ELM, and USA models.

Figure 7: The predicted and measured ROP along the depth (about 8600–8800 ft) with the developed models.

Figure 8: Residual errors of predicted ROP by the developed models.

choices according to accuracy and computational demand in practical use.

Competing Interests

The authors declare that there are no competing interests regarding the publication of this paper.

Acknowledgments

This work was financially supported by the National Key Basic Research Program of China (2015CB251206), the Fundamental Research Funds for the Central Universities (nos. 15CX05053A, 15CX08011A, and 15CX02009A), the Qingdao Science and Technology Project (no. 15-9-1-55-jch), the National Natural Science Foundation of China (no. 61305075), the Natural Science Foundation of Shandong Province (no. ZR2013FQ004), and the Specialized Research Fund for the Doctoral Program of Higher Education of China (no. 20130133120014).

References

[1] V. P. Perrin, M. G. Wilmot, and W. L. Alexander, "Drilling index: a new approach to bit performance evaluation," in Proceedings of the SPE/IADC Drilling Conference, Paper SPE 37595, pp. 199–205, Amsterdam, The Netherlands, March 1997.

[2] M. Bataee and S. Mohseni, "Application of artificial intelligent systems in ROP optimization: a case study in Shadegan oil field," in Proceedings of the SPE Middle East Unconventional Gas Conference and Exhibition (UGAS '11), pp. 13–22, February 2011.

[3] M. H. Bahari, A. Bahari, F. N. Moharrami, and M. B. N. Sistani, "Determining Bourgoyne and Young model coefficients using genetic algorithm to predict drilling rate," Journal of Applied Sciences, vol. 8, no. 17, pp. 3050–3054, 2008.

[4] D. F. Howarth, W. R. Adamson, and J. R. Berndt, "Correlation of model tunnel boring and drilling machine performances with rock properties," International Journal of Rock Mechanics and Mining Sciences, vol. 23, no. 2, pp. 171–175, 1986.

[5] S. Kahraman, "Correlation of TBM and drilling machine performances with rock brittleness," Engineering Geology, vol. 65, no. 4, pp. 269–283, 2002.

[6] S. Akin and C. Karpuz, "Estimating drilling parameters for diamond bit drilling operations using artificial neural networks," International Journal of Geomechanics, vol. 8, no. 1, pp. 68–73, 2008.

[7] V. Uboldi, L. Civolani, and F. Zausa, "Rock strength measurements on cuttings as input data for optimizing drill bit selection," in Proceedings of the SPE Annual Technical Conference and Exhibition, Paper SPE 56441, pp. 35–43, Houston, Tex, USA, October 1999.

[8] M. J. Fear, "How to improve rate of penetration in field operations," in Proceedings of the SPE/IADC Drilling Conference, Paper SPE 55050, New Orleans, La, USA, 1998.

[9] R. Rommetveit, K. S. Bjørkevoll, G. W. Halsey et al., "Drilltronics: an integrated system for real-time optimization of the drilling process," in Proceedings of the IADC/SPE Drilling Conference, Paper SPE 87124, pp. 275–282, Dallas, Tex, USA, March 2004.

[10] A. H. Paiaman, M. K. Ghassem Al-Askari, B. Salmani, B. D. Al-Anazi, and M. Masihi, "Effect of drilling fluid properties on rate of penetration," NAFTA, vol. 60, no. 3, pp. 129–134, 2009.

[11] A. Bahari and A. B. Seyed, "Trust region approach to find constants of Bourgoyne and Young penetration rate model in Khangiran Iranian gas field," in Proceedings of the SPE Latin American and Caribbean Petroleum Engineering Conference, Paper SPE 107520, Buenos Aires, Argentina, 2007.

[12] H. I. Bilgesu, L. T. Tetrick, U. Altmis, Sh. Mohaghegh, and S. Ameri, "A new approach for the prediction of rate of penetration (ROP) values," in Proceedings of the SPE Eastern Regional Meeting, Paper SPE 39231, Lexington, Kentucky, 1997.

[13] R. Jahanbakhshi, R. Keshavarzi, and A. Jafarnezhad, "Real-time prediction of rate of penetration during drilling operation in oil and gas wells," in Proceedings of the 46th US Rock Mechanics/Geomechanics Symposium, pp. 2390–2396, Chicago, Ill, USA, June 2012.

[14] S. Yilmaz, C. Demircioglu, and S. Akin, "Application of artificial neural networks to optimum bit selection," Computers and Geosciences, vol. 28, no. 2, pp. 261–269, 2002.

[15] H. Rahimzadeh, M. Mostofi, A. Hashemi, and K. Salahshoor, "Comparison of the penetration rate models using field data for one of the gas fields in Persian Gulf area," in Proceedings of the International Oil and Gas Conference and Exhibition, Paper SPE 131253, Beijing, China, 2010.

[16] G.-B. Huang, D. H. Wang, and Y. Lan, "Extreme learning machines: a survey," International Journal of Machine Learning and Cybernetics, vol. 2, no. 2, pp. 107–122, 2011.

[17] G.-B. Huang, H. Zhou, X. Ding, and R. Zhang, "Extreme learning machine for regression and multiclass classification," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 2, pp. 513–529, 2012.

[18] A. Mozaffari and N. L. Azad, "Optimally pruned extreme learning machine with ensemble of regularization techniques and negative correlation penalty applied to automotive engine cold-start hydrocarbon emission identification," Neurocomputing, vol. 131, pp. 143–156, 2014.

[19] R. Moreno, F. Corona, A. Lendasse, M. Grana, and L. S. Galvao, "Extreme learning machines for soybean classification in remote sensing hyperspectral images," Neurocomputing, vol. 128, pp. 207–216, 2014.

[20] D. Yu and L. Deng, "Efficient and effective algorithms for training single-hidden-layer neural networks," Pattern Recognition Letters, vol. 33, no. 5, pp. 554–558, 2012.

[21] C. Gokceoglu and K. Zorlu, "A fuzzy model to predict the uniaxial compressive strength and the modulus of elasticity of a problematic rock," Engineering Applications of Artificial Intelligence, vol. 17, no. 1, pp. 61–72, 2004.

[22] J. Zhang, J. Li, Y. Hu, and J. Zhou, "The identification method of igneous rock lithology based on data mining technology," Applied Mechanics and Materials, vols. 466-467, pp. 65–69, 2012.

[23] J. Cao, J. Yang, Y. Wang, D. Wang, and Y. Shi, "Extreme learning machine for reservoir parameter estimation in heterogeneous sandstone reservoir," Mathematical Problems in Engineering, vol. 2015, Article ID 287816, 10 pages, 2015.

[24] E. Khamehchi, I. R. Kivi, and M. Akbari, "A novel approach to sand production prediction using artificial intelligence," Journal of Petroleum Science and Engineering, vol. 123, pp. 147–154, 2014.

[25] K. Amar and A. Ibrahim, "Rate of penetration prediction and optimization using advances in artificial neural networks: a comparative study," in Proceedings of the 4th International Joint Conference on Computational Intelligence (IJCCI '12), pp. 647–652, Barcelona, Spain, October 2012.

[26] D. Moran, H. Ibrahim, A. Purwanto, and J. Osmond, "Sophisticated ROP prediction technology based on neural network delivers accurate results," in Proceedings of the IADC/SPE Asia Pacific Drilling Technology Conference and Exhibition, Paper SPE 132010, pp. 100–108, Ho Chi Minh City, Vietnam, November 2010.

[27] M. Monazami, A. Hashemi, and M. Shahbazian, "Drilling rate of penetration prediction using artificial neural network: a case study of one of Iranian southern oil fields," Journal of Oil and Gas Business, no. 6, pp. 21–31, 2012.

[28] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: theory and applications," Neurocomputing, vol. 70, no. 1–3, pp. 489–501, 2006.

[29] X. Liu, Y. Chang, H. Lu et al., "Optimizing fracture stimulation in low-permeability oil reservoirs in the Ordos Basin," in Proceedings of the SPE Asia Pacific Oil and Gas Conference and Exhibition, Paper SPE 158562, Perth, Australia, 2012.



Table 1: Summary of ROP prediction models with artificial intelligence.

Reference | Model | Input number | Input layer | Output layer
Zhang et al. [22] | AHP and BP-ANN | 9 | UCS, bit size, bit type, drillability coefficient, gross hours drilled, WOB, RPM, drilling mud density, and AV (apparent viscosity) | ROP
Jahanbakhshi et al. [13] | ANN | 20 | Differential pressure, hydraulics, hole depth, pump pressure, density of the overlying rock, equivalent circulating density, hole size, formation drillability, permeability and porosity, drilling fluid type, plastic viscosity of mud, yield point of mud, initial gel strength of mud, 10-min gel strength of mud, bit type and its properties, weight on the bit and rotary speed, bit wear, and bit hydraulic power | ROP
Bahari and Seyed [11] | Fuzzy | 4 | UCS, rock quality designation, bit load, and bit rotation | ROP
Amar and Ibrahim [25] | Radial-basis function and ELM | 7 | Depth, bit weight, rotary speed, tooth wear, Reynolds number function, ECD, and pore gradient | ROP
Bilgesu et al. [12] | ANN | 9 | Formation drillability, formation abrasiveness, bearing wear, tooth wear, pump rate, rotating time, rotary torque, WOB, and rotary speed | ROP
Bataee and Mohseni [2] | ANN | 4 | Bit diameter, WOB, RPM, and mud weight | ROP
Moran et al. [26] | ANN | 6 | Rock strength, rock type, abrasion, WOB, RPM, and mud weight | ROP and wear
Monazami et al. [27] | ANN | 13 | Drill collar outside diameter, drill collar length, kick-off point, azimuth, inclination angle, weight on bit, flow rate, bit rotation speed, mud weight, solid percent, plastic viscosity, yield point, and measured depth | ROP

a certain degree of success [6–9]. As a matter of fact, there is no exact relation between ROP, rock mass properties, and the different drilling variables, because not only do a large number of uncertain drilling variables influence ROP, but the relationships between ROP and the affecting factors are also complex and highly nonlinear. Thus, some soft computing technologies, such as neural networks, have been applied to ROP prediction in recent years [10]. The flexibility of this method allows engineers to analyze a wide range of information and deliver high-quality ROP prediction. Bahari and Seyed applied GA to determine the constant coefficients of Bourgoyne and Young's ROP model [11]. Table 1 shows current ROP prediction models with artificial intelligence. The results revealed that the ANN model exhibited better performance in predicting penetration rate than multiple regression models. In addition, neural networks can yield good correlation coefficients even if fewer data are available from field measurements [12–15]. However, traditional ANN still has some significant shortcomings, such as a slow training process and the possibility of being trapped in local extrema, which lead to unsatisfactory predictions. Therefore, it is necessary to introduce new algorithms that can potentially further improve ROP prediction accuracy.

The extreme learning machine (ELM), proposed by Huang et al., is a fast algorithm for single-hidden-layer feedforward neural networks (SLFNs) [16, 17]. The way ELM trains an SLFN is that it first randomly generates the weights of the hidden layer and then calculates the weights of the output layer by solving a linear system using the least squares method. This learning algorithm is extremely fast and has good prediction accuracy. It has been proved in theory and in practice that this

algorithm can generate good generalization performance in most cases at speeds much faster than traditionally popular feedforward neural network learning algorithms. Until now, ELM has been widely studied and applied extensively by researchers, and it has demonstrated good generalization and prediction performance in many real-life applications [18, 19]. Many algorithms, such as evolutionary ELM and enhanced random search-based incremental ELM (EI-ELM), have been proposed to optimize the network structure. The above algorithms choose all or part of the hidden-layer weights randomly and select the candidates with the LSE (least squares estimation). However, the model parameters in these algorithms are not efficient enough, because only the value of the objective function is used in the search process. USA (the upper-layer-solution-aware algorithm), an improved version of the ELM algorithm, optimizes the number of hidden-layer nodes and the parameters modeling the problem [20]. Once the weights of the hidden layer are determined, those of the output layer can be determined as a certain function using the closed-form solution. Therefore, what needs to be searched for at each epoch along the gradient direction is just the weights of the hidden layer. However, the study of ELM and USA in drilling ROP prediction is rare.
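The alternating scheme described above, with closed-form least-squares output weights and a gradient search over the hidden-layer weights only, can be sketched as below. This is a simplified illustration: the published USA algorithm differentiates through the closed-form solution for the output weights, whereas this sketch holds them fixed during each gradient step, and all names and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
sig = lambda x: 1.0 / (1.0 + np.exp(-x))

def usa_like_fit(X, y, n_hidden=20, epochs=50, lr=0.05):
    """Alternate a closed-form upper layer with gradient steps on the hidden
    layer (the constant factor 2 of the squared loss is absorbed into lr)."""
    W = rng.normal(scale=0.5, size=(X.shape[1], n_hidden))
    for _ in range(epochs):
        H = sig(X @ W)
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # upper layer: exact LSE
        r = H @ beta - y                              # residuals
        grad_H = np.outer(r, beta) * H * (1.0 - H)    # error pushed back to H
        W -= lr * (X.T @ grad_H) / len(X)             # hidden-layer step
    H = sig(X @ W)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, beta

# Synthetic stand-in data
X = rng.normal(size=(200, 5))
y = np.tanh(X[:, 0]) - 0.3 * X[:, 2]
W, beta = usa_like_fit(X, y)
pred = sig(X @ W) @ beta
print(round(float(np.sqrt(np.mean((pred - y) ** 2))), 3))
```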

This paper concentrates on ROP estimation using bit type and its properties, mud type and mud viscosity, formation parameters such as rock strength, formation drillability, and formation abrasiveness, and some critical drilling equipment operational parameters such as pump pressure, WOB, and rotary speed, based on previously drilled wells' data, with the ELM and USA models. The developed ELM and USA models are shown to be efficient with respect to accuracy and running

Mathematical Problems in Engineering 3

time compared to traditionalANNmodelsThey thus providea more reliable and faster real-time tool for predicting ROPin new wells

2. Artificial Neural Networks and Extreme Learning Machines

2.1. Methodology of Artificial Neural Networks. Artificial neural networks (ANNs) are efficient models for approximating unknown nonlinear functions, seeking to simulate human brain behavior by processing data on a trial-and-error basis. Because of their powerful approximation and generalization abilities, ANNs have been widely used in petroleum engineering, with applications to bit selection, reservoir characterization, EOR (enhanced oil recovery), hydraulic fracturing candidate selection, and so forth [21–24].

The network structures of ANNs are made up of a number of neurons, which are distributed in layers according to their different functions. Generally, a complete neural network consists of three different types of layers, namely, an input layer, one or more hidden layers, and an output layer, in which each layer includes a preset number of neurons. It has been rigorously proved that an ANN can approximate any continuous function with arbitrary precision; an ANN is thus a universal approximator. The direction of information transmission in feedforward neural networks is from the input neurons, through activation of the hidden neurons, to the outputs. For supervised learning, the connecting weights of the layers are updated in the training procedure by minimizing the objective function between the networks' actual outputs and the desired values. There are many different ways to train an ANN model; the back-propagation (BP) algorithm is the most popular one. It is a typical supervised learning strategy in the machine learning field. In general, the BP method uses the following steps: for a given input sample, the actual output is obtained by transferring the information through the layers one by one. If the error between the produced and the desired outputs is acceptable, training stops. If the error is not acceptable, the weights on the interconnections that go into the output layer are changed. Next, an error value is calculated for all of the neurons in the hidden layer that is just below the output layer. Then the weights are adjusted for all interconnections that go into the hidden layer. The process is continued until the last layer of weights has been adjusted.
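The BP steps above can be sketched in a few lines of NumPy. This is a minimal illustrative implementation, not the authors' code: a single hidden layer with sigmoid activation, a linear output neuron, batch gradient descent, and bias terms omitted for brevity; all parameter values are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bp_train(X, y, L=8, epochs=2000, lr=0.5, rng=np.random.default_rng(0)):
    """Minimal one-hidden-layer back-propagation sketch (batch gradient descent).

    X: (N, D) inputs, y: (N,) targets; L hidden neurons; bias terms omitted."""
    N, D = X.shape
    W1 = rng.standard_normal((D, L)) * 0.5   # input -> hidden weights
    W2 = rng.standard_normal(L) * 0.5        # hidden -> output weights
    for _ in range(epochs):
        H = sigmoid(X @ W1)                  # forward pass: hidden activations
        err = H @ W2 - y                     # output error (linear output neuron)
        # backward pass: output-layer gradient first, then the hidden layer
        gW2 = H.T @ err / N
        gW1 = X.T @ (np.outer(err, W2) * H * (1 - H)) / N
        W2 -= lr * gW2                       # adjust weights into the output layer
        W1 -= lr * gW1                       # then weights into the hidden layer
    return W1, W2
```

The two update lines mirror the textual description: the output-layer weights are corrected first from the output error, and that error is then propagated back through the sigmoid derivative to adjust the hidden-layer weights.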

Some essential shortcomings of the BP method are continually encountered in many real applications, for example, slow convergence speed and a tendency to become stuck in local minima. Thus, the standard BP algorithm sometimes shows poor performance in practical applications, which mainly stems from employing the standard gradient descent method to adjust the weights. As a result, many different variants have emerged in recent decades. The commonly used variants, based on different optimizing strategies, include Levenberg-Marquardt (LM); the conjugate gradient method with Powell-Beale, Fletcher-Reeves, and Polak-Ribiere updates; the gradient descent method with momentum term, penalty term, and adaptive learning rate; Bayesian regularization; and scaled conjugate gradient [28].

Figure 1: Structure and schematic diagram of the ELM method used in this paper. (Figure labels: inputs x_j, j = 1, …, d; hidden nodes with random parameters (a_i, b_i), i = 1, …, L; problem-based optimization constraints.)

2.2. Methodology of Extreme Learning Machines

2.2.1. Conventional Extreme Learning Machines. ELM was originally proposed to train a single hidden-layer feedforward neural network (SLFN) and was later extended to more generalized SLFNs in which the hidden layer may not be made of homogeneous neurons. Some obstacles that back-propagation neural networks face, such as slow training speed, sensitivity to the selection of parameters, and a high possibility of being trapped in local minima, can be avoided using ELM. Figure 1 illustrates the structure of the ELM method.

Neural networks are used universally for feature extraction, clustering, regression, and classification and require little human intervention in most cases. Inspired by biological learning characteristics, and to overcome the challenging problems encountered by BP algorithms, ELM fixes the hidden-layer neurons while randomly choosing their parameters at the initialization period.

The learning efficiency and effectiveness of ELM were established in 2005, while its universal approximation capability was rigorously proved later. Correspondences with specific biological neural configurations subsequently appeared in the literature.

Unlike other randomness-based training methods and network models, in ELM the hidden weights can be randomly determined before the learning process, and all of the hidden neurons are independent of the training samples and of one another. Once determined, neither the hidden neurons nor the hidden weights need to be tuned, although they are important and crucial for training. Moreover, ELM has good generalization ability, as the architecture of ELM is robust enough and has enough hidden neurons for the given problems, which is not the case for conventional learning methods that are highly dependent on the data.

Because the connecting weights between the input and the hidden layer do not need to be tuned, we use a simple model with one output node as an example and give the output of the ELM as follows:

f_L(x) = \sum_{i=1}^{L} \beta_i h_i(x) = h(x)\beta,   (1)

in which \beta = [\beta_1, \ldots, \beta_L]^T denotes the output weights connecting the hidden layer and the output layer, and h(x) = [h_1(x), \ldots, h_L(x)] is the hidden-layer output with respect to input x; h(x) is a feature mapping. In fact, if the input is in d-dimensional space, h(x) maps it to the L-dimensional hidden-layer feature space. For binary classification applications using ELM, the output function is

f_L(x) = \mathrm{sign}[h(x)\beta].   (2)

ELM often leads to a smaller training error with a smaller norm of weights compared with traditional learning algorithms. According to Bartlett's theory, smaller weights make a great contribution to better generalization performance of neural networks with similar training errors. What makes ELM particularly noteworthy is that the values of the input weights need not be adjusted during training: they are randomly chosen at the beginning of the training procedure and stay fixed, while the weights of the output layer are calculated by searching for the least squares solution of the following objective function.

ELM minimizes both the training error and the norm of the output weights:

Minimize: \|H\beta - T\|^2 and \|\beta\|,   (3)

where H is the output matrix of the hidden layer. The main purpose of ELM is to minimize the training error as well as the norm of the connecting output-layer weights. According to the theory of linear systems, the weights between the hidden layer and the output layer are calculated using the following equation [17]:

\beta = H^\dagger T,   (4)

where H^\dagger is the Moore-Penrose pseudoinverse of matrix H and T = [t_1, t_2, \ldots, t_n]^T.

The ELM training algorithm can be summarized as follows [17]:

(1) Set the hidden node parameters randomly, for example, input weights a_i and biases b_i for the hidden nodes i = 1, 2, \ldots, L.

(2) Calculate the hidden-layer output matrix H from the input and the weights as well as the biases.

(3) Obtain the output weight matrix using (4).

The orthogonalization method, the iterative method, singular value decomposition (SVD), and so forth can be used to evaluate the Moore-Penrose pseudoinverse of a matrix [24]. By the orthogonal projection method, we can obtain

H^\dagger = (H^T H)^{-1} H^T when H^T H is nonsingular, and H^\dagger = H^T (H H^T)^{-1} when H H^T is nonsingular. In light of ridge regression theory, a positive value can be added to the diagonal of H^T H or H H^T to make the resultant solution more stable and give better generalization performance.
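Steps (1)–(3), together with the ridge-stabilized pseudoinverse just described, amount to only a few lines of linear algebra. The sketch below is an illustration under stated assumptions (sigmoid activation, row-wise samples, hypothetical weight scale and regularization value), not the authors' implementation:

```python
import numpy as np

def elm_train(X, T, L, reg=1e-6, rng=np.random.default_rng(0)):
    """Train a sigmoid SLFN with ELM. X: (N, D) inputs; T: (N,) targets; L hidden nodes."""
    D = X.shape[1]
    A = rng.standard_normal((D, L)) * 4.0    # step 1: random input weights a_i
    b = rng.standard_normal(L)               # step 1: random biases b_i
    H = 1.0 / (1.0 + np.exp(-(X @ A + b)))   # step 2: hidden-layer output matrix H
    # step 3: beta = H^dagger T, computed via ridge-stabilized normal equations,
    # i.e. (H^T H + reg*I)^-1 H^T T, as suggested by ridge regression theory
    beta = np.linalg.solve(H.T @ H + reg * np.eye(L), H.T @ T)
    return A, b, beta

def elm_predict(X, A, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ A + b)))
    return H @ beta
```

Note that no iterative weight tuning occurs: the only "training" is a single linear solve, which is why ELM is so fast.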

According to ELM theory, many feature mapping functions h(x) may be designed, which gives ELM the ability to approximate any continuous target function. The actual activation functions of human brain systems seem to be more like nonlinear piecewise continuous stimulation, though the actual functions remain unknown. In fact, the ELM model is based on two major theorems: universal approximation and classification capability.

Theorem 1 (universal approximation theorem, see [16, 29]). Given any bounded nonconstant piecewise continuous function as the activation function in hidden neurons, if, by tuning parameters of the hidden neuron activation function, SLFNs can approximate any target continuous function, then for any continuous target function f(x) and any randomly generated function sequence \{h_i(x)\}_{i=1}^{L}, \lim_{L \to \infty} \| \sum_{i=1}^{L} \beta_i h_i(x) - f(x) \| = 0 holds with probability one, given appropriate output weights \beta.

Based on nonlinear piecewise continuous activation functions and their linear combinations, ELM can approximate any continuous mapping and divide arbitrary disjoint regions of any shape, provided there are enough randomly generated hidden neurons. In particular, the theoretical basis for the classification capability of ELM is as follows.

Theorem 2 (classification capability theorem, see [17]). Given feature mapping h(x), if h(x) is dense in C(R^d), or in C(M) where M is a compact set of R^d, then ELM with random hidden-layer mapping h(x) can separate arbitrary disjoint regions of any shape in R^d or in M.

This theorem implies that ELM with bounded nonconstant piecewise continuous functions has universal approximation capability. While it does not require updating the weights of the hidden neurons in the training process, it is worth mentioning that ELM, with its randomly generated hidden weights that need no tuning, is more like the activation of the human brain.

2.2.2. Upper-Layer-Solution-Aware (USA) Algorithm. Model X = [x_1, \ldots, x_i, \ldots, x_N] is given as the set of input vectors, and each vector is denoted by x_i = [x_{1i}, \ldots, x_{ji}, \ldots, x_{Di}], in which D is the dimension of the input vector and N is the total number of training samples. Define L as the number of hidden units and C as the dimension of the output vector. y_i = U^T h_i is the output of the SHLNN, where h_i = \sigma(W^T x_i) is the hidden-layer output, U is the L \times C weight matrix at the upper layer, W is the D \times L weight matrix at the lower layer, and \sigma(\cdot) is the kernel function.


Note that if x_i and h_i are augmented with ones, the bias terms are implicitly represented in the above formulations.

T = [t_1, \ldots, t_i, \ldots, t_N] is given as the set of target vectors, each target is denoted by t_i = [t_{1i}, \ldots, t_{ji}, \ldots, t_{Ci}]^T, and the parameters U and W are used to minimize the square error

E = \|Y - T\|^2 = \mathrm{Tr}[(Y - T)(Y - T)^T],   (5)

where Y = [y_1, \ldots, y_i, \ldots, y_N]. Note that the hidden-layer values H = [h_1, \ldots, h_i, \ldots, h_N] are also determined uniquely if the lower-layer weights W are fixed. After setting the gradient

\frac{\partial E}{\partial U} = \frac{\partial \mathrm{Tr}[(U^T H - T)(U^T H - T)^T]}{\partial U} = 2H(U^T H - T)^T   (6)

to zero, the upper-layer weights U can be determined by the closed-form solution [20]:

U = (H H^T)^{-1} H T^T.   (7)

Note that (7) defines an implicit constraint between the set of weights U and the set of weights W through the hidden-layer output H in the SHLNN. This leads to a structure that our new algorithms will use in optimizing the SHLNN.

Although solving (7) is relatively simple, regularization techniques need to be used in actual applications because the hidden-layer matrix H is sometimes ill-conditioned (i.e., H H^T is singular). The technique used in this study is based on ridge regression theory. More specifically, by adding the positive term I/\mu to the diagonal of H H^T, (7) is converted to

U = (I/\mu + H H^T)^{-1} H T^T,   (8)

where I is the identity matrix and \mu is a positive constant (regularization coefficient) that is used to control the degree of regularization. The final solution (8) actually minimizes \|U^T H - T\|^2 + (1/\mu)\|U\|^2, in which (1/\mu)\|U\|^2 is an L_2 regularization term; the factor 1/\mu represents the relative importance of the complexity-penalty term \|U\|^2 with respect to the empirical risk \|U^T H - T\|^2. When \mu is made infinitely large, the model tends to an unconstrained optimization problem, and the solution is completely determined by the training samples. When \mu approaches zero, the training samples are treated as unreliable, forcing the weights U to sufficiently small values. In real applications, regularization parameter \mu is set to a value somewhere between these two cases. A reasonable regularization parameter can efficiently affect both the learning and the prediction performance (generalization ability). Solution (8) shows better stability and better generalization performance than (7). Whenever the pseudoinverse is involved, solution (8) is used throughout the paper.

The principle of this algorithm is very simple. Once the lower-layer weights are determined, the upper-layer weights can be determined explicitly by the closed-form solution (8). Based on this solution, at each epoch all we need to adjust along the gradient direction are the lower-layer weights. Gradient \partial E/\partial W is obtained by considering the upper-layer weights U as being under W's effect, with the square error as the training objective function. We obtain the gradient by treating U as a function of W and plugging (7) into criterion (5):

\frac{\partial E}{\partial W} = \frac{\partial \mathrm{Tr}[(U^T H - T)(U^T H - T)^T]}{\partial W} = \frac{\partial \mathrm{Tr}[([(H H^T)^{-1} H T^T]^T H - T)([(H H^T)^{-1} H T^T]^T H - T)^T]}{\partial W} = 2X[H^T \circ (1 - H)^T \circ [H^\dagger (H T^T)(T H^\dagger) - T^T (T H^\dagger)]],   (9)

in which H^\dagger = H^T (H H^T)^{-1} is the pseudoinverse of H and \circ is the element-wise product.

In the derivation of (9), we used the fact that H H^T and (H H^T)^{-1} are symmetric. We also used the fact that

\frac{\partial \mathrm{Tr}[(H H^T)^{-1} H T^T T H^T]}{\partial H^T} = -2 H^T (H H^T)^{-1} H T^T T H^T (H H^T)^{-1} + 2 T^T T H^T (H H^T)^{-1}.   (10)

This algorithm updates W by using the gradient defined in (9):

W_{k+1} = W_k - \rho \frac{\partial E}{\partial W},   (11)

where \rho is the learning rate. The learning rate here is the updating step size widely used in solving optimization problems: it determines how far the weights are moved in the direction of the given gradient \partial E/\partial W to obtain the new weights that reduce the objective function. One simple way is to set a positive constant value for the entire training process; in this paper, we select a suitable learning rate by means of many different simulations. This algorithm uses the closed-form solution (8) to calculate U after updating W.
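The USA loop can be sketched compactly by combining gradient (9), update (11), and the closed-form upper-layer solution. The sketch assumes column-wise samples, a sigmoid kernel, a small ridge term, and hypothetical hyperparameter values; it illustrates the scheme rather than reproducing the authors' code:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def usa_train(X, T, L, epochs=50, rho=1e-3, ridge=1e-4, rng=np.random.default_rng(0)):
    """USA training of a single-hidden-layer net.

    X: (D, N) inputs, T: (C, N) targets (columns are samples).
    Returns lower-layer weights W (D, L) and upper-layer weights U (L, C)."""
    D, _ = X.shape
    W = rng.standard_normal((D, L))
    I = np.eye(L)
    for _ in range(epochs):
        H = sigmoid(W.T @ X)                             # hidden-layer outputs (L, N)
        Hd = H.T @ np.linalg.inv(H @ H.T + ridge * I)    # pseudoinverse H^dagger = H^T (H H^T)^-1
        THd = T @ Hd                                     # (C, L)
        # gradient (9): dE/dW = 2 X [H^T o (1-H)^T o (H^dagger (H T^T)(T H^dagger) - T^T (T H^dagger))]
        G = 2 * X @ (H.T * (1 - H).T * (Hd @ (H @ T.T) @ THd - T.T @ THd))
        W -= rho * G                                     # update (11): only lower-layer weights move
    H = sigmoid(W.T @ X)
    U = np.linalg.solve(H @ H.T + ridge * I, H @ T.T)    # closed-form upper weights, cf. (8)
    return W, U

def usa_predict(X, W, U):
    return U.T @ sigmoid(W.T @ X)
```

The key design point is that U never appears as a free parameter during the search: it is re-derived from W through the ridge-stabilized closed form, so the gradient step only ever moves the lower-layer weights.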

The algorithms proposed in this paper achieve significantly better classification accuracy than ELM when the same number of hidden units is used. To achieve the same classification accuracy, the algorithm requires only 1/16 of the model size, and therefore less test time is needed.

3. Input Data Selection

The four target research wells were drilled in the Bohai Bay basin, which is the largest offshore oil and gas production base in China. The oilfield has experienced many production stages since 1963, and currently many projects are carried out by China National Offshore Oil Corporation (CNOOC) and ConocoPhillips China (COPC). To effectively increase production and decrease drilling cost, drilling speed control is necessary.

Due to drilling requirements and the similarity of wells located close to one another, collecting past data and utilizing the data in a useful manner have an important impact on drilling cost reduction; this means that optimum parameters are always in effect. As a matter of fact, there are various factors that can affect ROP. Previous studies show that ROP is largely dependent on some critical parameters, which can be classified into three types: rig/bit related parameters, mud related parameters, and formation parameters. The rig/bit and mud related parameters can be manipulated, but the formation parameters have to be handled differently. Table 2 gives a brief classification of some important drilling parameters that affect ROP. To predict and optimize the ROP, both controllable and uncontrollable parameters were collected from daily drilling reports, lab investigations, well logs, and geological information.

Because it is impossible to treat all of the relevant drilling parameters in ROP prediction, it is necessary to choose essential data based on literature surveys and statistical analysis. It was concluded that the rock properties, the drilling machine parameters, and the mud properties relate to the ROP. In fact, a total of 18 drilling parameters were gathered in this study: depth, torque, rotational speed (RPM), weight on bit, flow rates, active PVT, pump pressure, hook load, differential pressure, mud weight, mud viscosity, formation drillability, formation abrasiveness, UCS, porosity, bit size, and bit type.

(1) The bit type/size can affect ROP in a given formation. Roller cone bits generally have three cones that rotate around their axes, with teeth milled out of the matrix or inserted; the teeth combine crushing and shearing to fracture the rock. Drag bits usually consist of a fixed cutter mechanism that can be cutting blades, diamond stones, or cutters. Today, the polycrystalline diamond compact (PDC) bit is the most commonly used drag bit, with PDC, diamond-impregnated, or diamond hybrid cutters mounted on the bit blades/body. Drag bits disintegrate rock primarily through shearing. The difference in the design of roller cone and drag bits, and their different rock disintegration mechanisms, requires different ROP models. Thus, the effects of the bit type are included in this model.

(2) Operational parameters such as pump pressure and differential pressure are the most important hydraulic parameters affecting ROP. Pump pressure is sometimes increased to achieve a higher penetration rate for a certain depth of advance while keeping rotation constant. In addition, the mechanical factors of weight on bit and rotary speed have linear relationships with ROP. An increase in weight on bit pushes the bit teeth or cutters further down into the formation and disintegrates more rock, resulting in faster ROP. Bit rotation can be increased to obtain a higher penetration rate while keeping the bit load constant.

(3) Formation mechanical characteristics are uncontrollable variables for controlling ROP values. In general, the decrease in penetration rate is greatly attributed to the increase of depth, because with increasing depth rock strength increases and porosity decreases. Moreover, the larger indirect drilling resistance that results from larger UCS and hardness will also limit the depth of cut for a given set of bit design and applied operating parameters. There are two main reasons for selecting rock strength as the representative of rock properties: one is that the rock strength can be calculated as a function of depth, density, and porosity; the other is that this parameter can be computed from logging data, or from triaxial compressive experiments and scratch tests in a lab if core samples are available.

(4) The aim of drilling mud is to remove the loose rock chips away from the bit face while cooling the bit. Accordingly, mud weight, mud rheology, and solid content are confirmed as important factors in ROP by some studies. ROP basically decreases with increasing mud viscosity, solid content, and mud weight.

In addition to the above parameters, some comprehensive parameters such as formation drillability and abrasiveness also need to be considered, because they can provide important information about ROP that cannot be described by conventional drilling data. More importantly, the model inputs should be easily acquired in real time from the respective measurement gauges on the rig. Using additional parameters as inputs can result in a large network size and consequently slow down running speed and efficiency. Thus, bit size, bit type, bit wear, UCS, formation drillability, formation abrasiveness, rotational speed, WOB, pump pressure, mud viscosity, and mud density were selected as the 11 inputs for neural network simulation. The data set used in this paper consists of more than 5000 measurements taken from this area at different wells and with different drilling rigs. Figure 2 illustrates three types of PDC bits used in the drilling operations in this area.

It should be noted that data can very often be nonnumeric. Because neural networks cannot be trained with nonnumeric data, it is necessary to translate nonnumeric


Table 2: ROP related drilling parameters classification.

| Rig and bit related parameters | Formation parameters | Drilling fluid properties |
| Weight on bit (WOB) | Local stresses | Mud weight |
| Torque | Hardness | Viscosity |
| Rotary speed (RPM) | Mineralogy | Filtrate loss |
| Flow rates | Porosity and permeability | Solid content |
| Pump stroke speed (SPM) | Formation abrasiveness | Gel strength |
| Pump pressure | Drillability | Mud pH |
| Hook load | Depth | Yield point |
| Bit wear | Temperature | |
| Type of the bit | Unconfined compressive strength (UCS) | |

Figure 2: Three types of PDC drilling bits.

data into numeric form. In this study, formation drillability, formation abrasiveness, bit size, and bit type are nonnumeric, and numbered classes are used to perform the translation. The formation drillability ranges between 30 and 75; the highest number represents the highest drillability, while lower drillability values correspond to hard formations such as shales. The formation abrasiveness ranges between 1 and 8, where 8 signifies the most abrasive formation. In addition, the bit type is assigned a value between 1 and 12, where 12 corresponds to the bit type with the highest average ROP. The bit wear values range between 0 and 9, where 9 indicates that a tooth is completely worn out. Table 3 shows the recorded parameters and their ranges.
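The class-numbering translation described above can be expressed as a small helper. The function name and the class labels in the usage line are hypothetical; the paper only specifies the numeric ranges, not the underlying labels:

```python
def encode_classes(values, ranked_classes):
    """Translate nonnumeric classes into 1-based integer codes.

    `ranked_classes` lists the classes in ascending order of the quantity the
    code should reflect (e.g. bit types ranked by average ROP, codes 1..12)."""
    codes = {name: i + 1 for i, name in enumerate(ranked_classes)}
    return [codes[v] for v in values]

# Hypothetical bit-type labels, ranked so the highest-average-ROP type gets code 12
bit_ranking = [f"type_{k}" for k in range(1, 13)]
print(encode_classes(["type_12", "type_3"], bit_ranking))  # [12, 3]
```

The same helper covers the other nonnumeric inputs (formation drillability classes, abrasiveness classes, bit size groups) by supplying the appropriate ranking.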

4. Experimental Design

4.1. Model Architecture. Determining the optimal size of the neural network model is complex. An overly complex network tends to memorize the data without learning to generalize to new situations; it concentrates too much on the data presented for learning and tends toward modeling the noise. The total number of ROP records in the target wells sums to 5500. The training subset constitutes 75% of the total data, and the testing subset includes the remaining 25%. As mentioned above, the inputs of the neural networks include 10

Table 3: Summary of results for the recorded parameters.

| Parameters | Ranges |
| Bit size/inch | 5.813~8.5 |
| Bit type | 1–12 |
| Bit wear | 0–9 |
| UCS/psi | 2412~21487 |
| Formation drillability | 30–75 |
| Formation abrasiveness | 1–8 |
| Rotational speed/rpm | 84~126 |
| WOB/10^3 lbs | 24~69 |
| Drilling mud type | 1~3 |
| Mud viscosity/cp | 1~70 |
| Pump pressure/psi | 1651–3765 |
| ROP | 266~1205 |

parameters, while the ROP is the only element in the output layer. Figure 3 shows the developed neural network architecture.
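A random 75/25 subdivision such as the one used for the 5500 records can be sketched as follows (the function name and seed are illustrative, not from the paper):

```python
import numpy as np

def split_dataset(X, y, train_frac=0.75, rng=np.random.default_rng(42)):
    """Randomly split row-wise samples into training and testing subsets."""
    idx = rng.permutation(len(y))         # shuffle sample indices
    n_train = int(train_frac * len(y))    # 75% of the records go to training
    train, test = idx[:n_train], idx[n_train:]
    return X[train], y[train], X[test], y[test]
```

Random subdivision of this kind is also why repeated runs give slightly different training/testing performance, as noted in Section 5.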

Figure 3: Schematic of the architecture of the ELM network model. The input layer has 10 nodes (bit size, bit type, UCS, formation drillability, formation abrasiveness, pump pressure, rotational speed, WOB, mud type, and mud viscosity), and ROP is the only node at the output layer.

4.2. Model Performance Indicators. The performances of the neural network models are assessed by means of the determination coefficient (R^2), mean absolute error (MAE), root mean square error (RMSE), and variance accounted for (VAF) performance indicators.

R^2 represents the proportion of the overall variance explained by the model, which can be calculated as

R^2 = 1 - \frac{\sum_{i=1}^{n} (f(x_i) - y_i)^2}{\sum_{i=1}^{n} f(x_i)^2 - \left(\sum_{i=1}^{n} y_i\right)^2 / n}.   (12)

These performance indicators give a sense of how good the performance of a predictive model is relative to the actual values.

MSE can be described as

\mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} (f(x_i) - y_i)^2.   (13)

The RMSE is conventionally used as an error function for quality monitoring of the model; model performance increases as RMSE decreases. RMSE can be calculated by the following equation:

\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (f(x_i) - y_i)^2}.   (14)

VAF is often used to evaluate the correctness of a model by comparing the measured values with the estimated output of the model. VAF is computed by the following equation:

\mathrm{VAF} = \left(1 - \frac{\mathrm{var}[f(x_i) - y_i]}{\mathrm{var}[f(x_i)]}\right) \times 100,   (15)

where, for (12)–(15), y_i is the measured data, f(x_i) denotes the predicted data, x_i, \ldots, x_n are the input parameters, and n is the total number of data points.
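Equations (12)–(15) translate directly into code. The sketch below follows the formulas as printed, including the paper's particular R^2 form; the function name is illustrative:

```python
import numpy as np

def rop_metrics(y_pred, y_true):
    """Indicators (12)-(15): R^2, MSE, RMSE, MAE, and VAF (%)."""
    resid = y_pred - y_true
    n = len(y_true)
    r2 = 1 - np.sum(resid ** 2) / (np.sum(y_pred ** 2) - np.sum(y_true) ** 2 / n)  # (12)
    mse = np.mean(resid ** 2)                                                      # (13)
    rmse = np.sqrt(mse)                                                            # (14)
    mae = np.mean(np.abs(resid))
    vaf = (1 - np.var(resid) / np.var(y_pred)) * 100                               # (15)
    return r2, mse, rmse, mae, vaf
```

A perfect prediction yields RMSE = MAE = 0, R^2 = 1, and VAF = 100, matching the interpretation of the indicators given above.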

4.3. Model Parameters Selection. For the ANN, ELM, and USA models, the kernel function and the number of hidden nodes are critical parameters for neural network accuracy. The accuracy with different numbers of hidden nodes was compared in terms of the RMSE performance indicator: once the kernel function is determined, the hidden node number is increased incrementally until 100 is reached. Figure 4(a) illustrates the accuracy comparison with different node numbers for the ANN, ELM, and USA models. For the ELM model, the MSE between prediction results and measured values decreases as the node number of the hidden layer gradually increases beyond 65 nodes. For the USA model, the MSE between prediction results and measured values also decreases as the node number of the hidden layer gradually increases, except for individual volatile points. For the ANN model, the MSE between prediction results and measured values shows a fluctuating trend as the node number of the hidden layer gradually increases, which indicates that the choice of hidden nodes can greatly affect final accuracy.

Four types of kernels were tested using the training data set for the ELM model: the sigmoid function, radial-basis function, hardlim function, and triangular-basis function. Figure 4(b) shows the accuracy comparison with different kernel functions for the ELM model. Among the four kernels, the sigmoid-based model reaches the threshold first, when the node number is set to 40, while an overfitting problem appears as the node number exceeds 60. The hardlim-based, triangular-based, and radial-basis based models display a similar trend; however, it seems that the node number might need to exceed 60 before the minimum errors approach the threshold for the hardlim-based model. For the triangular-based model, the RMSE reaches its lowest point of 3.11 when the node number is set to 85, while the RMSE reaches its lowest point of 3.12 when the node number is set to 95 for the radial-basis based model. Figure 4(c) shows the accuracy comparison with different kernel functions for the USA model. The radial-basis based model reaches the threshold first, when the node number is set to 65. For the hardlim-based model, it seems that the node number might exceed 60 when the minimum

Figure 4: (a) RMSE change with increasing number of hidden nodes for the ANN, ELM, and USA models; (b) comparison of RMSE with different kernel functions (sigmoid, hardlim, radial basis, triangular basis) for the ELM network model; (c) comparison of RMSE with different kernel functions for the USA network model.

errors are close to the threshold, which is similar to the ELM model. The triangular-based and sigmoid-based models display a similar trend: the RMSE reaches its lowest point of 3.11 when the node number is set to 80 for the sigmoid model, while the RMSE reaches its lowest point of 3.48 when the node number is set to 94 for the triangular-basis based model.

For comparison, the classical ANN model was also used in this paper, with the sigmoid kernel. The applied ANN architecture is a feedforward neural network, a structure in which the information propagates in one direction from input to output. A back-propagation algorithm with the Levenberg-Marquardt training function was used for training; this algorithm can approximate any nonlinear continuous function to arbitrary accuracy. Based on the experimental tests, the ANN network model obtains its best prediction accuracy when the node number of the hidden layer is set to 36.

5. Results and Discussion

Once the parameters for the neural network models are finally settled, the next step is to validate the models using a testing


Table 4: The calculated performance indicators for the ANN, ELM, and USA models.

| Data set | Performance indicator | ANN | ELM | USA |
| Training | RMSE | 1.51 | 0.95 | 0.78 |
| Training | MAE | 7.5 | 4.2 | 2.1 |
| Training | VAF | 96.88 | 100 | 100 |
| Training | R^2 | 0.91 | 0.93 | 0.98 |
| Testing | RMSE | 3.56 | 3.11 | 2.76 |
| Testing | MAE | 12.7 | 9.7 | 7.65 |
| Testing | VAF | 91.88 | 92.82 | 94.52 |
| Testing | R^2 | 0.90 | 0.92 | 0.94 |

data set. In this section, the prediction performance of the ANN, ELM, and USA models is compared, and the training time is recorded. The model accuracy can be evaluated by comparing the calculated RMSE, VAF, MAE, and R² performance indicators for the training and testing data sets presented in Table 3. Due to the random subdivision process, the performances of the models for the training and validation data sets are slightly different. As seen from Table 4, the prediction performance in terms of the predefined indices is better for the testing set than for the training set. Overall, R² values for the developed models were more than 0.95 at the training phase, while in the case of the conventional ANN at the testing phase, R² was found to be 0.9. The high performance of the models for the testing set can be considered an indication of their good generalization capabilities. An R² value greater than 0.9 indicates a very satisfactory model performance, while an R² value between 0.8 and 0.9 indicates an acceptable model. Therefore, all of the developed neural networks are competent for ROP prediction. An R² of 0.99 for both the ELM and USA models in the training phase indicates higher accuracy compared to the ANN model (R² of 0.88). Moreover, it can also be observed that the highest VAF of 94.52 and the lowest RMSE of 2.76 and MAE of 7.65 belong to the USA model, indicating that the USA-based model gives better prediction performance than the other models. In addition, the ELM model can also produce acceptable results, yielding slightly smaller residual errors than the ANN model. In other words, the deviation from the observed penetration rate values predicted by ELM is less than the deviation of the ANN model's predictions. Figure 5 shows the regression analysis of the predicted and measured ROP by the different models in both the training and testing phases. According to the running speed with the increase of hidden node numbers in Figure 6, obvious differences are observed, showing that the time cost of the ELM model is lower than that of the other two models. To visualize the quality of the prediction, the predicted penetration rates and residual errors of the different models are compared with the measured ones for the overall data set in Figures 7 and 8, respectively. The low deviation obtained by the three models also proves that ANN, ELM, and USA are competent for ROP prediction. Thus, the ELM and USA methods can both achieve high accuracy and maintain high running speeds. This study shows that ELM technology is a promising tool for ROP prediction, and this work can be incorporated into a software system that can be used in drilling optimization guidance.
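For reference, the four indicators used above can be computed as follows. This is a hypothetical helper (the function name is ours, and the VAF and R² formulas follow their common definitions, which the paper does not spell out):

```python
import numpy as np

def rop_metrics(y_measured, y_predicted):
    """Compute RMSE, MAE, VAF (%), and R^2 for ROP predictions.
    Formulas follow the common definitions of these indicators."""
    y = np.asarray(y_measured, dtype=float)
    yhat = np.asarray(y_predicted, dtype=float)
    resid = y - yhat
    rmse = np.sqrt(np.mean(resid ** 2))                   # root-mean-square error
    mae = np.mean(np.abs(resid))                          # mean absolute error
    vaf = 100.0 * (1.0 - np.var(resid) / np.var(y))       # variance accounted for
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)
    return {"RMSE": rmse, "MAE": mae, "VAF": vaf, "R2": r2}
```

A perfect prediction yields RMSE = MAE = 0, VAF = 100, and R² = 1, which is the baseline against which the tabulated values can be read.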

6. Conclusions

(1) This study investigates the feasibility of ELM and its variation, the USA model, for predicting the ROP of offshore drilling operations using recorded data. The ELM and USA models can successfully predict ROP in real time, and this prediction can be used as a reference range for drilling engineers when identifying drilling problems and making decisions.

(2) Choosing the relevant parameters is typically difficult, and previous literature informed the input selection. The input rock material properties are the uniaxial compressive strengths of the different rock types, formation drillability, and abrasiveness, which can be calculated from lab cores and well logging data. Moreover, the pump pressure and differential pressure are selected as important hydraulic parameters that affect ROP, while RPM and WOB are treated as the equipment operational parameters. In addition to these parameters, drilling mud properties such as mud viscosity and bit-related parameters are also involved in developing the neural network models. However, for systems with data values beyond the range of this study, or for a new bit, it is necessary to develop new neural networks. Conversely, a wider range of inputs can enable the developed neural network models to have wider applications in select cases.

(3) The results of the USA model were compared with those of the classical ANN and ELM models. The performance of the models was checked by using the R², MAE, RMSE, and VAF performance indicators. According to these indicators, all of the neural networks are competent for ROP prediction, but the prediction performance of the USA model was found to be better than that of the other two models with respect to accuracy, while the ELM model had the lowest running time. Compared to conventional ANN models, the results of this work show the potential of fast and flexible neural network approaches to model the ROP with high accuracy while maintaining running speed. Therefore, drilling engineers can make better choices according to accuracy and computational demand in practical use.

Mathematical Problems in Engineering 11

[Figure 5: Regression analysis of the predicted and measured ROP (data points plotted against the 1:1 line; both axes 0–150 ft/h). (a) ANN testing and training, R² = 0.9077 and R² = 0.916; (b) ELM testing and training, R² = 0.9233 and R² = 0.9319; (c) USA testing and training, R² = 0.9422 and R² = 0.9829.]

[Figure 6: Time cost (s) with the change of the number of hidden layers (0–110) for the ANN, ELM, and USA models.]

[Figure 7: The predicted and measured ROP (ft/h) along the depth (8600–8800 ft) with the developed models.]

[Figure 8: Residual errors of the predicted ROP by the developed models.]

Competing Interests

The authors declare that there are no competing interests regarding the publication of this paper.

Acknowledgments

This work was financially supported by the National Key Basic Research Program of China (2015CB251206), the Fundamental Research Funds for the Central Universities (nos. 15CX05053A, 15CX08011A, and 15CX02009A), the Qingdao Science and Technology Project (no. 15-9-1-55-jch), the National Natural Science Foundation of China (no. 61305075), the Natural Science Foundation of Shandong Province (no. ZR2013FQ004), and the Specialized Research Fund for the Doctoral Program of Higher Education of China (no. 20130133120014).

References

[1] V. P. Perrin, M. G. Wilmot, and W. L. Alexander, "Drilling index: a new approach to bit performance evaluation," in Proceedings of the SPE/IADC Drilling Conference, Paper SPE 37595, pp. 199–205, Amsterdam, The Netherlands, March 1997.

[2] M. Bataee and S. Mohseni, "Application of artificial intelligent systems in ROP optimization: a case study in Shadegan oil field," in Proceedings of the SPE Middle East Unconventional Gas Conference and Exhibition (UGAS '11), pp. 13–22, February 2011.

[3] M. H. Bahari, A. Bahari, F. N. Moharrami, and M. B. N. Sistani, "Determining Bourgoyne and Young model coefficients using genetic algorithm to predict drilling rate," Journal of Applied Sciences, vol. 8, no. 17, pp. 3050–3054, 2008.

[4] D. F. Howarth, W. R. Adamson, and J. R. Berndt, "Correlation of model tunnel boring and drilling machine performances with rock properties," International Journal of Rock Mechanics and Mining Sciences, vol. 23, no. 2, pp. 171–175, 1986.

[5] S. Kahraman, "Correlation of TBM and drilling machine performances with rock brittleness," Engineering Geology, vol. 65, no. 4, pp. 269–283, 2002.

[6] S. Akin and C. Karpuz, "Estimating drilling parameters for diamond bit drilling operations using artificial neural networks," International Journal of Geomechanics, vol. 8, no. 1, pp. 68–73, 2008.

[7] V. Uboldi, L. Civolani, and F. Zausa, "Rock strength measurements on cuttings as input data for optimizing drill bit selection," in Proceedings of the SPE Annual Technical Conference and Exhibition, Paper SPE 56441, pp. 35–43, Houston, Tex, USA, October 1999.

[8] M. J. Fear, "How to improve rate of penetration in field operations," in Proceedings of the SPE/IADC Drilling Conference, Paper SPE 55050, New Orleans, La, USA, 1998.

[9] R. Rommetveit, K. S. Bjørkevoll, G. W. Halsey et al., "Drilltronics: an integrated system for real-time optimization of the drilling process," in Proceedings of the IADC/SPE Drilling Conference, vol. 87124, pp. 275–282, SPE, Dallas, Tex, USA, March 2004.

[10] A. H. Paiaman, M. K. Ghassem Al-Askari, B. Salmani, B. D. Al-Anazi, and M. Masihi, "Effect of drilling fluid properties on rate of penetration," NAFTA, vol. 60, no. 3, pp. 129–134, 2009.

[11] A. Bahari and A. B. Seyed, "Trust region approach to find constants of Bourgoyne and Young penetration rate model in Khangiran Iranian gas field," in Proceedings of the SPE Latin American and Caribbean Petroleum Engineering Conference, Paper SPE 107520, Buenos Aires, Argentina, 2007.

[12] H. I. Bilgesu, L. T. Tetrick, U. Altmis, Sh. Mohaghegh, and S. Ameri, "A new approach for the prediction of penetration (ROP) values," in Proceedings of the SPE Eastern Regional Meeting, Paper SPE 39231, Lexington, Ky, USA, 1997.

[13] R. Jahanbakhshi, R. Keshavarzi, and A. Jafarnezhad, "Real-time prediction of rate of penetration during drilling operation in oil and gas wells," in Proceedings of the 46th US Rock Mechanics/Geomechanics Symposium, pp. 2390–2396, Chicago, Ill, USA, June 2012.

[14] S. Yilmaz, C. Demircioglu, and S. Akin, "Application of artificial neural networks to optimum bit selection," Computers and Geosciences, vol. 28, no. 2, pp. 261–269, 2002.

[15] H. Rahimzadeh, M. Mostofi, A. Hashemi, and K. Salahshoor, "Comparison of the penetration rate models using field data for one of the gas fields in Persian Gulf area," in Proceedings of the International Oil and Gas Conference and Exhibition, Paper SPE 131253, Beijing, China, 2010.

[16] G.-B. Huang, D. H. Wang, and Y. Lan, "Extreme learning machines: a survey," International Journal of Machine Learning and Cybernetics, vol. 2, no. 2, pp. 107–122, 2011.

[17] G.-B. Huang, H. Zhou, X. Ding, and R. Zhang, "Extreme learning machine for regression and multiclass classification," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 2, pp. 513–529, 2012.

[18] A. Mozaffari and N. L. Azad, "Optimally pruned extreme learning machine with ensemble of regularization techniques and negative correlation penalty applied to automotive engine cold-start hydrocarbon emission identification," Neurocomputing, vol. 131, pp. 143–156, 2014.

[19] R. Moreno, F. Corona, A. Lendasse, M. Grana, and L. S. Galvao, "Extreme learning machines for soybean classification in remote sensing hyperspectral images," Neurocomputing, vol. 128, pp. 207–216, 2014.

[20] D. Yu and D. Li, "Efficient and effective algorithms for training single-hidden-layer neural networks," Pattern Recognition Letters, vol. 33, no. 5, pp. 554–558, 2012.

[21] C. Gokceoglu and K. Zorlu, "A fuzzy model to predict the uniaxial compressive strength and the modulus of elasticity of a problematic rock," Engineering Applications of Artificial Intelligence, vol. 17, no. 1, pp. 61–72, 2004.

[22] J. Zhang, J. Li, Y. Hu, and J. Zhou, "The identification method of igneous rock lithology based on data mining technology," Applied Mechanics and Materials, vols. 466-467, pp. 65–69, 2012.

[23] J. Cao, J. Yang, Y. Wang, D. Wang, and Y. Shi, "Extreme learning machine for reservoir parameter estimation in heterogeneous sandstone reservoir," Mathematical Problems in Engineering, vol. 2015, Article ID 287816, 10 pages, 2015.

[24] E. Khamehchi, I. R. Kivi, and M. Akbari, "A novel approach to sand production prediction using artificial intelligence," Journal of Petroleum Science and Engineering, vol. 123, pp. 147–154, 2014.

[25] K. Amar and A. Ibrahim, "Rate of penetration prediction and optimization using advances in artificial neural networks: a comparative study," in Proceedings of the 4th International Joint Conference on Computational Intelligence (IJCCI '12), pp. 647–652, Barcelona, Spain, October 2012.

[26] D. Moran, H. Ibrahim, A. Purwanto, and J. Osmond, "Sophisticated ROP prediction technology based on neural network delivers accurate results," in Proceedings of the IADC/SPE Asia Pacific Drilling Technology Conference and Exhibition, SPE 132010, pp. 100–108, Ho Chi Minh City, Vietnam, November 2010.

[27] M. Monazami, A. Hashemi, and M. Shahbazian, "Drilling rate of penetration prediction using artificial neural network: a case study of one of Iranian southern oil fields," Journal of Oil and Gas Business, no. 6, pp. 21–31, 2012.

[28] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: theory and applications," Neurocomputing, vol. 70, no. 1–3, pp. 489–501, 2006.

[29] X. Liu, Y. Chang, H. Lu et al., "Optimizing fracture stimulation in low-permeability oil reservoirs in the Ordos Basin," in Proceedings of the SPE Asia Pacific Oil and Gas Conference and Exhibition, Paper SPE 158562, Perth, Australia, 2012.


time compared to traditional ANN models. They thus provide a more reliable and faster real-time tool for predicting ROP in new wells.

2. Artificial Neural Networks and Extreme Learning Machines

2.1. Methodology of Artificial Neural Networks. Artificial neural networks (ANNs) are efficient models for approximating unknown nonlinear functions, seeking to simulate human brain behavior by processing data on a trial-and-error basis. Because of their powerful approximation and generalization abilities, ANNs have been widely used in petroleum engineering, with applications to bit selection, reservoir characterization, EOR (enhanced oil recovery), hydraulic fracturing candidate selection, and so forth [21–24].

The network structures of ANNs are made up of a number of neurons, which are distributed in layers according to their different functions. Generally, a complete neural network consists of three different types of layers, namely, an input layer, one or more hidden layers, and an output layer, in which each layer includes a preset number of neurons. It has been rigorously proved that an ANN can approximate any continuous function with arbitrary precision; an ANN is thus a universal approximator. The direction of information transmission in feedforward neural networks is from the input neurons, through the activation of the hidden neurons, to the outputs. For supervised learning, the connecting weights of the layers are updated in the training procedure by minimizing the objective function between the networks' actual outputs and the desired values. There are many different ways to train an ANN model; the back-propagation (BP) algorithm is the most popular one. It is a typical supervised learning strategy in the machine learning field. In general, the BP method uses the following steps. For a given input sample, the actual output is obtained by transferring information through the layers one by one. If the error between the produced and the desired outputs is acceptable, training stops. If the error is not acceptable, the weights on the interconnections that go into the output layer are changed. Next, an error value is calculated for all of the neurons in the hidden layer just below the output layer, and the weights are adjusted for all interconnections that go into that hidden layer. The process continues until the last layer of weights has been adjusted.
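The loop described above can be sketched as follows, for a toy single-hidden-layer network with a sigmoid hidden layer and squared error; all sizes, rates, and epoch counts here are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bp_train(X, y, hidden=8, lr=0.3, epochs=8000, seed=0):
    """Toy back-propagation for one hidden layer (illustrative sizes)."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))  # input -> hidden weights
    b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=(hidden, 1))           # hidden -> output weights
    b2 = np.zeros(1)
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)        # forward pass through the hidden layer
        out = H @ W2 + b2               # actual output of the network
        err = out - y                   # error against the desired output
        # adjust the weights going into the output layer first ...
        gW2 = H.T @ err / len(X)
        gb2 = err.mean(axis=0)
        # ... then propagate the error back to the hidden-layer weights
        delta = (err @ W2.T) * H * (1.0 - H)
        gW1 = X.T @ delta / len(X)
        gb1 = delta.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return W1, b1, W2, b2
```

The two-stage weight adjustment inside the loop (output layer first, then the hidden layer) is exactly the ordering described in the paragraph above.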

Some essential shortcomings of the BP method are continually encountered in real applications, for example, slow convergence speed and proneness to being stuck in local minima. Thus, the standard BP algorithm sometimes shows poor performance in practical applications, which mainly stems from employing the standard gradient descent method to adjust the weights. As a result, many different variants have emerged in recent decades. The commonly used variants, based on different optimizing strategies, include Levenberg-Marquardt (LM); the conjugate gradient method with Powell-Beale, Fletcher-Reeves, and Polak-Ribiere updates; the gradient descent method with momentum term, penalty term, and adaptive learning rate; Bayesian regularization; and scaled conjugate gradient [28].

[Figure 1: Structure and schematic diagram of the ELM method used in this paper: inputs x_j feed random hidden nodes (a_i, b_i), i = 1, ..., L, whose outputs are combined subject to problem-based optimization constraints.]

2.2. Methodology of Extreme Learning Machines

2.2.1. Conventional Extreme Learning Machines. ELM was originally proposed to train a single hidden-layer feedforward neural network (SLFN) and was later extended to more generalized SLFNs, where the hidden layer may not be made of homogeneous neurons. Some obstacles that back-propagation neural networks face, such as slow training speed, sensitivity to the selection of parameters, and the high possibility of being trapped in local minima, can be avoided using ELM. Figure 1 illustrates the structure of the ELM method.

Neural networks are used universally for feature extraction, clustering, regression, and classification, and they require little human intervention in most cases. Inspired by biological learning characteristics, and to overcome the challenging problems encountered by BP algorithms, ELM fixes the hidden-layer neurons while randomly choosing their parameters at the initialization stage.

The learning efficiency and effectiveness of ELM were established in 2005, while its universal approximation capability was rigorously proved later. The correspondence between ELM and specific biological neural configurations subsequently appeared in the literature.

Unlike in other randomness-based training methods and network models, in ELM the hidden weights can be randomly determined before the learning process, and all of the hidden neurons are independent of the training samples and of one another. Once determined, both the hidden neurons and the hidden weights need not be tuned, although they are important and crucial for training. Moreover, ELM has good generalization ability, as the architecture of ELM is robust enough and has enough hidden neurons for the given problems, which is not the case for conventional learning methods that are highly dependent on the data.

Because the connecting weights between the input and the hidden layer do not need to be tuned, we use a simple model with one output node as an example and give the output of the ELM as follows:

\[ f_L(x) = \sum_{i=1}^{L} \beta_i h_i(x) = h(x)\beta, \tag{1} \]

in which $\beta = [\beta_1, \ldots, \beta_L]^T$ denotes the output weights connecting the hidden layer and the output layer, and $h(x) = [h_1(x), \ldots, h_L(x)]$ is the hidden-layer output with respect to input $x$; $h(x)$ is a feature mapping. In fact, if the input is in $d$-dimensional space, $h(x)$ maps it to the $L$-dimensional hidden-layer feature space. For binary classification applications using ELM, the output function is

\[ f_L(x) = \operatorname{sign}[h(x)\beta]. \tag{2} \]

ELM often leads to a smaller training error with a smaller norm of weights compared with traditional learning algorithms. According to Bartlett's theory, smaller weights contribute greatly to the better generalization performance of neural networks with similar training errors. What makes ELM particularly noteworthy is that the values of the input weights need not be adjusted during training: they are randomly chosen at the beginning of the training procedure and stay fixed, while the weights of the output layer are calculated by searching for the least-squares solution of the following objective function.

ELM minimizes both the training error and the norm of the output weights:

\[ \text{Minimize: } \|H\beta - T\|^2 \text{ and } \|\beta\|, \tag{3} \]

where $H$ is the output matrix of the hidden layer. The main purpose of ELM is to minimize the training error as well as the norm of the connecting output-layer weights. According to the theory of linear systems, the weights between the hidden layer and the output layer are calculated using the following equation [17]:

\[ \beta = H^{\dagger} T, \tag{4} \]

where $H^{\dagger}$ is the Moore–Penrose pseudoinverse of matrix $H$ and $T = [t_1, t_2, \ldots, t_n]^T$.

We can summarize the ELM training algorithm as follows [17]:

(1) Set the hidden node parameters randomly, for example, the input weights $a_i$ and biases $b_i$ for the given number of hidden nodes, $i = 1, 2, \ldots, L$.

(2) Calculate the hidden-layer output matrix $H$ from the input together with the weights and biases.

(3) Obtain the output weight matrix using (4).
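A minimal sketch of steps (1)–(3) with NumPy follows; the sigmoid activation, the hidden-node count, and the small ridge term added for numerical stability are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def elm_train(X, T, L=50, ridge=1e-8, seed=0):
    """Sketch of ELM training: X is (N, d) inputs, T is (N, c) targets,
    L is the number of hidden nodes (all values here are illustrative)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    A = rng.normal(size=(d, L))                 # step (1): random input weights a_i
    b = rng.normal(size=L)                      # step (1): random biases b_i
    H = 1.0 / (1.0 + np.exp(-(X @ A + b)))      # step (2): hidden-layer output matrix H
    # step (3): beta = H^dagger T, computed via the normal equations with a
    # small positive value added to the diagonal of H^T H for stability
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(L), H.T @ T)
    return A, b, beta

def elm_predict(X, A, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ A + b)))
    return H @ beta
```

Only beta is learned; A and b stay fixed after random initialization, which is what reduces ELM training to a single linear solve.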

The orthogonalization method, the iterative method, singular value decomposition (SVD), and so forth can be used to evaluate the Moore–Penrose pseudoinverse of a matrix [24]. By the orthogonal projection method, we can obtain $H^{\dagger} = (H^T H)^{-1} H^T$ when $H^T H$ is nonsingular and $H^{\dagger} = H^T (H H^T)^{-1}$ when $H H^T$ is nonsingular. In light of ridge regression theory, we add a positive value to the diagonal of $H^T H$ or $H H^T$ to make the resultant solution more stable and give better generalization performance.

According to ELM theory, many feature mapping functions $h(x)$ may be designed, which gives ELM the ability to approximate any continuous target function. The actual activation functions of human brain systems seem to be more like nonlinear piecewise continuous stimulations, though the actual functions remain unknown. In fact, the ELM model is based on two major theorems: universal approximation and classification ability.

Theorem 1 (universal approximation theorem, see [16, 29]). Given any bounded nonconstant piecewise continuous function as the activation function in hidden neurons, if, by tuning the parameters of the hidden neuron activation function, SLFNs can approximate any target continuous function, then for any continuous target function $f(x)$ and any randomly generated function sequence $\{h_i(x)\}_{i=1}^{L}$, $\lim_{L \to \infty} \left\| \sum_{i=1}^{L} \beta_i h_i(x) - f(x) \right\| = 0$ holds with probability one, given the appropriate output weights $\beta$.

Through the nonlinear activation function and piecewise linear combination, ELM can approximate any continuous mapping and divide arbitrary disjoint regions of any shapes with enough randomly generated hidden neurons, based on the nonlinear piecewise continuous activation functions and their linear combinations. In particular, the theoretical basis for the classification capability of ELM is as follows.

Theorem 2 (classification capability theorem, see [17]). Given feature mapping $h(x)$, if $h(x)$ is dense in $C(R^d)$ or in $C(M)$, where $M$ is a compact set of $R^d$, then ELM with random hidden-layer mapping $h(x)$ can separate arbitrary disjoint regions of any shapes in $R^d$ or in $M$.

This theorem implies that ELM with bounded nonconstant piecewise continuous functions has universal approximation capability. While it does not require updating the weights of the hidden neurons during training, it is worth mentioning that ELM is more like the activation of the human brain, with randomly generated hidden weights that are not tuned.

2.2.2. Upper-Layer-Solution-Aware (USA) Algorithm. $X = [x_1, \ldots, x_i, \ldots, x_N]$ is given as the set of input vectors, and each vector is denoted by $x_i = [x_{1i}, \ldots, x_{ji}, \ldots, x_{Di}]$, in which $D$ is the dimension of the input vector and $N$ is the total number of training samples. Define $L$ as the number of hidden units and $C$ as the dimension of the output vector. $y_i = U^T h_i$ is the output of the SHLNN, where $h_i = \sigma(W^T x_i)$ is the hidden-layer output, $U$ is the $L \times C$ weight matrix at the upper layer, $W$ is the $D \times L$ weight matrix at the lower layer, and $\sigma(\cdot)$ is the kernel function.


Note that if $x_i$ and $h_i$ are augmented with ones, the bias terms are implicitly represented in the above formulations.

$T = [t_1, \ldots, t_i, \ldots, t_N]$ is given as the set of target vectors, each target is denoted by $t_i = [t_{1i}, \ldots, t_{ji}, \ldots, t_{Ci}]^T$, and the parameters $U$ and $W$ are used to minimize the square error

\[ E = \|Y - T\|^2 = \operatorname{Tr}\left[(Y - T)(Y - T)^T\right], \tag{5} \]

where $Y = [y_1, \ldots, y_i, \ldots, y_N]$. Note that the hidden-layer values $H = [h_1, \ldots, h_i, \ldots, h_N]$ are also determined uniquely if the lower-layer weights $W$ are fixed. After setting the gradient

\[ \frac{\partial E}{\partial U} = \frac{\partial \operatorname{Tr}\left[(U^T H - T)(U^T H - T)^T\right]}{\partial U} = 2H(U^T H - T)^T \tag{6} \]

to zero, the upper-layer weights $U$ can be determined in the closed-form solution [20]:

\[ U = (H H^T)^{-1} H T^T. \tag{7} \]

Note that (7) defines an implicit constraint between the set of weights $U$ and the set of weights $W$ through the hidden-layer output $H$ in the SHLNN. This leads to the structure that our new algorithms will use in optimizing the SHLNN.

Although solving (7) is relatively simple, regularization techniques need to be used in actual applications because the hidden-layer matrix $H$ is sometimes ill-conditioned (i.e., $H H^T$ is singular). The popular technique used in this study is based on ridge regression theory. More specifically, by adding the positive value $\mu I$ to the diagonal of $H H^T$, (7) is converted to

\[ U = (\mu I + H H^T)^{-1} H T^T, \tag{8} \]

where $I$ is the identity matrix and $\mu$ is a positive constant (regularization coefficient) used to control the degree of regularization. The final solution (8) actually minimizes $\|U^T H - T\|^2 + \mu \|U\|^2$, in which $\mu \|U\|^2$ is the $L_2$ regularization term. The positive constant $\mu$ represents the relative importance of the complexity-penalty term $\|U\|^2$ with respect to the empirical risk $\|U^T H - T\|^2$. When $\mu$ approaches zero, the model tends to an unconstrained optimization problem, and the solution is completely determined from the training samples. When $\mu$ is made infinitely large, this means that the training samples are unreliable, forcing $U$ to take sufficiently small values. In real applications, the regularization parameter $\mu$ is set to a value somewhere between these two extremes. A reasonable regularization parameter can efficiently improve both the learning and the prediction performance (generalization ability). Solution (8) shows better stability and better generalization performance than (7) and is used throughout the paper whenever the pseudoinverse is involved.

The principle of this algorithm is very simple. Once the lower-layer weights are determined, the upper-layer weights can be determined explicitly by using the closed-form solution (8). Based on this solution, at each epoch all we need to adjust along the gradient direction are the lower-layer weights. The gradient $\partial E / \partial W$ is obtained by considering the upper-layer weights $U$ under $W$'s effect, with the square error as the training objective. We obtain the gradient by treating $U$ as a function of $W$ and plugging (7) into criterion (5):

\[ \frac{\partial E}{\partial W} = \frac{\partial \operatorname{Tr}\left[(U^T H - T)(U^T H - T)^T\right]}{\partial W} = 2X\left[H^T \circ (1 - H)^T \circ \left[H^{\dagger}(H T^T)(T H^{\dagger}) - T^T(T H^{\dagger})\right]\right], \tag{9} \]

in which $H^{\dagger} = H^T (H H^T)^{-1}$ is the pseudoinverse of $H$ and $\circ$ is the element-wise product.

In the derivation of (9), we used the fact that $H H^T$ and $(H H^T)^{-1}$ are symmetric. We also used the fact that

\[ \frac{\partial \operatorname{Tr}\left[(H H^T)^{-1} H T^T T H^T\right]}{\partial H^T} = -2 H^T (H H^T)^{-1} H T^T T H^T (H H^T)^{-1} + 2 T^T T H^T (H H^T)^{-1}. \tag{10} \]

This algorithm updates $W$ by using the gradient defined in (9):

\[ W_{k+1} = W_k - \rho \frac{\partial E}{\partial W}, \tag{11} \]

where $\rho$ is the learning rate. The learning rate here is the updating step widely used in solving optimization problems: it determines how far the weights are moved in the direction of the given gradient $\partial E / \partial W$ to obtain the new weights that reduce the objective function. One simple way is to set a positive constant value for the entire training process; in this paper, we select a suitable learning rate by means of many different simulations. This algorithm uses the closed-form solution (8) to calculate $U$ after updating $W$.

The algorithms proposed in this paper achieve significantly better classification accuracy than ELM when the same number of hidden units is used. To achieve the same classification accuracy, the algorithm requires only 1/16 of the model size, and therefore less test time is needed.
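One USA epoch, combining the closed-form upper-layer solve of (8) with the lower-layer gradient step of (9) and (11), can be sketched as follows. This is an illustrative sketch, not the authors' implementation: a sigmoid hidden layer is assumed, the constant factor 2 in (9) is absorbed into the learning rate, and rho and mu are arbitrary example values.

```python
import numpy as np

def usa_epoch(X, T, W, rho=0.01, mu=1e-3):
    """One USA epoch: X is a D x N input matrix, T is a C x N target
    matrix, W is the D x L lower-layer weight matrix (values illustrative)."""
    H = 1.0 / (1.0 + np.exp(-(W.T @ X)))       # hidden-layer outputs, L x N
    L = H.shape[0]
    # Eq. (8): closed-form upper-layer weights U (ridge-stabilized)
    U = np.linalg.solve(mu * np.eye(L) + H @ H.T, H @ T.T)
    # Eq. (9): gradient w.r.t. W with U treated as a function of W
    # (the constant factor 2 is absorbed into the learning rate rho)
    Hp = np.linalg.pinv(H)                     # H-dagger = H^T (H H^T)^{-1}
    G = X @ (H.T * (1.0 - H).T * (Hp @ (H @ T.T) @ (T @ Hp) - T.T @ (T @ Hp)))
    return W - rho * G, U                      # Eq. (11): update the lower layer
```

Looping this epoch alternates a one-shot linear solve for the upper layer with a single gradient step on the lower layer, which is what distinguishes USA from plain back-propagation.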

3. Input Data Selection

The four target research wells were drilled in the Bohai Bay basin, which is the largest offshore oil and gas production base in China. The oilfield has experienced many production stages since 1963, and currently many projects are carried out by China National Offshore Oil Corporation (CNOOC) and ConocoPhillips China (COPC). To effectively increase production and decrease drilling cost, drilling speed control is necessary.

Due to drilling requirements and the similarity of wells located close to one another, collecting past data and utilizing the data in a useful manner have an important impact on drilling cost reduction. This means that optimum parameters are always in effect. As a matter of fact, there are various factors that can affect ROP. Previous studies show that the ROP is largely dependent on some critical parameters, which can be classified into three types: rig/bit related parameters, mud related parameters, and formation parameters. The rig/bit and mud related parameters can be manipulated, but the formation parameters have to be handled differently. Table 2 gives a brief classification of some important drilling parameters that affect ROP. To predict and optimize the ROP, both controllable and uncontrollable parameters were collected from daily drilling reports, lab investigations, well logs, and geological information.

Because it is impossible to treat all of the relevant drilling parameters in ROP prediction, it is necessary to choose the essential data based on literature surveys and statistical analysis. It was concluded that the rock properties, the drilling machine parameters, and the mud properties relate to the ROP. In fact, a total of 18 drilling parameters were gathered in this study: depth, torque, rotary speed (RPM), weight on bit, flow rates, active PVT, pump pressure, hook load, differential pressure, mud weight, mud viscosity, formation drillability, formation abrasiveness, UCS, porosity, bit size, and bit type.

(1) The bit type/size can affect ROP in a given formation. Roller cone bits generally have three cones that rotate around their axes, with teeth milled out of the matrix or inserted. The teeth combine crushing and shearing to fracture the rock. Drag bits usually consist of a fixed cutter mechanism that can be cutting blades, diamond stones, or cutters. Today, the polycrystalline diamond compact (PDC) bit is the most commonly used drag bit, with PDC, diamond-impregnated, or diamond hybrid cutters mounted on the bit blades/body. Drag bits disintegrate rock primarily through shearing. The difference in the design of roller cone and drag bits and their different rock disintegration mechanisms require different ROP models. Thus, the effects of the bit type are included in this model.

(2) Operational parameters such as pump pressure and differential pressure are the most important operational hydraulic parameters affecting ROP. Pump pressure is sometimes increased to achieve a higher penetration rate for a certain depth of advance while keeping rotation constant. In addition, the mechanical factors of weight on the bit and rotary speed have linear relationships with ROP. An increase in weight on the bit pushes the bit teeth or cutters further into the formation and disintegrates more rock, resulting in faster ROP. Bit rotation can be increased to obtain a higher penetration rate while keeping the bit load constant.

(3) Formation mechanical characteristics are uncontrollable variables for controlling ROP values. In general, the decrease in penetration rate is greatly attributed to increasing depth, because with increasing depth rock strength increases and porosity decreases. Moreover, the larger indirect drilling resistance that results from larger UCS and hardness will also limit the depth of cut for a given set of bit design and applied operating parameters. There are two main reasons for selecting rock strength as the representative of rock properties: one is that the rock strength is calculated as a function of depth, density, and porosity; the other is that this parameter can be computed from logging data, or from triaxial compressive experiments and scratch tests in a lab if core samples are available.

(4) The aim of drilling mud is to remove the loose rock chips away from the bit face while cooling the bit. Mud weight, mud rheology, and solid content have therefore been confirmed as important factors in ROP by several studies; ROP generally decreases with increasing mud viscosity, solid content, and mud weight.

In addition to the above parameters, some comprehensive parameters such as formation drillability and abrasiveness also need to be considered, because these parameters can provide important information about ROP prediction that cannot be described by conventional drilling data. More importantly, the model inputs should be easily acquired in real time, recorded from the respective measurement gauges on the rig. Using additional parameters as inputs can result in a large network size and consequently slow down running speed and efficiency. Thus, bit size, bit type, bit wear, UCS, formation drillability, formation abrasiveness, rotary speed, WOB, pump pressure, mud viscosity, and mud density were selected as the 11 inputs for neural network simulation. The data set used in this paper consists of more than 5000 measurements taken from this area at different wells and with different drilling rigs. Figure 2 illustrates three types of PDC bits used in the drilling operations in this area.

It should be noted that data can very often be nonnumeric. Because neural networks cannot be trained with nonnumeric data, it is necessary to translate nonnumeric

Mathematical Problems in Engineering 7

Table 2. Classification of ROP-related drilling parameters.

Rig- and bit-related parameters | Formation parameters                   | Drilling fluid properties
Weight on bit (WOB)             | Local stresses                         | Mud weight
Torque                          | Hardness                               | Viscosity
Rotary speed (RPM)              | Mineralogy                             | Filtrate loss
Flow rates                      | Porosity and permeability              | Solid content
Pump stroke speed (SPM)         | Formation abrasiveness                 | Gel strength
Pump pressure                   | Drillability                           | Mud pH
Hook load                       | Depth                                  | Yield point
Bit wear                        | Temperature                            |
Type of the bit                 | Unconfined compressive strength (UCS)  |

Figure 2. Three types of PDC drilling bits.

data into numeric data. In this study, formation drillability, formation abrasiveness, bit size, and bit type are nonnumeric; numbering classes are used to perform the nonnumeric translation. The formation drillability ranges between 30 and 75, where the highest number represents the highest drillability and lower values correspond to hard formations such as shales. The formation abrasiveness ranges between 1 and 8, where 8 signifies the most abrasive formation. In addition, the bit type is assigned a number between 1 and 12, where 12 corresponds to the bit type with the highest average ROP. The bit wear values are graded between 0 and 9, where 9 indicates that a tooth is completely worn out. Table 3 shows the recorded parameters and their ranges.
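The numbering-class translation described above can be implemented with simple lookup tables. A minimal sketch in Python; the category labels and code assignments below are hypothetical examples for illustration, not the authors' actual mapping:

```python
# Hypothetical numbering-class encoding, illustrating the scheme above:
# higher codes mean higher abrasiveness, or a higher average ROP for a bit type.
BIT_TYPE_CODE = {"roller-cone": 1, "PDC-M1": 7, "PDC-M2": 12}  # 12 = highest avg ROP
ABRASIVENESS_CODE = {"soft": 1, "medium": 4, "very-abrasive": 8}

def encode_record(bit_type, abrasiveness, bit_wear):
    """Translate one nonnumeric drilling record into numeric network inputs."""
    if not 0 <= bit_wear <= 9:  # 9 = tooth completely worn out
        raise ValueError("bit wear must be graded 0-9")
    return [BIT_TYPE_CODE[bit_type], ABRASIVENESS_CODE[abrasiveness], bit_wear]

print(encode_record("PDC-M2", "medium", 3))  # -> [12, 4, 3]
```

Any categorical input handled this way must use the same code table at training and prediction time.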

4. Experimental Design

4.1. Model Architecture. Determining the optimal size of the neural network model is complex. An overly complex network tends to memorize the data without learning to generalize to new situations: it concentrates too much on the data presented for learning and tends toward modeling the noise. The total number of ROP records in the target wells sums to 5500. The training subset constitutes 75% of the total data, and the testing subset includes the remaining 25%. As mentioned above, the inputs of the neural networks include 10

Table 3. Summary of the recorded parameters and their ranges.

Parameter               | Range
Bit size (inch)         | 5.813-8.5
Bit type                | 1-12
Bit wear                | 0-9
UCS (psi)               | 2412-21487
Formation drillability  | 30-75
Formation abrasiveness  | 1-8
Rotary speed (rpm)      | 84-126
WOB (10^3 lbs)          | 24-69
Drilling mud type       | 1-3
Mud viscosity (cp)      | 1-70
Pump pressure (psi)     | 1651-3765
ROP (ft/h)              | 26.6-120.5

parameters, while ROP is the only element in the output layer. Figure 3 shows the developed neural network architecture.

4.2. Model Performance Indicators. The performances of the neural network models are assessed by means of the correlation


Figure 3. Schematic of the architecture of the ELM network model. The input layer has 10 nodes (bit size, bit type, UCS, formation drillability, pump pressure, formation abrasiveness, rotary speed, WOB, mud type, and mud viscosity), and ROP is the only node at the output layer.

coefficient (R²), mean absolute error (MAE), root mean square error (RMSE), and variance accounted for (VAF) performance indicators.

R² represents the proportion of the overall variance explained by the model, which can be calculated as

\[ R^2 = 1 - \frac{\sum_{i=1}^{n} \left(f(x_i) - y_i\right)^2}{\sum_{i=1}^{n} f(x_i)^2 - \sum_{i=1}^{n} y_i^2 / n}. \tag{12} \]

These performance indicators give a sense of how good the performance of a predictive model is relative to the actual values.

MSE can be described as

\[ \mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} \left(f(x_i) - y_i\right)^2. \tag{13} \]

The RMSE is conventionally used as an error function for quality monitoring of the model; model performance improves as RMSE decreases. RMSE can be calculated by the following equation:

\[ \mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left(f(x_i) - y_i\right)^2}. \tag{14} \]

VAF is often used to evaluate the correctness of a model by comparing the measured values with the estimated output of the model. VAF is computed by the following equation:

\[ \mathrm{VAF} = \left(1 - \frac{\operatorname{var}\left[f(x_i) - y_i\right]}{\operatorname{var}\left[f(x_i)\right]}\right) \times 100, \tag{15} \]

where, for (12)-(15), y_i is the measured data, f(x_i) denotes the predicted data, x_1, ..., x_n are the input parameters, and n is the total number of data points.
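The four indicators can be computed directly from paired predicted and measured vectors. A minimal NumPy sketch; note that the printed form of (12) is hard to recover exactly from the typeset source, so the conventional coefficient of determination is used for R² here:

```python
import numpy as np

def rmse(pred, meas):
    # Eq. (14): root mean square error
    return float(np.sqrt(np.mean((pred - meas) ** 2)))

def mae(pred, meas):
    # Mean absolute error (reported alongside RMSE in Table 4)
    return float(np.mean(np.abs(pred - meas)))

def vaf(pred, meas):
    # Eq. (15): variance accounted for, in percent
    return float((1.0 - np.var(pred - meas) / np.var(pred)) * 100.0)

def r2(pred, meas):
    # Conventional coefficient of determination (stand-in for Eq. (12))
    ss_res = np.sum((meas - pred) ** 2)
    ss_tot = np.sum((meas - np.mean(meas)) ** 2)
    return float(1.0 - ss_res / ss_tot)
```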

4.3. Model Parameters Selection. For the ANN, ELM, and USA models, the kernel function and the number of hidden nodes are critical parameters for neural network accuracy.

The accuracy with different numbers of hidden nodes was compared in terms of the RMSE performance indicator. Once the kernel function is determined, the number of hidden nodes is increased incrementally until 100 is reached. Figure 4(a) illustrates the accuracy comparison with different node counts for the ANN, ELM, and USA models. For the ELM model, the MSE between prediction results and measured values decreases as the node number of the hidden layer gradually increases, until more than 65 nodes are reached. For the USA model, the MSE between prediction results and measured values also decreases as the node number of the hidden layer gradually increases, except for individual volatile points. For the ANN model, the MSE between prediction results and measured values fluctuates as the node number of the hidden layer gradually increases, which indicates that the choice of hidden nodes can greatly affect the final accuracy.

Four types of kernels have been tested on the training data set for the ELM model: the sigmoid function, radial basis function, hardlim function, and triangular basis function. Figure 4(b) shows the accuracy comparison with different kernel functions for the ELM model. Among the four kernels, the sigmoid-based model reaches the error threshold first, when the node number is set to 40, while an overfitting problem appears as the node number exceeds 60. The hardlim-based, triangular-based, and radial-basis-based models display a similar trend; however, it seems that the node number must exceed 60 before the minimum errors approach the threshold for the hardlim-based model. For the triangular-based model, the RMSE reaches its lowest point of 3.11 when the node number is set to 85, while for the radial-basis-based model the RMSE reaches its lowest point of 3.12 when the node number is set to 95. Figure 4(c) shows the accuracy comparison with different kernel functions for the USA model. Observe that the radial-basis-based model reaches the threshold first, when the node number is set to 65. For the hardlim-based model, it seems that the node number must exceed 60 before the minimum

Figure 4. (a) RMSE versus the number of hidden nodes for the ANN, ELM, and USA models; (b) comparison of RMSE with different kernel functions (sigmoid, hardlim, radial basis, triangular basis) for the ELM network model; (c) comparison of RMSE with different kernel functions for the USA network model.

errors approach the threshold, which is similar to the ELM model. The triangular-based and sigmoid-based models display a similar trend: the RMSE reaches its lowest point of 3.11 when the node number is set to 80 for the sigmoid-based model, while the RMSE reaches its lowest point of 3.48 when the node number is set to 94 for the triangular-based model.
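The selection procedure described above is, in effect, a grid search over hidden-node counts: retrain, score RMSE on held-out data, and keep the size with the lowest error. A minimal NumPy sketch of that loop with a sigmoid ELM; the synthetic data set and search grid are illustrative, not the paper's:

```python
import numpy as np

def elm_rmse(Xtr, ytr, Xte, yte, n_hidden, rng):
    # One ELM fit: random sigmoid hidden layer, least-squares output weights.
    W = rng.normal(size=(Xtr.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    act = lambda X: 1.0 / (1.0 + np.exp(-(X @ W + b)))
    beta = np.linalg.pinv(act(Xtr)) @ ytr
    pred = act(Xte) @ beta
    return float(np.sqrt(np.mean((pred - yte) ** 2)))

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(300, 3))  # illustrative synthetic data
y = np.sin(X).sum(axis=1)
Xtr, ytr, Xte, yte = X[:225], y[:225], X[225:], y[225:]  # 75/25 split

errors = {L: elm_rmse(Xtr, ytr, Xte, yte, L, rng) for L in (5, 20, 50, 80)}
best = min(errors, key=errors.get)  # hidden-node count with the lowest RMSE
```

In practice the same loop is repeated once per kernel function to produce curves like those in Figure 4.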

For comparison, the classical ANN model was also used in this paper, with the sigmoid kernel selected. The applied ANN architecture is a feedforward neural network, a network structure in which the information propagates in one direction from input to output. A back-propagation algorithm with the Levenberg-Marquardt training function has been used for training; this algorithm can approximate any nonlinear continuous function to an arbitrary accuracy. Based on the experimental tests, the ANN network model obtains its best prediction accuracy when the node number of the hidden layer is set to 36.

5. Results and Discussion

When the parameters for the neural network models are finally settled, the next step is to validate the models using a testing


Table 4. Calculated performance indicators for the ANN, ELM, and USA models.

Data set  | Performance indicator | ANN   | ELM   | USA
Training  | RMSE                  | 1.51  | 0.95  | 0.78
Training  | MAE                   | 7.5   | 4.2   | 2.1
Training  | VAF                   | 96.88 | 100   | 100
Training  | R²                    | 0.91  | 0.93  | 0.98
Testing   | RMSE                  | 3.56  | 3.11  | 2.76
Testing   | MAE                   | 12.7  | 9.7   | 7.65
Testing   | VAF                   | 91.88 | 92.82 | 94.52
Testing   | R²                    | 0.90  | 0.92  | 0.94

data set. In this section, the prediction performance of the ANN, ELM, and USA models is compared, and the training time is recorded. The model accuracy can be evaluated by comparing the calculated RMSE, VAF, MAE, and R² performance indicators for the training and testing data sets presented in Table 4. Due to the random subdivision process, the performances of the models on the training and validation data sets are slightly different. As seen from Table 4, the prediction performance in terms of the predefined indices is better for the testing set than for the training set. Overall, R² values for the developed models were more than 0.95 in the training phase, while in the case of the conventional ANN in the testing phase, R² was found to be 0.9. The high performance of the models on the testing set can be considered an indication of their good generalization capabilities. An R² value greater than 0.9 indicates a very satisfactory model performance, while an R² value between 0.8 and 0.9 indicates an acceptable model. Therefore, all of the developed neural networks are competent for ROP prediction. R² of 0.99 for both the ELM and USA models in the training phase indicates higher accuracy compared to the ANN model (R² of 0.88). Moreover, it can also be observed that the highest VAF of 94.52, the lowest RMSE of 2.76, and the lowest MAE of 7.65 belong to the USA model, indicating that the USA-based model gives better prediction performance than the other models. In addition, the ELM model can also produce acceptable results, yielding slightly smaller residual errors than the ANN model; in other words, the deviation of the penetration rates predicted by ELM from the observed values is less than that of the ANN model's predictions. Figure 5 shows the regression analysis of the predicted and measured ROP by the different models in both the training and testing phases. From the running speed with increasing hidden node numbers in Figure 6, obvious differences are observed, leading to the conclusion that the time cost of the ELM model is less than that of the other two models. To visualize the quality of the prediction, the predicted penetration rates and residual errors of the different models are compared with the measured values for the overall data set in Figures 7 and 8, respectively. The low deviation obtained by the three models also proves that ANN, ELM, and USA are competent for ROP prediction. Thus, the ELM and USA methods can both achieve high accuracy and maintain high running speeds. This study shows that ELM technology is a promising tool for ROP prediction, and this work can be incorporated into a software system that can be used in drilling optimization guidance.

6. Conclusions

(1) This study investigates the feasibility of ELM and its variation, the USA model, for predicting the ROP of offshore drilling operations using recorded data. The ELM and USA models can successfully predict ROP in real time, and this prediction can be used as a reference range for drilling engineers when identifying drilling problems and making decisions.

(2) Choosing the relevant parameters is typically difficult, and previous literature informed the input selection. The input rock material properties are the uniaxial compressive strengths of the different rock types and the formation drillability and abrasiveness, which can be calculated from lab cores and well logging data. Moreover, pump pressure and differential pressure are selected as the important hydraulic parameters that affect ROP, while RPM and WOB are treated as the equipment operational parameters. In addition to these parameters, drilling mud properties such as mud viscosity, as well as bit-related parameters, are also involved in developing the neural network models. However, for systems with data values beyond the range of this study, or for a new bit, it is necessary to develop new neural networks. Conversely, the wider range of inputs can enable the neural network models developed here to have wider applications in select cases.

(3) The results of the USA model were compared to those of the classical ANN and ELM models. The performance of the models was checked by using the R², MAE, RMSE, and VAF performance indicators. According to the performance indicators, all of the neural networks are competent for ROP prediction, but the prediction performance of the USA model was found to be better than that of the other two models with respect to accuracy, while the ELM model required the least running time. Compared to conventional ANN models, the results of this work show the potential of fast and flexible neural network approaches to model ROP with high accuracy while maintaining running speed. Therefore, drilling engineers can make better


Figure 5. Regression analysis of the predicted and measured ROP (ft/h) against the 1:1 line: (a) ANN testing and training (R² = 0.9077 and 0.916); (b) ELM testing and training (R² = 0.9233 and 0.9319); (c) USA testing and training (R² = 0.9422 and 0.9829).

Figure 6. Time cost with the change of the number of hidden nodes for the ANN, ELM, and USA models.

Figure 7. The predicted and measured ROP along the depth (8600-8800 ft) for the developed models.

Figure 8. Residual errors of the ROP predicted by the developed models.

choices according to accuracy and computational demand in practical use.

Competing Interests

The authors declare that there are no competing interests regarding the publication of this paper.

Acknowledgments

This work was financially supported by the National Key Basic Research Program of China (2015CB251206), the Fundamental Research Funds for the Central Universities (nos. 15CX05053A, 15CX08011A, and 15CX02009A), the Qingdao Science and Technology Project (no. 15-9-1-55-jch), the National Natural Science Foundation of China (no. 61305075), the Natural Science Foundation of Shandong Province (no. ZR2013FQ004), and the Specialized Research Fund for the Doctoral Program of Higher Education of China (no. 20130133120014).

References

[1] V. P. Perrin, M. G. Wilmot, and W. L. Alexander, "Drilling index - a new approach to bit performance evaluation," in Proceedings of the SPE/IADC Drilling Conference, Paper SPE 37595, pp. 199-205, Amsterdam, The Netherlands, March 1997.

[2] M. Bataee and S. Mohseni, "Application of artificial intelligent systems in ROP optimization: a case study in Shadegan oil field," in Proceedings of the SPE Middle East Unconventional Gas Conference and Exhibition (UGAS '11), pp. 13-22, February 2011.

[3] M. H. Bahari, A. Bahari, F. N. Moharrami, and M. B. N. Sistani, "Determining Bourgoyne and Young model coefficients using genetic algorithm to predict drilling rate," Journal of Applied Sciences, vol. 8, no. 17, pp. 3050-3054, 2008.

[4] D. F. Howarth, W. R. Adamson, and J. R. Berndt, "Correlation of model tunnel boring and drilling machine performances with rock properties," International Journal of Rock Mechanics and Mining Sciences, vol. 23, no. 2, pp. 171-175, 1986.

[5] S. Kahraman, "Correlation of TBM and drilling machine performances with rock brittleness," Engineering Geology, vol. 65, no. 4, pp. 269-283, 2002.

[6] S. Akin and C. Karpuz, "Estimating drilling parameters for diamond bit drilling operations using artificial neural networks," International Journal of Geomechanics, vol. 8, no. 1, pp. 68-73, 2008.

[7] V. Uboldi, L. Civolani, and F. Zausa, "Rock strength measurements on cuttings as input data for optimizing drill bit selection," in Proceedings of the SPE Annual Technical Conference and Exhibition, Paper SPE 56441, pp. 35-43, Houston, Tex, USA, October 1999.

[8] M. J. Fear, "How to improve rate of penetration in field operations," in Proceedings of the SPE/IADC Drilling Conference, Paper SPE 55050, New Orleans, La, USA, 1998.

[9] R. Rommetveit, K. S. Bjørkevoll, G. W. Halsey et al., "Drilltronics: an integrated system for real-time optimization of the drilling process," in Proceedings of the IADC/SPE Drilling Conference, vol. 87124, pp. 275-282, SPE, Dallas, Tex, USA, March 2004.

[10] A. H. Paiaman, M. K. Ghassem Al-Askari, B. Salmani, B. D. Al-Anazi, and M. Masihi, "Effect of drilling fluid properties on rate of penetration," NAFTA, vol. 60, no. 3, pp. 129-134, 2009.

[11] A. Bahari and A. B. Seyed, "Trust region approach to find constants of Bourgoyne and Young penetration rate model in Khangiran Iranian gas field," in Proceedings of the SPE Latin American and Caribbean Petroleum Engineering Conference, Paper SPE 107520, Buenos Aires, Argentina, 2007.

[12] H. I. Bilgesu, L. T. Tetrick, U. Altmis, Sh. Mohaghegh, and S. Ameri, "A new approach for the prediction of penetration (ROP) values," in Proceedings of the SPE Eastern Regional Meeting, Paper SPE 39231, Lexington, Kentucky, 1997.

[13] R. Jahanbakhshi, R. Keshavarzi, and A. Jafarnezhad, "Real-time prediction of rate of penetration during drilling operation in oil and gas wells," in Proceedings of the 46th US Rock Mechanics/Geomechanics Symposium, pp. 2390-2396, Chicago, Ill, USA, June 2012.

[14] S. Yilmaz, C. Demircioglu, and S. Akin, "Application of artificial neural networks to optimum bit selection," Computers and Geosciences, vol. 28, no. 2, pp. 261-269, 2002.

[15] H. Rahimzadeh, M. Mostofi, A. Hashemi, and K. Salahshoor, "Comparison of the penetration rate models using field data for one of the gas fields in Persian Gulf area," in Proceedings of the International Oil and Gas Conference and Exhibition, Paper SPE 131253, Beijing, China, 2010.

[16] G.-B. Huang, D. H. Wang, and Y. Lan, "Extreme learning machines: a survey," International Journal of Machine Learning and Cybernetics, vol. 2, no. 2, pp. 107-122, 2011.

[17] G.-B. Huang, H. Zhou, X. Ding, and R. Zhang, "Extreme learning machine for regression and multiclass classification," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 2, pp. 513-529, 2012.

[18] A. Mozaffari and N. L. Azad, "Optimally pruned extreme learning machine with ensemble of regularization techniques and negative correlation penalty applied to automotive engine cold-start hydrocarbon emission identification," Neurocomputing, vol. 131, pp. 143-156, 2014.

[19] R. Moreno, F. Corona, A. Lendasse, M. Grana, and L. S. Galvao, "Extreme learning machines for soybean classification in remote sensing hyperspectral images," Neurocomputing, vol. 128, pp. 207-216, 2014.

[20] D. Yu and D. Li, "Efficient and effective algorithms for training single-hidden-layer neural networks," Pattern Recognition Letters, vol. 33, no. 5, pp. 554-558, 2012.

[21] C. Gokceoglu and K. Zorlu, "A fuzzy model to predict the uniaxial compressive strength and the modulus of elasticity of a problematic rock," Engineering Applications of Artificial Intelligence, vol. 17, no. 1, pp. 61-72, 2004.

[22] J. Zhang, J. Li, Y. Hu, and J. Zhou, "The identification method of igneous rock lithology based on data mining technology," Applied Mechanics and Materials, vol. 466-467, pp. 65-69, 2012.

[23] J. Cao, J. Yang, Y. Wang, D. Wang, and Y. Shi, "Extreme learning machine for reservoir parameter estimation in heterogeneous sandstone reservoir," Mathematical Problems in Engineering, vol. 2015, Article ID 287816, 10 pages, 2015.

[24] E. Khamehchi, I. R. Kivi, and M. Akbari, "A novel approach to sand production prediction using artificial intelligence," Journal of Petroleum Science and Engineering, vol. 123, pp. 147-154, 2014.

[25] K. Amar and A. Ibrahim, "Rate of penetration prediction and optimization using advances in artificial neural networks: a comparative study," in Proceedings of the 4th International Joint Conference on Computational Intelligence (IJCCI '12), pp. 647-652, Barcelona, Spain, October 2012.

[26] D. Moran, H. Ibrahim, A. Purwanto, and J. Osmond, "Sophisticated ROP prediction technology based on neural network delivers accurate results," in Proceedings of the IADC/SPE Asia Pacific Drilling Technology Conference and Exhibition, SPE 132010, pp. 100-108, Ho Chi Minh City, Vietnam, November 2010.

[27] M. Monazami, A. Hashemi, and M. Shahbazian, "Drilling rate of penetration prediction using artificial neural network: a case study of one of Iranian southern oil fields," Journal of Oil and Gas Business, no. 6, pp. 21-31, 2012.

[28] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: theory and applications," Neurocomputing, vol. 70, no. 1-3, pp. 489-501, 2006.

[29] X. Liu, Y. Chang, H. Lu et al., "Optimizing fracture stimulation in low-permeability oil reservoirs in the Ordos Basin," in Proceedings of the SPE Asia Pacific Oil and Gas Conference and Exhibition, Paper SPE 158562, Perth, Australia, 2012.


model with one output node as an example and give the output of the ELM as follows:

\[ f_L(x) = \sum_{i=1}^{L} \beta_i h_i(x) = h(x)\beta, \tag{1} \]

in which β = [β_1, ..., β_L]^T denotes the output weights connecting the hidden layer and the output layer, and h(x) = [h_1(x), ..., h_L(x)] is the hidden-layer output with respect to input x; h(x) is a feature mapping. In fact, if the input is in d-dimensional space, h(x) maps it to the L-dimensional hidden-layer feature space. For binary classification applications using ELM, the output function is

\[ f_L(x) = \operatorname{sign}\left[h(x)\beta\right]. \tag{2} \]

ELM often leads to a smaller training error with a smaller norm of weights compared with traditional learning algorithms. According to Bartlett's theory, smaller weights contribute greatly to the better generalization performance of neural networks with similar training errors. What makes ELM particularly noteworthy is that the values of the input weights need not be adjusted during training: they are randomly chosen at the beginning of the training procedure and stay fixed, while the weights of the output layer are calculated by searching for the least-squares solution of the following objective function.

ELM minimizes both the training error and the norm of the output weights:

\[ \text{Minimize: } \left\| H\beta - T \right\|^2 \text{ and } \left\| \beta \right\|, \tag{3} \]

where H is the output matrix of the hidden layer. The main purpose of ELM is to minimize the training error as well as the norm of the connecting output-layer weights. According to the theory of linear systems, the weights between the hidden layer and the output layer are calculated using the following equation [17]:

\[ \beta = H^{\dagger} T, \tag{4} \]

where H^† is the Moore-Penrose pseudoinverse of matrix H and T = [t_1, t_2, ..., t_n]^T.

We can summarize the ELM training algorithm as follows [17]:

(1) Set the hidden-node parameters randomly, for example, the input weights a_i and biases b_i for the given hidden nodes i = 1, 2, ..., L.

(2) Calculate the hidden-layer output matrix H from the input, the weights, and the biases.

(3) Obtain the output-weight matrix using (4).
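The three steps above fit in a few lines of NumPy (a minimal sketch assuming a sigmoid activation; `np.linalg.pinv` supplies the Moore-Penrose pseudoinverse of (4)):

```python
import numpy as np

def elm_train(X, T, n_hidden, seed=0):
    """ELM training, steps (1)-(3): X is N x D inputs, T is the N-vector of targets."""
    rng = np.random.default_rng(seed)
    a = rng.normal(size=(X.shape[1], n_hidden))  # (1) random input weights a_i
    b = rng.normal(size=n_hidden)                #     and biases b_i, kept fixed
    H = 1.0 / (1.0 + np.exp(-(X @ a + b)))       # (2) hidden-layer output matrix H
    beta = np.linalg.pinv(H) @ T                 # (3) output weights, Eq. (4)
    return a, b, beta

def elm_predict(X, a, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ a + b)))
    return H @ beta
```

Because only the linear output layer is solved, training reduces to a single least-squares problem, which is the source of ELM's speed advantage over back-propagation.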

The orthogonalization method, the iterative method, singular value decomposition (SVD), and so forth can be used to evaluate the Moore-Penrose pseudoinverse of a matrix [24]. By the orthogonal projection method, we can obtain H^† = (H^T H)^{-1} H^T when H^T H is nonsingular, and H^† = H^T (H H^T)^{-1} when H H^T is nonsingular. In light of ridge regression theory, we add a positive value to the diagonal of H^T H or H H^T to make the resultant solution more stable and to achieve better generalization performance.

According to ELM theory, many feature mapping functions h(x) may be designed, which gives ELM the ability to approximate any continuous target function. The actual activation functions of human brain systems seem to be more like nonlinear piecewise continuous stimulations, though the actual functions remain unknown. In fact, the ELM model is based on two major theorems: universal approximation and classification ability.

Theorem 1 (universal approximation theorem, see [16, 29]). Given any bounded nonconstant piecewise continuous function as the activation function in hidden neurons, if, by tuning the parameters of the hidden-neuron activation function, SLFNs can approximate any target continuous function, then for any continuous target function f(x) and any randomly generated function sequence {h_i(x)}_{i=1}^{L},

\[ \lim_{L \to \infty} \Big\| \sum_{i=1}^{L} \beta_i h_i(x) - f(x) \Big\| = 0 \]

holds with probability one, given the appropriate output weights β.

Owing to the nonlinear activation function and piecewise linear combination, ELM can approximate any continuous mapping and divide arbitrary disjoint regions of any shape with enough randomly generated hidden neurons, based on nonlinear piecewise continuous activation functions and their linear combinations. In particular, the theoretical basis for the classification capability of ELM is as follows.

Theorem 2 (classification capability theorem, see [17]). Given a feature mapping h(x), if h(x) is dense in C(R^d), or in C(M) where M is a compact set of R^d, then ELM with random hidden-layer mapping h(x) can separate arbitrary disjoint regions of any shape in R^d or in M.

This theorem implies that ELM with bounded nonconstant piecewise continuous activation functions has universal approximation capability. While it does not require updating the weights of the hidden neurons during the training process, it is worth mentioning that ELM is more like the activation of the human brain, with the hidden weights randomly generated and left untuned.

2.2.2. Upper-Layer-Solution-Aware (USA) Algorithm. X = [x_1, ..., x_i, ..., x_N] is given as the set of input vectors, and each vector is denoted by x_i = [x_{1i}, ..., x_{ji}, ..., x_{Di}], in which D is the dimension of the input vector and N is the total number of training samples. Define L as the number of hidden units and C as the dimension of the output vector. y_i = U^T h_i is the output of the SHLNN, in which h_i = σ(W^T x_i) is the hidden-layer output, U is the L × C weight matrix at the upper layer, W is the D × L weight matrix at the lower layer, and σ(·) is the kernel function.

Mathematical Problems in Engineering 5

Note that if x_i and h_i are augmented with ones, the bias terms are implicitly represented in the above formulations.

T = [t_1, ..., t_i, ..., t_N] is given as the set of target vectors, each denoted by t_i = [t_{1i}, ..., t_{ji}, ..., t_{Ci}]^T, and the parameters U and W are used to minimize the square error

\[ E = \left\| Y - T \right\|^2 = \operatorname{Tr}\left[ (Y - T)(Y - T)^T \right], \tag{5} \]

where Y = [y_1, ..., y_i, ..., y_N]. Note that the hidden-layer values H = [h_1, ..., h_i, ..., h_N] are also determined uniquely if the lower-layer weights W are fixed. After setting the gradient

\[ \frac{\partial E}{\partial U} = \frac{\partial \operatorname{Tr}\left[ (U^T H - T)(U^T H - T)^T \right]}{\partial U} = 2 H (U^T H - T)^T \tag{6} \]

to zero, the upper-layer weights U can be determined in the closed-form solution [20]:

\[ U = (H H^T)^{-1} H T^T, \tag{7} \]

Note that (7) defines an implicit constraint between the set of weights U and the set of weights W through the hidden-layer output H in the SHLNN. This leads to a structure that our new algorithms will use in optimizing the SHLNN.

Although solving (7) is relatively simple, regularization techniques need to be used in actual applications because the hidden-layer matrix H is sometimes ill-conditioned (i.e., HH^T is singular). The technique used in this study is based on ridge regression theory. More specifically, by adding the positive term μI to the diagonal of HH^T, (7) is converted to

U = (μI + HH^T)^{-1}HT^T, (8)

where I is the identity matrix and μ is a positive constant (the regularization coefficient) that controls the degree of regularization. The final solution (8) actually minimizes ||U^T H − T||² + μ||U||², in which μ||U||² is the L2 regularization term. The coefficient μ represents the relative importance of the complexity-penalty term ||U||² with respect to the empirical risk ||U^T H − T||². When μ approaches zero, the model tends to an unconstrained optimization problem, and the solution is then completely determined by the training samples. When μ is made infinitely large, the training samples are treated as unreliable, forcing the weights U to be sufficiently small. In real applications, the regularization parameter μ is set somewhere between these two extremes; a reasonable choice of regularization parameter can markedly affect both the learning and the prediction performance (generalization ability). Solution (8) shows better stability and better generalization performance than (7), so whenever the pseudoinverse is involved, solution (8) is used throughout the paper.
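In matrix terms, (8) is simply ridge regression of the targets on the hidden-layer outputs. A minimal numerical sketch (our own illustration with random data; the function name and array shapes are assumptions, not the paper's code):

```python
import numpy as np

def upper_layer_weights(H, T, mu=1e-3):
    """Closed-form upper-layer solution U = (mu*I + H H^T)^-1 H T^T.

    H  : (L, N) hidden-layer output matrix (L hidden units, N samples)
    T  : (C, N) target matrix (C output dimensions)
    mu : regularization coefficient; mu -> 0 recovers the plain pseudoinverse.
    """
    L = H.shape[0]
    # Solve the regularized normal equations rather than forming an explicit inverse
    return np.linalg.solve(mu * np.eye(L) + H @ H.T, H @ T.T)  # shape (L, C)

rng = np.random.default_rng(0)
H = rng.standard_normal((8, 50))   # 8 hidden units, 50 samples
T = rng.standard_normal((1, 50))   # 1 output, e.g. ROP
U = upper_layer_weights(H, T, mu=0.1)
Y = U.T @ H                        # network output, shape (1, 50)
```

Using `np.linalg.solve` on the regularized normal equations avoids explicitly inverting μI + HH^T, which is cheaper and numerically safer when HH^T is nearly singular.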

The principle of this algorithm is simple. Once the lower-layer weights are determined, the upper-layer weights can be computed explicitly from the closed-form solution (8). Based on this solution, at each epoch all we need to adjust along the gradient direction are the lower-layer weights. The gradient ∂E/∂W is obtained by considering the upper-layer weights U under W's effect, with the squared error as the training objective. We obtain the gradient by treating U as a function of W and plugging (7) into criterion (5):

∂E/∂W = ∂Tr[(U^T H − T)(U^T H − T)^T]/∂W = 2X[H^T ∘ (1 − H)^T ∘ [H†(HT^T)(TH†) − T^T(TH†)]], (9)

in which H† = H^T(HH^T)^{-1} is the pseudoinverse of H and ∘ is the element-wise product.

In the derivation of (9), we used the fact that HH^T and (HH^T)^{-1} are symmetric. We also used the fact that

∂Tr[(HH^T)^{-1}HT^T TH^T]/∂H^T = −2H^T(HH^T)^{-1}HT^T TH^T(HH^T)^{-1} + 2T^T TH^T(HH^T)^{-1}. (10)

This algorithm updates W by using the gradient defined directly in (9):

W_{k+1} = W_k − ρ ∂E/∂W, (11)

where ρ is the learning rate. The learning rate here means the updating step, which is widely used in solving optimization problems: it indicates how far the weights should be moved in the direction of the given gradient ∂E/∂W to obtain the new weights that minimize the objective function. One simple way is to set a positive constant value for the entire training process. In this paper, we select a suitable learning rate by means of many different simulations. This algorithm uses the closed-form solution (8) to calculate U after updating W.

The algorithms proposed in this paper achieve significantly better classification accuracy than ELM when the same number of hidden units is used. To achieve the same classification accuracy, the algorithm requires only 1/16 of the model size, and therefore less test time is needed.
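The whole USA loop described above can be sketched as follows. This is our own illustrative implementation on synthetic data (the sigmoid kernel, the learning rate, and all shapes are assumed for the example): each epoch recomputes the hidden output H, solves for U via the closed-form solution (8), and then takes a descent step on the lower-layer weights using the gradient (9).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def usa_train(X, T, L=10, mu=1e-2, rho=1e-3, epochs=100, seed=0):
    """Sketch of USA training: per epoch, recompute H, solve U in closed
    form via (8), then move W against the gradient (9).
    X: (D, N) inputs, T: (C, N) targets; returns W, U, per-epoch errors."""
    rng = np.random.default_rng(seed)
    D, _ = X.shape
    W = rng.standard_normal((D, L))
    errs = []
    for _ in range(epochs):
        H = sigmoid(W.T @ X)                              # (L, N) hidden outputs
        A = mu * np.eye(L) + H @ H.T                      # regularized Gram matrix
        U = np.linalg.solve(A, H @ T.T)                   # closed form, eq. (8)
        errs.append(float(np.sum((U.T @ H - T) ** 2)))    # squared error, eq. (5)
        Hdag = np.linalg.solve(A, H).T                    # (N, L) pseudoinverse of H
        # Gradient (9): 2 X [H^T o (1-H)^T o [Hdag(H T^T)(T Hdag) - T^T(T Hdag)]]
        G = H.T * (1.0 - H).T * (Hdag @ (H @ T.T) @ (T @ Hdag) - T.T @ (T @ Hdag))
        W = W - rho * 2.0 * X @ G                         # descent step on W
    return W, U, errs

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 40))
T = rng.standard_normal((1, 40))
W, U, errs = usa_train(X, T)
```

The `H.T * (1.0 - H).T` factor is the derivative of the sigmoid kernel; a different kernel would replace that term.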

3 Input Data Selection

The four target research wells were drilled in the Bohai Bay basin, which is the largest offshore oil and gas production base in China. The oilfield has experienced many production stages since 1963, and currently many projects are carried out by China National Offshore Oil Corporation (CNOOC) and ConocoPhillips China (COPC). To effectively increase production and decrease drilling cost, drilling speed control is necessary.

Due to drilling requirements and the similarity of wells located close to one another, collecting past data and utilizing the data in a useful manner have an important impact on drilling cost reduction; this means that optimum parameters are always in effect. As a matter of fact, there are various factors that can affect ROP. Previous studies show that the ROP is largely dependent on some critical parameters, which can be classified into three types: rig/bit related parameters, mud related parameters, and formation parameters. The rig/bit and mud related parameters can be manipulated, but the formation parameters have to be handled differently. Table 2 gives a brief classification of some important drilling parameters that affect ROP. To predict and optimize the ROP, both controllable and uncontrollable parameters were collected from daily drilling reports, lab investigations, well logs, and geological information.

Because it is impossible to treat all of the relevant drilling parameters in ROP prediction, it is necessary to choose essential data based on literature surveys and statistical analysis. It was concluded that the rock properties, the drilling machine parameters, and mud properties relate to the ROP. In fact, a total of 18 drilling parameters were gathered in this study: depth, torque, rotary speed (RPM), weight on bit, flow rates, active PVT, pump pressure, hook load, differential pressure, mud weight, mud viscosity, formation drillability, formation abrasiveness, UCS, porosity, bit size, and bit type.

(1) The bit type/size can affect ROP in a given formation. Roller cone bits generally have three cones that rotate around their axes, with teeth that are milled out of the matrix or inserted; the teeth combine crushing and shearing to fracture the rock. Drag bits usually consist of a fixed cutter mechanism that can be cutting blades, diamond stones, or cutters. Today, the polycrystalline diamond compact (PDC) bit is the most commonly used drag bit, with PDC, diamond impregnated, or diamond hybrid cutters mounted on the bit blades/body. Drag bits disintegrate rock primarily through shearing. The difference in the design of roller cone and drag bits, and their different rock disintegration mechanisms, requires different ROP models. Thus, the effects of the bit type are included in this model.

(2) Operational parameters such as pump pressure and differential pressure are the most important hydraulic parameters affecting ROP. Pump pressure is sometimes increased to achieve a higher penetration rate for a certain depth of advance while keeping rotation constant. In addition, the mechanical factors of weight on bit and rotary speed have linear relationships with ROP. An increase in weight on bit pushes the bit teeth or cutters further down into the formation and disintegrates more rock, resulting in faster ROP. Bit rotation can be increased to obtain a higher penetration rate while keeping the bit load constant.

(3) Formation mechanical characteristics are uncontrollable variables for controlling ROP values. In general, the decrease in penetration rate is greatly attributed to the increase of depth, because rock strength increases and porosity decreases with depth. Moreover, the larger indirect drilling resistance that results from larger UCS and hardness will also limit the depth of cut for a given set of bit design and applied operating parameters. There are two main reasons for selecting rock strength as the representative of rock properties. One is that the rock strength is calculated as a function of depth, density, and porosity; the other is that this parameter can be computed from logging data, or from triaxial compressive experiments and scratch tests in a lab if core samples are available.

(4) The aim of drilling mud is to remove the loose rock chips away from the bit face while cooling the bit. Drilling mud weight, mud rheology, and solid content are therefore confirmed as important factors in ROP by some studies: ROP basically decreases with increasing mud viscosity, solid content, and mud weight.

In addition to the above parameters, some comprehensive parameters such as formation drillability and abrasiveness also need to be considered, because they can provide important information about ROP prediction that cannot be described by conventional drilling data. More importantly, the model inputs should be easily acquired in real time from the respective measurement gauges on the rig. Using additional parameters as inputs can result in a large network size and consequently slow down running speed and efficiency. Thus, bit size, bit type, bit wear, UCS, formation drillability, formation abrasiveness, rotary speed, WOB, pump pressure, mud viscosity, and mud density were selected as the 11 inputs for neural network simulation. The data set used in this paper consists of more than 5000 measurements taken from this area at different wells and with different drilling rigs. Figure 2 illustrates three types of PDC bits used in the drilling operation in this area.

It should be noted that data can very often be non-numeric. Because neural networks cannot be trained with non-numeric data, it is necessary to translate non-numeric


Table 2 ROP related drilling parameters classification

Rig and bit related parameters | Formation parameters | Drilling fluid properties
Weight on bit (WOB) | Local stresses | Mud weight
Torque | Hardness | Viscosity
Rotary speed (RPM) | Mineralogy | Filtrate loss
Flow rates | Porosity and permeability | Solid content
Pump stroke speed (SPM) | Formation abrasiveness | Gel strength
Pump pressure | Drillability | Mud pH
Hook load | Depth | Yield point
Bit wear | Temperature |
Type of the bit | Unconfined compressive strength (UCS) |

Figure 2: Three types of PDC drilling bits.

data into numeric data. In this study, formation drillability, formation abrasiveness, bit size, and bit type are non-numeric, and numbered classes are used to perform the non-numeric translation. The formation drillability ranged between 30 and 75; the highest number represents the highest drillability, while the lower drillability values correspond to hard formations such as shales. The formation abrasiveness ranged between 1 and 8, where 8 signifies the most abrasive formation. In addition, the bit type is assigned a number between 1 and 12, where 12 corresponds to the bit type with the highest average ROP value. The bit wear values are set between 0 and 9, where 9 indicates that a tooth is completely worn out. Table 3 shows the recorded parameters and their ranges.
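As a tiny illustration of this class-numbering scheme (the category labels and assigned ranks below are hypothetical, not the paper's actual assignments):

```python
# Hypothetical rank table: bit type -> 1..12, where 12 = highest average ROP
BIT_TYPE_RANK = {"PDC-A": 12, "PDC-B": 7, "roller-cone": 3}

def encode_row(row):
    """Translate one non-numeric drilling record into a numeric feature list."""
    return [BIT_TYPE_RANK[row["bit_type"]],   # bit type encoded as its 1-12 rank
            row["bit_wear"],                  # already numeric, 0-9 scale
            row["abrasiveness"]]              # already numeric, 1-8 scale

row = {"bit_type": "PDC-A", "bit_wear": 4, "abrasiveness": 6}
numeric = encode_row(row)   # -> [12, 4, 6]
```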

4 Experimental Design

4.1. Model Architecture. Determining the optimal size of the neural network model is complex. An overly complex network tends to memorize the data without learning to generalize to new situations: it concentrates too much on the data presented for learning and tends toward modeling the noise. The total number of ROP records in the target wells sums to 5500. The training subset constitutes 75% of the total data, and the testing subset includes the remaining 25%. As mentioned above, the inputs of the neural networks include 10

Table 3 Summary of results for the recorded parameters

Parameters | Ranges
Bit size/inch | 5.813~8.5
Bit type | 1-12
Bit wear | 0-9
UCS/psi | 2412~21487
Formation drillability | 30-75
Formation abrasiveness | 1-8
Rotary speed/rpm | 84~126
WOB/10^3 lbs | 24~69
Drilling mud type | 1~3
Mud viscosity/cp | 1~70
Pump pressure/psi | 1651-3765
ROP/(ft/h) | 2.66~120.5

parameters, while the ROP is the only element in the output layer. Figure 3 shows the developed neural network architecture.

4.2. Model Performance Indicators. The performances of the neural network models are assessed by means of the correlation

Figure 3: Schematic of the architecture of the ELM network model. The node number of the input layer is 10 (bit size, bit type, UCS, formation drillability, formation abrasiveness, rotary speed, WOB, mud type, mud viscosity, and pump pressure), and ROP is the only node at the output layer.

coefficient (R²), mean absolute error (MAE), root mean square error (RMSE), and variance accounted for (VAF) performance indicators. R² represents the proportion of the overall variance explained by the model, which can be calculated as

R² = 1 − [Σ_{i=1}^{n} (f(x_i) − y_i)²] / [Σ_{i=1}^{n} f(x_i)² − (Σ_{i=1}^{n} y_i)²/n]. (12)

These performance indicators can give a sense of howgood the performance of a predictive model is relative to theactual value

MSE can be described as

MSE = (1/n) Σ_{i=1}^{n} (f(x_i) − y_i)². (13)

The RMSE is conventionally used as an error function for quality monitoring of the model; model performance increases as RMSE decreases. RMSE can be calculated by the following equation:

RMSE = √[(1/n) Σ_{i=1}^{n} (f(x_i) − y_i)²]. (14)

VAF is often used to evaluate the correctness of a model by comparing the measured values with the estimated output of the model. VAF is computed by the following equation:

VAF = (1 − var[f(x_i) − y_i] / var[f(x_i)]) × 100, (15)

where, for (12)-(15), y_i is the measured data, f(x_i) denotes the predicted data, x_1, ..., x_n are the input parameters, and n is the total number of data points.
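The indicators (13)-(15) are straightforward to compute; a short sketch (our own helper function, with made-up numbers standing in for measured and predicted ROP):

```python
import numpy as np

def rop_metrics(pred, meas):
    """RMSE, MAE, and VAF (%) as in (13)-(15); pred plays the role of f(x_i)
    and meas the role of y_i."""
    err = pred - meas
    rmse = float(np.sqrt(np.mean(err ** 2)))                  # eq. (14)
    mae = float(np.mean(np.abs(err)))                         # mean absolute error
    vaf = float((1.0 - np.var(err) / np.var(pred)) * 100.0)   # eq. (15)
    return rmse, mae, vaf

meas = np.array([50.0, 60.0, 80.0, 100.0])   # measured ROP (made-up values)
pred = np.array([52.0, 58.0, 83.0, 97.0])    # model predictions
rmse, mae, vaf = rop_metrics(pred, meas)     # rmse ~ 2.55, mae = 2.5, vaf ~ 98.1
```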

4.3. Model Parameters Selection. For the ANN, ELM, and USA models, the kernel function and the number of hidden nodes are critical parameters for neural network accuracy.

The accuracy with different numbers of hidden nodes was compared in terms of the RMSE performance indicator. Once the kernel function is determined, the hidden node number is increased incrementally until 100 is reached. Figure 4(a) illustrates the accuracy comparison with different node numbers for the ANN, ELM, and USA models. For the ELM model, the MSE between prediction results and measured values decreases as the node number of the hidden layer gradually increases beyond 65 nodes. For the USA model, the MSE between prediction results and measured values also decreases as the node number of the hidden layer gradually increases, except for individual volatile points. For the ANN model, the MSE between prediction results and measured values fluctuates as the node number of the hidden layer gradually increases, which indicates that the choice of hidden nodes can greatly affect final accuracy.

Four types of kernels have been tested using the training data set for the ELM model: the sigmoid function, radial-basis function, hardlim function, and triangular-basis function. Figure 4(b) shows the accuracy comparison with different kernel functions for the ELM model. Among the four kernels, the sigmoid-based model reaches the threshold first, when the node number is set to 40, while an overfitting problem appears as the node number exceeds 60. The hardlim-based, triangular-based, and radial-basis based models display a similar trend. However, it seems that the node number must exceed 60 before the minimum errors come close to the threshold for the hardlim-based model. For the triangular-based model, the RMSE reaches its lowest point of 3.11 when the node number is set to 85, while for the radial-basis based model the RMSE reaches its lowest point of 3.12 when the node number is set to 95. Figure 4(c) shows the accuracy comparison with different kernel functions for the USA model. Observe that the radial-basis based model reaches the threshold first, when the node number is set to 65. As for the hardlim-based model, it seems that the node number must exceed 60 before the minimum


Figure 4: (a) The RMSE change with the increase of hidden nodes; (b) comparison of RMSE with different kernel functions for the ELM network model; (c) comparison of RMSE with different kernel functions for the USA network model.

errors come close to the threshold, which is similar to the ELM model. The triangular-based and sigmoid-based models display a similar trend: the RMSE reaches its lowest point of 3.11 when the node number is set to 80 for the sigmoid model, while the RMSE reaches its lowest point of 3.48 when the node number is set to 94 for the triangular-basis model.

For comparison, the classical ANN model was also used in this paper, with the sigmoid kernel selected. The applied ANN architecture is a feedforward neural network, a structure in which the information propagates in one direction from input to output. A back-propagation algorithm with the Levenberg-Marquardt training function has been used for training; this algorithm can approximate any nonlinear continuous function to arbitrary accuracy. Based on the experimental tests, the ANN network model obtains its best prediction accuracy when the node number of the hidden layer is set to 36.
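The kernel/node-count sweep described in this subsection can be sketched roughly as follows. This is our own construction: synthetic stand-in data replaces the field records, and the hyperparameters are illustrative.

```python
import numpy as np

# Kernels named as in the paper: sig, hardlim, radbas, tribas
KERNELS = {
    "sig": lambda z: 1.0 / (1.0 + np.exp(-z)),
    "hardlim": lambda z: (z >= 0).astype(float),
    "radbas": lambda z: np.exp(-z ** 2),
    "tribas": lambda z: np.maximum(0.0, 1.0 - np.abs(z)),
}

def elm_rmse(Xtr, ttr, Xte, tte, L, kernel, mu=1e-3, seed=0):
    """Fit an ELM with L hidden units (random lower layer, closed-form upper
    layer) and return the RMSE on the held-out split."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((Xtr.shape[0], L))            # fixed random weights
    H = kernel(W.T @ Xtr)                                 # (L, Ntr) hidden outputs
    U = np.linalg.solve(mu * np.eye(L) + H @ H.T, H @ ttr.T)
    pred = U.T @ kernel(W.T @ Xte)
    return float(np.sqrt(np.mean((pred - tte) ** 2)))

# Synthetic stand-in for the drilling data: 10 inputs, one ROP-like output
rng = np.random.default_rng(42)
Xtr, Xte = rng.standard_normal((10, 300)), rng.standard_normal((10, 100))
w_true = rng.standard_normal((1, 10))
ttr, tte = np.tanh(w_true @ Xtr), np.tanh(w_true @ Xte)

# RMSE per kernel for hidden-node counts 20, 40, ..., 100
sweep = {name: [elm_rmse(Xtr, ttr, Xte, tte, L, k) for L in range(20, 101, 20)]
         for name, k in KERNELS.items()}
```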

5 Results and Discussion

When the parameters for the neural network model are finally settled, the next step is to validate the model using a testing


Table 4: The calculated performance indicators for the ANN, ELM, and USA models.

Data set | Performance indicator | ANN | ELM | USA
Training | RMSE | 1.51 | 0.95 | 0.78
Training | MAE | 7.5 | 4.2 | 2.1
Training | VAF | 96.88 | 100 | 100
Training | R² | 0.91 | 0.93 | 0.98
Testing | RMSE | 3.56 | 3.11 | 2.76
Testing | MAE | 12.7 | 9.7 | 7.65
Testing | VAF | 91.88 | 92.82 | 94.52
Testing | R² | 0.90 | 0.92 | 0.94

data set. In this section, prediction performance is compared between the ANN, ELM, and USA models, and the training time is recorded. The model accuracy can be evaluated by comparing the calculated RMSE, VAF, MAE, and R² performance indicators for the training and testing data sets presented in Table 4. Due to the random subdivision process, the performances of the models for the training and validation data sets are slightly different. As seen from Table 4, the prediction performance in terms of the predefined indices is better for the testing set than for the training set. Overall, R² values for the developed models were more than 0.95 at the training phase, while in the case of the conventional ANN at the testing phase R² was found to be 0.9. The high performance of the models for the testing set can be considered an indication of their good generalization capabilities. An R² value greater than 0.9 indicates a very satisfactory model performance, while an R² value between 0.8 and 0.9 indicates an acceptable model. Therefore, all of the developed neural networks are competent for ROP prediction. An R² of 0.99 for both the ELM and USA models in the training phase indicates higher accuracy compared to the ANN model (R² of 0.88). Moreover, it can also be observed that the highest VAF of 94.52, the lowest RMSE of 2.76, and the lowest MAE of 7.65 belong to the USA model, indicating that the USA based model gives better prediction performance than the other models. In addition, the ELM model can also produce acceptable results, yielding slightly smaller residual errors than the ANN model; in other words, the deviation from the observed penetration rate values predicted by ELM is less than the deviation of the ANN model's prediction. Figure 5 shows the regression analysis of the predicted and measured ROP by the different models in both the training and testing phases. According to the running speed with the increase of the number of hidden nodes in Figure 6, obvious differences are observed: the time cost for the ELM model is less than for the other two models. To visualize the quality of the prediction, the predicted penetration rates and residual errors of the different models are compared with the measured ones for the overall data set in Figures 7 and 8, respectively. The low deviation obtained by the three models also proves that ANN, ELM, and USA are competent for ROP prediction. Thus, the ELM and USA methods can both achieve high accuracy and maintain high running speeds. This study shows that ELM technology is a promising tool for ROP prediction, and this work can

be incorporated into a software system that can be used in drilling optimization guidance.

6 Conclusions

(1) This study investigates the feasibility of ELM and its variation, the USA model, for predicting the ROP of offshore drilling operations using recorded data. The ELM and USA models can successfully predict ROP in real time, and this prediction can be used as a reference range for drilling engineers when identifying drilling problems and making decisions.

(2) Choosing the relevant parameters is typically difficult, and previous literature informed the input selection. The input rock material properties are the uniaxial compressive strengths of the different rock types, formation drillability, and abrasiveness, which can be calculated from lab cores and well logging data. Moreover, the pump pressure and differential pressure are selected as important hydraulic parameters that affect ROP, while RPM and WOB are treated as the equipment operational parameters. In addition to these parameters, drilling mud properties such as mud viscosity and bit related parameters are also involved in developing the neural network models. However, for systems with data values beyond the range of this study, or for a new bit, it is necessary to develop new neural networks. Conversely, a wider range of inputs can enable the developed neural network models to have wider applications in select cases.

(3) The results of the USA model were compared to those of the classical ANN and ELM models. Performance of the models was checked by using the R², MAE, RMSE, and VAF performance indicators. According to these indicators, all of the neural networks are competent for ROP prediction, but the prediction performance of the USA model was found to be better than that of the other two models with respect to accuracy, while the ELM model had the lowest running time. Compared to conventional ANN models, the results of this work show the potential of fast and flexible neural network approaches to model the ROP with high accuracy while maintaining running speed. Therefore, drilling engineers can make better

Figure 5: Regression analysis of the predicted and measured ROP (ft/h), with each panel plotted against the 1:1 line. (a) ANN testing and training (R² = 0.9077 and 0.916); (b) ELM testing and training (R² = 0.9233 and 0.9319); (c) USA testing and training (R² = 0.9422 and 0.9829).

Figure 6: Time cost with the change of the number of hidden nodes for the ANN, ELM, and USA models.

Figure 7: The predicted and measured ROP along the depth (8600-8800 ft) with the developed models.

Figure 8: Residual errors of the predicted ROP by the developed models.

choices according to accuracy and computational demand in practical use.

Competing Interests

The authors declare that there are no competing interests regarding the publication of this paper.

Acknowledgments

This work was financially supported by the National Key Basic Research Program of China (2015CB251206), the Fundamental Research Funds for the Central Universities (nos. 15CX05053A, 15CX08011A, and 15CX02009A), the Qingdao Science and Technology Project (no. 15-9-1-55-jch), the National Natural Science Foundation of China (no. 61305075), the Natural Science Foundation of Shandong Province (no. ZR2013FQ004), and the Specialized Research Fund for the Doctoral Program of Higher Education of China (no. 20130133120014).

References

[1] V. P. Perrin, M. G. Wilmot, and W. L. Alexander, "Drilling index: a new approach to bit performance evaluation," in Proceedings of the SPE/IADC Drilling Conference, Paper SPE 37595, pp. 199–205, Amsterdam, The Netherlands, March 1997.
[2] M. Bataee and S. Mohseni, "Application of artificial intelligent systems in ROP optimization: a case study in Shadegan oil field," in Proceedings of the SPE Middle East Unconventional Gas Conference and Exhibition (UGAS '11), pp. 13–22, February 2011.
[3] M. H. Bahari, A. Bahari, F. N. Moharrami, and M. B. N. Sistani, "Determining Bourgoyne and Young model coefficients using genetic algorithm to predict drilling rate," Journal of Applied Sciences, vol. 8, no. 17, pp. 3050–3054, 2008.
[4] D. F. Howarth, W. R. Adamson, and J. R. Berndt, "Correlation of model tunnel boring and drilling machine performances with rock properties," International Journal of Rock Mechanics and Mining Sciences, vol. 23, no. 2, pp. 171–175, 1986.
[5] S. Kahraman, "Correlation of TBM and drilling machine performances with rock brittleness," Engineering Geology, vol. 65, no. 4, pp. 269–283, 2002.
[6] S. Akin and C. Karpuz, "Estimating drilling parameters for diamond bit drilling operations using artificial neural networks," International Journal of Geomechanics, vol. 8, no. 1, pp. 68–73, 2008.
[7] V. Uboldi, L. Civolani, and F. Zausa, "Rock strength measurements on cuttings as input data for optimizing drill bit selection," in Proceedings of the SPE Annual Technical Conference and Exhibition, Paper SPE 56441, pp. 35–43, Houston, Tex, USA, October 1999.
[8] M. J. Fear, "How to improve rate of penetration in field operations," in Proceedings of the SPE/IADC Drilling Conference, Paper SPE 55050, New Orleans, La, USA, 1998.
[9] R. Rommetveit, K. S. Bjørkevoll, G. W. Halsey et al., "Drilltronics: an integrated system for real-time optimization of the drilling process," in Proceedings of the IADC/SPE Drilling Conference, vol. 87124, pp. 275–282, SPE, Dallas, Tex, USA, March 2004.
[10] A. H. Paiaman, M. K. Ghassem Al-Askari, B. Salmani, B. D. Al-Anazi, and M. Masihi, "Effect of drilling fluid properties on rate of penetration," NAFTA, vol. 60, no. 3, pp. 129–134, 2009.
[11] A. Bahari and A. B. Seyed, "Trust region approach to find constants of Bourgoyne and Young penetration rate model in Khangiran Iranian gas field," in Proceedings of the SPE Latin American and Caribbean Petroleum Engineering Conference, Paper SPE 107520, Buenos Aires, Argentina, 2007.
[12] H. I. Bilgesu, L. T. Tetrick, U. Altmis, Sh. Mohaghegh, and S. Ameri, "A new approach for the prediction of penetration (ROP) values," in Proceedings of the SPE Eastern Regional Meeting, Paper SPE 39231, Lexington, Kentucky, 1997.
[13] R. Jahanbakhshi, R. Keshavarzi, and A. Jafarnezhad, "Real-time prediction of rate of penetration during drilling operation in oil and gas wells," in Proceedings of the 46th US Rock Mechanics/Geomechanics Symposium, pp. 2390–2396, Chicago, Ill, USA, June 2012.
[14] S. Yilmaz, C. Demircioglu, and S. Akin, "Application of artificial neural networks to optimum bit selection," Computers and Geosciences, vol. 28, no. 2, pp. 261–269, 2002.
[15] H. Rahimzadeh, M. Mostofi, A. Hashemi, and K. Salahshoor, "Comparison of the penetration rate models using field data for one of the gas fields in Persian Gulf area," in Proceedings of the International Oil and Gas Conference and Exhibition, Paper SPE 131253, Beijing, China, 2010.
[16] G.-B. Huang, D. H. Wang, and Y. Lan, "Extreme learning machines: a survey," International Journal of Machine Learning and Cybernetics, vol. 2, no. 2, pp. 107–122, 2011.
[17] G.-B. Huang, H. Zhou, X. Ding, and R. Zhang, "Extreme learning machine for regression and multiclass classification," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 2, pp. 513–529, 2012.
[18] A. Mozaffari and N. L. Azad, "Optimally pruned extreme learning machine with ensemble of regularization techniques and negative correlation penalty applied to automotive engine cold-start hydrocarbon emission identification," Neurocomputing, vol. 131, pp. 143–156, 2014.
[19] R. Moreno, F. Corona, A. Lendasse, M. Graña, and L. S. Galvão, "Extreme learning machines for soybean classification in remote sensing hyperspectral images," Neurocomputing, vol. 128, pp. 207–216, 2014.
[20] D. Yu and L. Deng, "Efficient and effective algorithms for training single-hidden-layer neural networks," Pattern Recognition Letters, vol. 33, no. 5, pp. 554–558, 2012.
[21] C. Gokceoglu and K. Zorlu, "A fuzzy model to predict the uniaxial compressive strength and the modulus of elasticity of a problematic rock," Engineering Applications of Artificial Intelligence, vol. 17, no. 1, pp. 61–72, 2004.
[22] J. Zhang, J. Li, Y. Hu, and J. Zhou, "The identification method of igneous rock lithology based on data mining technology," Applied Mechanics and Materials, vol. 466-467, pp. 65–69, 2012.
[23] J. Cao, J. Yang, Y. Wang, D. Wang, and Y. Shi, "Extreme learning machine for reservoir parameter estimation in heterogeneous sandstone reservoir," Mathematical Problems in Engineering, vol. 2015, Article ID 287816, 10 pages, 2015.
[24] E. Khamehchi, I. R. Kivi, and M. Akbari, "A novel approach to sand production prediction using artificial intelligence," Journal of Petroleum Science and Engineering, vol. 123, pp. 147–154, 2014.
[25] K. Amar and A. Ibrahim, "Rate of penetration prediction and optimization using advances in artificial neural networks: a comparative study," in Proceedings of the 4th International Joint Conference on Computational Intelligence (IJCCI '12), pp. 647–652, Barcelona, Spain, October 2012.
[26] D. Moran, H. Ibrahim, A. Purwanto, and J. Osmond, "Sophisticated ROP prediction technology based on neural network delivers accurate results," in Proceedings of the IADC/SPE Asia Pacific Drilling Technology Conference and Exhibition, Paper SPE 132010, pp. 100–108, Ho Chi Minh City, Vietnam, November 2010.
[27] M. Monazami, A. Hashemi, and M. Shahbazian, "Drilling rate of penetration prediction using artificial neural network: a case study of one of Iranian southern oil fields," Journal of Oil and Gas Business, no. 6, pp. 21–31, 2012.
[28] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: theory and applications," Neurocomputing, vol. 70, no. 1–3, pp. 489–501, 2006.
[29] X. Liu, Y. Chang, H. Lu et al., "Optimizing fracture stimulation in low-permeability oil reservoirs in the Ordos Basin," in Proceedings of the SPE Asia Pacific Oil and Gas Conference and Exhibition, Paper SPE 158562, Perth, Australia, 2012.


Mathematical Problems in Engineering 5

Note that if $\mathbf{x}_i$ and $\mathbf{h}_i$ are augmented with ones, the bias terms are implicitly represented in the above formulations.

$\mathbf{T} = [\mathbf{t}_1, \ldots, \mathbf{t}_i, \ldots, \mathbf{t}_N]$ is given as the target matrix, each target is denoted by $\mathbf{t}_i = [t_{1i}, \ldots, t_{ji}, \ldots, t_{Ci}]^T$, and the parameters $\mathbf{U}$ and $\mathbf{W}$ are used to minimize the square error

$$E = \left\|\mathbf{Y} - \mathbf{T}\right\|^2 = \mathrm{Tr}\left[\left(\mathbf{Y} - \mathbf{T}\right)\left(\mathbf{Y} - \mathbf{T}\right)^T\right], \quad (5)$$

where $\mathbf{Y} = [\mathbf{y}_1, \ldots, \mathbf{y}_i, \ldots, \mathbf{y}_N]$. Note that the hidden-layer values $\mathbf{H} = [\mathbf{h}_1, \ldots, \mathbf{h}_i, \ldots, \mathbf{h}_N]$ are also determined uniquely if the lower-layer weights $\mathbf{W}$ are fixed. After setting the gradient

$$\frac{\partial E}{\partial \mathbf{U}} = \frac{\partial\,\mathrm{Tr}\left[\left(\mathbf{U}^T\mathbf{H} - \mathbf{T}\right)\left(\mathbf{U}^T\mathbf{H} - \mathbf{T}\right)^T\right]}{\partial \mathbf{U}} = 2\mathbf{H}\left(\mathbf{U}^T\mathbf{H} - \mathbf{T}\right)^T \quad (6)$$

to zero, the upper-layer weights $\mathbf{U}$ can be determined in the closed-form solution [20]:

$$\mathbf{U} = \left(\mathbf{H}\mathbf{H}^T\right)^{-1}\mathbf{H}\mathbf{T}^T. \quad (7)$$

Note that (7) defines an implicit constraint between the set of weights $\mathbf{U}$ and the set of weights $\mathbf{W}$ through the hidden-layer output $\mathbf{H}$ in the SHLNN. This constraint is the structure that our new algorithms exploit in optimizing the SHLNN.

Although solving (7) is relatively simple, regularization techniques need to be used in actual applications because the hidden-layer matrix $\mathbf{H}$ is sometimes ill-conditioned (i.e., $\mathbf{H}\mathbf{H}^T$ is singular). The popular technique used in this study is based on ridge regression theory. More specifically,

by adding the positive term $\mu\mathbf{I}$ to the diagonal of $\mathbf{H}\mathbf{H}^T$, (7) is converted to
$$\mathbf{U} = \left(\mu\mathbf{I} + \mathbf{H}\mathbf{H}^T\right)^{-1}\mathbf{H}\mathbf{T}^T, \quad (8)$$

where $\mathbf{I}$ is the identity matrix and $\mu$ is a positive constant (the regularization coefficient) that controls the degree of regularization. The final solution (8) actually minimizes $\|\mathbf{U}^T\mathbf{H} - \mathbf{T}\|^2 + \mu\|\mathbf{U}\|^2$, in which $\mu\|\mathbf{U}\|^2$ is an $L_2$ regularization term. The coefficient $\mu$ represents the relative importance of the complexity-penalty term $\|\mathbf{U}\|^2$ with respect to the empirical risk $\|\mathbf{U}^T\mathbf{H} - \mathbf{T}\|^2$. When $\mu$ approaches zero, the model tends to an unconstrained optimization problem and the solution is completely determined by the training samples. When $\mu$ is made infinitely large, the training samples are treated as unreliable, forcing the weights $\mathbf{U}$ to be sufficiently small. In real applications the regularization parameter $\mu$ is set to a value somewhere between these two extremes, and a reasonable choice of $\mu$ can substantially affect both the learning and the prediction performance (generalization ability). Solution (8) shows better stability and better generalization performance than (7), and it is used throughout the paper whenever the pseudoinverse is involved.
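As a minimal numpy sketch of the two closed-form solutions (7) and (8), using simulated dimensions and data (the sizes and the value of $\mu$ here are illustrative assumptions, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(42)
L, N, C = 20, 100, 1              # hidden units, samples, outputs (assumed sizes)
H = rng.random((L, N))            # hidden-layer output matrix
T = rng.random((C, N))            # target matrix

# Eq. (7): unregularized closed-form upper-layer weights, U = (H H^T)^{-1} H T^T.
# Solving the linear system is preferred over forming the explicit inverse.
U_plain = np.linalg.solve(H @ H.T, H @ T.T)

# Eq. (8): ridge-regularized solution, U = (mu I + H H^T)^{-1} H T^T.
mu = 0.1                          # regularization coefficient (illustrative)
U_ridge = np.linalg.solve(mu * np.eye(L) + H @ H.T, H @ T.T)

# (8) is the stationary point of ||U^T H - T||^2 + mu ||U||^2, so it satisfies
# (mu I + H H^T) U = H T^T even when H H^T is nearly singular, and it shrinks
# the weight norm relative to (7).
```

The shrinkage follows because, in the eigenbasis of $\mathbf{H}\mathbf{H}^T$, each component of the ridge solution is the unregularized component scaled by $\lambda/(\lambda+\mu) < 1$.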

The principle of this algorithm is very simple. Once the lower-layer weights are determined, the upper-layer weights can be computed explicitly with the closed-form solution (8). Based on this solution, at each epoch all we need to adjust along the gradient direction are the lower-layer weights. The gradient $\partial E/\partial \mathbf{W}$ is obtained by taking the dependence of the upper-layer weights $\mathbf{U}$ on $\mathbf{W}$ into account, with the square error as the training objective function. We obtain the gradient by treating $\mathbf{U}$ as a function of $\mathbf{W}$ and plugging (7) into criterion (5):

$$\frac{\partial E}{\partial \mathbf{W}} = \frac{\partial\,\mathrm{Tr}\left[\left(\mathbf{U}^T\mathbf{H} - \mathbf{T}\right)\left(\mathbf{U}^T\mathbf{H} - \mathbf{T}\right)^T\right]}{\partial \mathbf{W}}\Bigg|_{\mathbf{U} = (\mathbf{H}\mathbf{H}^T)^{-1}\mathbf{H}\mathbf{T}^T} = 2\mathbf{X}\left[\mathbf{H}^T \circ \left(\mathbf{1} - \mathbf{H}\right)^T \circ \left[\mathbf{H}^{\dagger}\left(\mathbf{H}\mathbf{T}^T\right)\left(\mathbf{T}\mathbf{H}^{\dagger}\right) - \mathbf{T}^T\left(\mathbf{T}\mathbf{H}^{\dagger}\right)\right]\right], \quad (9)$$

in which $\mathbf{H}^{\dagger} = \mathbf{H}^T(\mathbf{H}\mathbf{H}^T)^{-1}$ is the pseudoinverse of $\mathbf{H}$ and $\circ$ denotes the element-wise product.

In the derivation of (9) we used the fact that $\mathbf{H}\mathbf{H}^T$ and $(\mathbf{H}\mathbf{H}^T)^{-1}$ are symmetric. We also used the fact that

$$\frac{\partial\,\mathrm{Tr}\left[\left(\mathbf{H}\mathbf{H}^T\right)^{-1}\mathbf{H}\mathbf{T}^T\mathbf{T}\mathbf{H}^T\right]}{\partial \mathbf{H}^T} = -2\mathbf{H}^T\left(\mathbf{H}\mathbf{H}^T\right)^{-1}\mathbf{H}\mathbf{T}^T\mathbf{T}\mathbf{H}^T\left(\mathbf{H}\mathbf{H}^T\right)^{-1} + 2\mathbf{T}^T\mathbf{T}\mathbf{H}^T\left(\mathbf{H}\mathbf{H}^T\right)^{-1}. \quad (10)$$

This algorithm updates $\mathbf{W}$ by using the gradient defined directly in (9) as

$$\mathbf{W}_{k+1} = \mathbf{W}_k + \rho\,\frac{\partial E}{\partial \mathbf{W}}, \quad (11)$$

where $\rho$ is the learning rate. The learning rate here is the updating step widely used in solving optimization problems: it determines how far the weights are moved in the direction of the given gradient $\partial E/\partial \mathbf{W}$ to obtain the new weights that minimize the objective function. One simple approach is to use a positive constant value throughout the entire training process; in this paper we select a suitable learning rate by means of many different simulations. This algorithm uses the closed-form solution (8) to recalculate $\mathbf{U}$ after updating $\mathbf{W}$.

The algorithms proposed in this paper achieve significantly better classification accuracy than ELM when the same number of hidden units is used. To achieve the same classification accuracy, the algorithm requires only 1/16 of the model size, and therefore less test time is needed.
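To sanity-check the reconstructed gradient (9), the following numpy sketch (with small assumed dimensions and simulated data) computes the closed-form $\mathbf{U}$ of (7), evaluates (9), and compares it against a finite-difference gradient of criterion (5); a small step against this gradient then lowers the squared error:

```python
import numpy as np

rng = np.random.default_rng(0)
d, L, C, N = 3, 4, 2, 20           # inputs, hidden units, outputs, samples (assumed)
X = rng.normal(size=(d, N))        # input matrix, columns are samples
T = rng.normal(size=(C, N))        # target matrix
W = 0.5 * rng.normal(size=(d, L))  # lower-layer weights

def hidden(W):
    return 1.0 / (1.0 + np.exp(-(W.T @ X)))        # sigmoid hidden layer, H is L x N

def upper(H):
    # Eq. (7): closed-form upper-layer weights U = (H H^T)^{-1} H T^T
    return np.linalg.solve(H @ H.T, H @ T.T)

def sq_error(W):
    H = hidden(W)
    U = upper(H)
    return np.sum((U.T @ H - T) ** 2)              # criterion (5)

def grad_W(W):
    # Eq. (9): dE/dW = 2 X [H^T o (1-H)^T o [H+(H T^T)(T H+) - T^T (T H+)]]
    H = hidden(W)
    Hp = H.T @ np.linalg.inv(H @ H.T)              # pseudoinverse H+ = H^T (H H^T)^{-1}
    inner = Hp @ (H @ T.T) @ (T @ Hp) - T.T @ (T @ Hp)
    return 2.0 * X @ (H.T * (1.0 - H).T * inner)

# Central finite-difference check of Eq. (9)
num = np.zeros_like(W)
eps = 1e-6
for i in range(d):
    for j in range(L):
        Wp = W.copy(); Wp[i, j] += eps
        Wm = W.copy(); Wm[i, j] -= eps
        num[i, j] = (sq_error(Wp) - sq_error(Wm)) / (2 * eps)
```

Note that because the $\mathbf{U}$ of (7) already minimizes (5) for the current $\mathbf{H}$, differentiating through the closed form and holding $\mathbf{U}$ fixed at its optimum yield the same gradient; also, reducing $E$ requires stepping against the gradient, so the sign convention in the update depends on how $\rho$ is defined.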

3. Input Data Selection

The four target research wells were drilled in the Bohai Bay basin, which is the largest offshore oil and gas production base in China. The oilfield has experienced many production stages since 1963, and currently many projects are carried out by China National Offshore Oil Corporation (CNOOC) and ConocoPhillips China (COPC). To effectively increase production and decrease drilling cost, drilling speed control is necessary.

Due to drilling requirements and the similarity of wells located close to one another, collecting past data and utilizing the data in a useful manner have an important impact on drilling cost reduction, because they keep the operation close to optimum parameters. As a matter of fact, there are various factors that can affect ROP. Previous studies show that the ROP largely depends on some critical parameters, which can be classified into three types: rig/bit related parameters, mud related parameters, and formation parameters. The rig/bit and mud related parameters can be manipulated, but the formation parameters have to be handled differently. Table 2 gives a brief classification of some important drilling parameters that affect ROP. To predict and optimize the ROP, both controllable and uncontrollable parameters were collected from daily drilling reports, lab investigations, well logs, and geological information.

Because it is impossible to treat all of the relevant drilling parameters in ROP prediction, it is necessary to choose essential data based on literature surveys and statistical analysis. It was concluded that the rock properties, the drilling machine parameters, and the mud properties relate to the ROP. In fact, a total of 18 drilling parameters were gathered in this study: depth, torque, rotary speed (RPM), weight on bit, flow rates, active PVT, pump pressure, hook load, differential pressure, mud weight, mud viscosity, formation drillability, formation abrasiveness, UCS, porosity, bit size, and bit type.

(1) The bit type/size can affect ROP in a given formation. Roller cone bits generally have three cones that rotate around their axes, with teeth milled out of the matrix or inserted; the teeth combine crushing and shearing to fracture the rock. Drag bits usually consist of a fixed cutter mechanism that can be cutting blades, diamond stones, or cutters. Today, polycrystalline diamond compact (PDC) is the most commonly used drag bit, with PDC, diamond impregnated, or diamond hybrid cutters mounted on the bit blades/body. Drag bits disintegrate rock primarily through shearing. The difference in the design of roller cone and drag bits and their different rock disintegration mechanisms require different ROP models. Thus, the effects of the bit type are included in this model.

(2) Operational parameters such as pump pressure and differential pressure are the most important operational hydraulic parameters affecting ROP. Pump pressure is sometimes increased to achieve a higher penetration rate for a certain depth of advance while keeping rotation constant. In addition, the mechanical factors of weight on the bit and rotary speed have linear relationships with ROP. An increase in weight on the bit pushes the bit teeth or cutters further down into the formation and disintegrates more rock, resulting in faster ROP. Bit rotation can be increased to obtain a higher penetration rate while keeping the bit load constant.

(3) Formation mechanical characteristics are uncontrollable variables for controlling ROP values. In general, the decrease of penetration rate is greatly attributed to the increase of depth, because with increasing depth rock strength increases and porosity decreases. Moreover, the larger indirect drilling resistance that results from larger UCS and hardness will also limit the depth of cut for a given set of bit design and applied operating parameters. There are two main reasons for selecting rock strength as the representative of rock properties: one is that the rock strength can be calculated as a function of depth, density, and porosity; the other is that this parameter can be computed from logging data, or from triaxial compressive experiments and scratch tests in a lab if core samples are available.

(4) The aim of drilling mud is to remove the loose rock chips away from the bit face while cooling the bit. Mud weight, mud rheology, and solid content have therefore been confirmed as important factors in ROP by several studies: ROP basically decreases with increasing mud viscosity, solid content, and mud weight.

In addition to the above parameters, some comprehensive parameters such as formation drillability and abrasiveness also need to be considered, because they provide important information about ROP that cannot be described by conventional drilling data. More importantly, the model inputs should be easily acquired in real time from the respective measurement gauges on the rig. Using additional parameters as inputs can result in a large network size and consequently slow down running speed and efficiency. Thus, bit size, bit type, bit wear, UCS, formation drillability, formation abrasiveness, rotary speed, WOB, pump pressure, mud viscosity, and mud density were selected as the 11 inputs for neural network simulation. The data set used in this paper consists of more than 5000 measurements taken from this area at different wells and with different drilling rigs. Figure 2 illustrates three types of PDC bits used in the drilling operation in this area.

Table 2: ROP related drilling parameters classification.

Rig and bit related parameters | Formation parameters                   | Drilling fluid properties
Weight on bit (WOB)            | Local stresses                         | Mud weight
Torque                         | Hardness                               | Viscosity
Rotary speed (RPM)             | Mineralogy                             | Filtrate loss
Flow rates                     | Porosity and permeability              | Solid content
Pump stroke speed (SPM)        | Formation abrasiveness                 | Gel strength
Pump pressure                  | Drillability                           | Mud pH
Hook load                      | Depth                                  | Yield point
Bit wear                       | Temperature                            |
Type of the bit                | Unconfined compressive strength (UCS)  |

Figure 2: Three types of PDC drilling bits.

It should be noted that data can very often be nonnumeric. Because neural networks cannot be trained with nonnumeric data, it is necessary to translate nonnumeric data into numeric data. In this study, formation drillability, formation abrasiveness, bit size, and bit type are nonnumeric, and numbered classes are used to perform the translation. The formation drillability ranges between 30 and 75, where the highest number represents the highest drillability and lower values correspond to hard formations such as shales. The formation abrasiveness ranges between 1 and 8, where 8 signifies the most abrasive formation. In addition, the bit type is assigned a number between 1 and 12, where 12 corresponds to the bit type with the highest average ROP. The bit wear values range between 0 and 9, where 9 indicates that a tooth is completely worn out. Table 3 shows the recorded parameters and their ranges.
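The class-numbering translation described above can be sketched as follows (the specific category labels here are hypothetical illustrations, not the actual field values):

```python
# Map nonnumeric drilling parameters to ordinal class numbers, as described in
# the text. The bit-type names below are hypothetical examples.
bit_type_codes = {"PDC-A": 12, "PDC-B": 8, "roller-cone": 3}  # 1-12, 12 = highest average ROP
bit_wear_scale = range(0, 10)                                  # 0-9, 9 = completely worn tooth

def encode_record(bit_type: str, bit_wear: int, drillability: float) -> list:
    """Return a numeric feature-vector fragment for one drilling record."""
    assert bit_wear in bit_wear_scale
    assert 30 <= drillability <= 75       # drillability class range from the text
    return [bit_type_codes[bit_type], bit_wear, drillability]

print(encode_record("PDC-A", 2, 55.0))    # -> [12, 2, 55.0]
```

Ordinal codes like these preserve the ranking implied by the text (e.g. higher bit-type code means higher average ROP), which is what lets the network treat the categories as a single numeric input.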

4. Experimental Design

4.1. Model Architecture. Determining the optimal size of the neural network model is complex. An overly complex network tends to memorize the data without learning to generalize to new situations: it concentrates too much on the data presented for learning and tends toward modeling the noise. The total number of ROP measurements in the target wells sums to 5500. The training subset constitutes 75% of the total data, and the testing subset includes the remaining 25%. As mentioned above, the inputs of the neural networks include 10 parameters, while the ROP is the only element in the output layer. Figure 3 shows the developed neural network architecture.

Table 3: Summary of results for the recorded parameters.

Parameters              | Ranges
Bit size/inch           | 5.813~8.5
Bit type                | 1-12
Bit wear                | 0-9
UCS/psi                 | 2412~21487
Formation drillability  | 30-75
Formation abrasiveness  | 1-8
Rotary speed/rpm        | 84~126
WOB/10^3 lbs            | 24~69
Drilling mud type       | 1~3
Mud viscosity/cp        | 1~70
Pump pressure/psi       | 1651-3765
ROP/(ft/h)              | 26.6~120.5
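The random 75%/25% subdivision described above can be sketched in numpy (the data itself is not reproduced here, only the index split):

```python
import numpy as np

# Random 75%/25% subdivision of the 5500 ROP records into training and testing
# subsets, as described in Section 4.1 (indices only; records are simulated).
n_total = 5500
rng = np.random.default_rng(7)                # seed is an arbitrary assumption
idx = rng.permutation(n_total)
n_train = int(0.75 * n_total)                 # 4125 training samples
train_idx, test_idx = idx[:n_train], idx[n_train:]
```

Splitting on a shuffled index array keeps each record in exactly one subset, which matches the paper's note that the random subdivision makes the training and validation performances slightly different from run to run.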

4.2. Model Performance Indicators. The performances of the neural network models are assessed by means of the correlation coefficient ($R^2$), mean absolute error (MAE), root mean square error (RMSE), and variance accounted for (VAF) performance indicators.

Figure 3: Schematic of the architecture of the ELM network model. The input layer has 10 nodes (bit size, bit type, UCS, formation drillability, formation abrasiveness, rotary speed, WOB, pump pressure, mud type, and mud viscosity), and ROP is the only node at the output layer.

$R^2$ represents the proportion of the overall variance explained by the model, which can be calculated as
$$R^2 = 1 - \frac{\sum_{i=1}^{n}\left(f(x_i) - y_i\right)^2}{\sum_{i=1}^{n} f(x_i)^2 - \left(\sum_{i=1}^{n} y_i\right)^2 / n}. \quad (12)$$

These performance indicators give a sense of how good the performance of a predictive model is relative to the actual values.

MSE can be described as
$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(f(x_i) - y_i\right)^2. \quad (13)$$

The RMSE is conventionally used as an error function for quality monitoring of the model; model performance increases as RMSE decreases. RMSE can be calculated by the following equation:
$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(f(x_i) - y_i\right)^2}. \quad (14)$$

VAF is often used to evaluate the correctness of a model by comparing the measured values with the estimated output of the model. VAF is computed by the following equation:
$$\mathrm{VAF} = \left(1 - \frac{\mathrm{var}\left[f(x_i) - y_i\right]}{\mathrm{var}\left[f(x_i)\right]}\right) \times 100, \quad (15)$$

where, for (12)-(15), $y_i$ is the measured data, $f(x_i)$ denotes the predicted data, $x_i, \ldots, x_n$ are the input parameters, and $n$ is the total number of data points.
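The four indicators (12)-(15) translate directly into numpy (a sketch on a tiny illustrative data pair, not the paper's data):

```python
import numpy as np

def r2(f, y):
    # Eq. (12): 1 - sum(f - y)^2 / (sum f^2 - (sum y)^2 / n)
    n = len(y)
    return 1 - np.sum((f - y) ** 2) / (np.sum(f ** 2) - np.sum(y) ** 2 / n)

def mse(f, y):
    # Eq. (13): mean squared error
    return np.mean((f - y) ** 2)

def rmse(f, y):
    # Eq. (14): root mean squared error
    return np.sqrt(mse(f, y))

def vaf(f, y):
    # Eq. (15): variance accounted for, in percent
    return (1 - np.var(f - y) / np.var(f)) * 100

f = np.array([1.0, 2.0, 4.0])   # predicted (illustrative)
y = np.array([1.0, 2.0, 3.0])   # measured (illustrative)
print(rmse(f, y))                # about 0.577
```

For a perfect prediction ($f = y$), these definitions give $R^2 = 1$, RMSE $= 0$, and VAF $= 100$, which is the behavior the text relies on when ranking the models.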

4.3. Model Parameters Selection. For the ANN, ELM, and USA models, the kernel function and the number of hidden nodes are critical parameters for neural network accuracy. The accuracy with different numbers of hidden nodes was compared in terms of the RMSE performance indicator: once the kernel function is determined, the number of hidden nodes is increased incrementally until 100 is reached. Figure 4(a) illustrates the accuracy comparison with different numbers of nodes for the ANN, ELM, and USA models, respectively. For the ELM model, the MSE between prediction results and measured values decreases as the node number of the hidden layer gradually increases, leveling off beyond roughly 65 nodes. For the USA model, the MSE between prediction results and measured values also decreases as the node number of the hidden layer gradually increases, except for individual volatile points. For the ANN model, the MSE between prediction results and measured values fluctuates as the node number of the hidden layer increases, which indicates that the choice of hidden nodes can greatly affect the final accuracy.

Four types of kernels have been tested using the training data set for the ELM model: the sigmoid function, the radial-basis function, the hardlim function, and the triangular-basis function. Figure 4(b) shows the accuracy comparison with different kernel functions for the ELM model. Among the four kernels, the sigmoid-based model reaches the threshold first, when the node number is set to 40, while an overfitting problem appears as the node number exceeds 60. The hardlim-based, triangular-based, and radial-basis based models display a similar trend; however, it seems that the node number must exceed 60 before the minimum errors approach the threshold for the hardlim-based model. For the triangular-based model, the RMSE reaches its lowest point of 3.11 when the node number is set to 85, while for the radial-basis based model the RMSE reaches its lowest point of 3.12 when the node number is set to 95. Figure 4(c) shows the accuracy comparison with different kernel functions for the USA model. Observe that the radial-basis based model reaches the threshold first, when the node number is set to 65. For the hardlim-based model, it again seems that the node number must exceed 60 before the minimum errors approach the threshold, similar to the ELM model. The triangular-based and sigmoid-based models display a similar trend: the RMSE reaches its lowest point of 3.11 when the node number is set to 80 for the sigmoid model, while the RMSE reaches its lowest point of 3.48 when the node number is set to 94 for the triangular-basis based model.

Figure 4: (a) The RMSE change with the increase of hidden nodes for the ANN, ELM, and USA models; (b) comparison of RMSE with different kernel functions for the ELM network model; (c) comparison of RMSE with different kernel functions for the USA network model.
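The four candidate kernels can be written out with their usual MATLAB-style definitions (sig, hardlim, radbas, tribas); the paper does not state the formulas, so these standard definitions are an assumption:

```python
import numpy as np

# Standard definitions of the four hidden-node activation ("kernel") functions
# compared in Figure 4 (MATLAB-style names; assumed, not given in the paper).
def sig(x):       # sigmoid
    return 1.0 / (1.0 + np.exp(-x))

def hardlim(x):   # hard-limit threshold: 1 for x >= 0, else 0
    return np.where(x >= 0, 1.0, 0.0)

def radbas(x):    # radial basis: Gaussian bump centered at 0
    return np.exp(-x ** 2)

def tribas(x):    # triangular basis: linear ramp, zero outside [-1, 1]
    return np.maximum(0.0, 1.0 - np.abs(x))
```

In an ELM, any of these can serve as the hidden-layer nonlinearity applied to the random projections; the comparison in Figure 4 is essentially over which nonlinearity reaches the RMSE threshold with the fewest hidden nodes.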

For comparison, the classical ANN model was also used in this paper, with the sigmoid kernel selected. The applied ANN architecture is a feedforward neural network, a structure in which the information propagates in one direction from input to output. A back-propagation algorithm with the Levenberg-Marquardt training function has been used for training; this algorithm can approximate any nonlinear continuous function to arbitrary accuracy. Based on the experimental tests, the ANN network model obtains its best prediction accuracy when the node number of the hidden layer is set to 36.

5. Results and Discussion

When the parameters for the neural network model are finally settled, the next step is to validate the model using a testing data set. In this section, the prediction performance of the ANN, ELM, and USA models is compared, and the training time is recorded. The model accuracy can be evaluated by comparing the calculated RMSE, VAF, MAE, and $R^2$ performance indicators for the training and testing data sets presented in Table 4. Due to the random subdivision process, the performances of the models for the training and validation data sets are slightly different.

Table 4: The calculated performance indicators for the ANN, ELM, and USA models.

Data set | Performance indicator | ANN   | ELM   | USA
Training | RMSE                  | 1.51  | 0.95  | 0.78
Training | MAE                   | 7.5   | 4.2   | 2.1
Training | VAF                   | 96.88 | 100   | 100
Training | $R^2$                 | 0.91  | 0.93  | 0.98
Testing  | RMSE                  | 3.56  | 3.11  | 2.76
Testing  | MAE                   | 12.7  | 9.7   | 7.65
Testing  | VAF                   | 91.88 | 92.82 | 94.52
Testing  | $R^2$                 | 0.90  | 0.92  | 0.94

As seen from Table 4, the prediction performance in terms of the predefined indices is good for the testing set as well as for the training set. Overall, $R^2$ values for the developed models were more than 0.95 at the training phase, while in the case of the conventional ANN at the testing phase $R^2$ was found to be 0.9. The high performance of the models for the testing set can be considered an indication of their good generalization capabilities: an $R^2$ value greater than 0.9 indicates a very satisfactory model performance, while an $R^2$ value between 0.8 and 0.9 indicates an acceptable model. Therefore, all of the developed neural networks are competent for ROP prediction. The $R^2$ of 0.99 for both the ELM and USA models in the training phase indicates higher accuracy compared to the ANN model ($R^2$ of 0.88). Moreover, the highest VAF of 94.52, the lowest RMSE of 2.76, and the lowest MAE of 7.65 belong to the USA model, indicating that the USA based model gives better prediction performance than the other models. In addition, the ELM model also produces acceptable results, with slightly smaller residual errors than the ANN model; in other words, the deviation of the penetration rate predicted by ELM from the observed values is less than that of the ANN model's predictions. Figure 5 shows the regression analysis of the predicted and measured ROP by the different models in both the training and testing phases. According to the running speed with increasing hidden node numbers in Figure 6, obvious differences are observed: the time cost for the ELM model is less than that of the other two models. To visualize the quality of the prediction, the penetration rates and residual errors predicted by the different models are compared with the measured ones for the overall data set in Figures 7 and 8, respectively. The low deviation obtained by the three models again shows that ANN, ELM, and USA are competent for ROP prediction. Thus, the ELM and USA methods can both achieve high accuracy and maintain high running speeds. This study shows that ELM technology is a promising tool for ROP prediction, and this work can be incorporated into a software system that can be used in drilling optimization guidance.

6. Conclusions

(1) This study investigates the feasibility of the ELM and its variation, the USA model, for predicting the ROP of offshore drilling operations using recorded data. The ELM and USA models can successfully predict ROP in real time, and this prediction can be used as a reference range by drilling engineers for identifying drilling problems and making decisions.

(2) Choosing the relevant parameters is typically difficult, and previous literature informed the input selection. The input rock material properties are the uniaxial compressive strengths of the different rock types and the formation drillability and abrasiveness, which can be calculated from lab cores and well logging data. Moreover, the pump pressure and differential pressure are selected as important hydraulic parameters that affect ROP, while RPM and WOB are treated as the equipment operational parameters. In addition to these parameters, drilling mud properties such as mud viscosity and bit related parameters are also involved in developing the neural network models. However, for systems with data values beyond the range of this study, or for a new bit, it is necessary to develop new neural networks. Conversely, a wider range of inputs can enable the developed neural network models to have wider applications in select cases.

(3) The results of the USA model were compared to those of the classical ANN and ELM models. Performance of the models was checked by using the $R^2$, MAE, RMSE, and VAF performance indicators. According to the performance indicators, all of the neural networks are competent for ROP prediction, but the prediction performance of the USA model was found to be better than that of the other two models with respect to accuracy, while the ELM model had the lowest running time. Compared to conventional ANN models, the result of this work shows the potential of fast and flexible neural network approaches to model the ROP with high accuracy while maintaining running speed. Therefore, drilling engineers can make better choices according to accuracy and computational demand in practical use.

Figure 5: Regression analysis of the predicted and measured ROP. (a) ANN testing ($R^2 = 0.9077$) and training ($R^2 = 0.916$); (b) ELM testing ($R^2 = 0.9233$) and training ($R^2 = 0.9319$); (c) USA testing ($R^2 = 0.9422$) and training ($R^2 = 0.9829$).

Figure 6: Time cost with the change of the number of hidden nodes for the ANN, ELM, and USA models.

Figure 7: The predicted and measured ROP along the depth with the developed models.

Figure 8: Residual errors of the ROP predicted by the developed models.

Competing Interests

The authors declare that there are no competing interests regarding the publication of this paper.

Acknowledgments

This work was financially supported by the National Key Basic Research Program of China (2015CB251206), the Fundamental Research Funds for the Central Universities (nos. 15CX05053A, 15CX08011A, and 15CX02009A), the Qingdao Science and Technology Project (no. 15-9-1-55-jch), the National Natural Science Foundation of China (no. 61305075), the Natural Science Foundation of Shandong Province (no. ZR2013FQ004), and the Specialized Research Fund for the Doctoral Program of Higher Education of China (no. 20130133120014).

References

[1] V. P. Perrin, M. G. Wilmot, and W. L. Alexander, "Drilling index: a new approach to bit performance evaluation," in Proceedings of the SPE/IADC Drilling Conference, Paper SPE 37595, pp. 199-205, Amsterdam, The Netherlands, March 1997.

[2] M. Bataee and S. Mohseni, "Application of artificial intelligent systems in ROP optimization: a case study in Shadegan oil field," in Proceedings of the SPE Middle East Unconventional Gas Conference and Exhibition (UGAS '11), pp. 13-22, February 2011.

[3] M. H. Bahari, A. Bahari, F. N. Moharrami, and M. B. N. Sistani, "Determining Bourgoyne and Young model coefficients using genetic algorithm to predict drilling rate," Journal of Applied Sciences, vol. 8, no. 17, pp. 3050-3054, 2008.

[4] D. F. Howarth, W. R. Adamson, and J. R. Berndt, "Correlation of model tunnel boring and drilling machine performances with rock properties," International Journal of Rock Mechanics and Mining Sciences, vol. 23, no. 2, pp. 171-175, 1986.

[5] S. Kahraman, "Correlation of TBM and drilling machine performances with rock brittleness," Engineering Geology, vol. 65, no. 4, pp. 269-283, 2002.

[6] S. Akin and C. Karpuz, "Estimating drilling parameters for diamond bit drilling operations using artificial neural networks," International Journal of Geomechanics, vol. 8, no. 1, pp. 68-73, 2008.

[7] V. Uboldi, L. Civolani, and F. Zausa, "Rock strength measurements on cuttings as input data for optimizing drill bit selection," in Proceedings of the SPE Annual Technical Conference and Exhibition, Paper SPE 56441, pp. 35-43, Houston, Tex, USA, October 1999.

[8] M. J. Fear, "How to improve rate of penetration in field operations," in Proceedings of the SPE/IADC Drilling Conference, Paper SPE 55050, New Orleans, La, USA, 1998.

[9] R. Rommetveit, K. S. Bjørkevoll, G. W. Halsey et al., "Drilltronics: an integrated system for real-time optimization of the drilling process," in Proceedings of the IADC/SPE Drilling Conference, vol. 87124, pp. 275-282, Dallas, Tex, USA, March 2004.

[10] A. H. Paiaman, M. K. Ghassem Al-Askari, B. Salmani, B. D. Al-Anazi, and M. Masihi, "Effect of drilling fluid properties on rate of penetration," NAFTA, vol. 60, no. 3, pp. 129-134, 2009.

[11] A. Bahari and A. B. Seyed, "Trust region approach to find constants of Bourgoyne and Young penetration rate model in Khangiran Iranian gas field," in Proceedings of the SPE Latin American and Caribbean Petroleum Engineering Conference, Paper SPE 107520, Buenos Aires, Argentina, 2007.

[12] H. I. Bilgesu, L. T. Tetrick, U. Altmis, Sh. Mohaghegh, and S. Ameri, "A new approach for the prediction of penetration (ROP) values," in Proceedings of the SPE Eastern Regional Meeting, Paper SPE 39231, Lexington, Kentucky, 1997.

[13] R. Jahanbakhshi, R. Keshavarzi, and A. Jafarnezhad, "Real-time prediction of rate of penetration during drilling operation in oil and gas wells," in Proceedings of the 46th US Rock Mechanics/Geomechanics Symposium, pp. 2390-2396, Chicago, Ill, USA, June 2012.

[14] S. Yilmaz, C. Demircioglu, and S. Akin, "Application of artificial neural networks to optimum bit selection," Computers and Geosciences, vol. 28, no. 2, pp. 261-269, 2002.

[15] H. Rahimzadeh, M. Mostofi, A. Hashemi, and K. Salahshoor, "Comparison of the penetration rate models using field data for one of the gas fields in Persian Gulf area," in Proceedings of the International Oil and Gas Conference and Exhibition, Paper SPE 131253, Beijing, China, 2010.

[16] G.-B. Huang, D. H. Wang, and Y. Lan, "Extreme learning machines: a survey," International Journal of Machine Learning and Cybernetics, vol. 2, no. 2, pp. 107-122, 2011.

[17] G.-B. Huang, H. Zhou, X. Ding, and R. Zhang, "Extreme learning machine for regression and multiclass classification," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 2, pp. 513-529, 2012.

[18] A. Mozaffari and N. L. Azad, "Optimally pruned extreme learning machine with ensemble of regularization techniques and negative correlation penalty applied to automotive engine cold-start hydrocarbon emission identification," Neurocomputing, vol. 131, pp. 143-156, 2014.

[19] R. Moreno, F. Corona, A. Lendasse, M. Grana, and L. S. Galvao, "Extreme learning machines for soybean classification in remote sensing hyperspectral images," Neurocomputing, vol. 128, pp. 207-216, 2014.

[20] D. Yu and L. Deng, "Efficient and effective algorithms for training single-hidden-layer neural networks," Pattern Recognition Letters, vol. 33, no. 5, pp. 554-558, 2012.

[21] C. Gokceoglu and K. Zorlu, "A fuzzy model to predict the uniaxial compressive strength and the modulus of elasticity of a problematic rock," Engineering Applications of Artificial Intelligence, vol. 17, no. 1, pp. 61-72, 2004.

[22] J. Zhang, J. Li, Y. Hu, and J. Zhou, "The identification method of igneous rock lithology based on data mining technology," Applied Mechanics and Materials, vol. 466-467, pp. 65-69, 2012.

[23] J. Cao, J. Yang, Y. Wang, D. Wang, and Y. Shi, "Extreme learning machine for reservoir parameter estimation in heterogeneous sandstone reservoir," Mathematical Problems in Engineering, vol. 2015, Article ID 287816, 10 pages, 2015.

[24] E. Khamehchi, I. R. Kivi, and M. Akbari, "A novel approach to sand production prediction using artificial intelligence," Journal of Petroleum Science and Engineering, vol. 123, pp. 147-154, 2014.

[25] K. Amar and A. Ibrahim, "Rate of penetration prediction and optimization using advances in artificial neural networks: a comparative study," in Proceedings of the 4th International Joint Conference on Computational Intelligence (IJCCI '12), pp. 647-652, Barcelona, Spain, October 2012.

[26] D. Moran, H. Ibrahim, A. Purwanto, and J. Osmond, "Sophisticated ROP prediction technology based on neural network delivers accurate results," in Proceedings of the IADC/SPE Asia Pacific Drilling Technology Conference and Exhibition, SPE 132010, pp. 100-108, Ho Chi Minh City, Vietnam, November 2010.

[27] M. Monazami, A. Hashemi, and M. Shahbazian, "Drilling rate of penetration prediction using artificial neural network: a case study of one of Iranian southern oil fields," Journal of Oil and Gas Business, no. 6, pp. 21-31, 2012.

[28] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: theory and applications," Neurocomputing, vol. 70, no. 1-3, pp. 489-501, 2006.

[29] X. Liu, Y. Chang, H. Lu et al., "Optimizing fracture stimulation in low-permeability oil reservoirs in the Ordos Basin," in Proceedings of the SPE Asia Pacific Oil and Gas Conference and Exhibition, Paper SPE 158562, Perth, Australia, 2012.


6 Mathematical Problems in Engineering

uses the closed-form solution (8) to calculate U after updating W.

The algorithms proposed in this paper achieve significantly better classification accuracy than ELM when the same number of hidden units is used. To achieve the same classification accuracy, the algorithm requires only 1/16 of the model size, and therefore less test time is needed.
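The closed-form output-layer solution is what makes ELM-style training fast: only the output weights are solved, by least squares, while the hidden layer stays random. As a rough illustration (not the authors' implementation; the function names, weight ranges, and toy data below are invented), a basic ELM regressor in NumPy:

```python
import numpy as np

def elm_train(X, y, n_hidden=40, seed=0):
    """Basic ELM: fix a random hidden layer (W, b), solve output weights U in closed form."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-4.0, 4.0, size=(X.shape[1], n_hidden))  # random input weights (toy scale)
    b = rng.uniform(-4.0, 4.0, size=n_hidden)                # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))                   # sigmoid hidden-layer outputs
    U = np.linalg.pinv(H) @ y                                # least-squares output weights
    return W, b, U

def elm_predict(X, W, b, U):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ U

# toy regression problem standing in for the drilling data
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(200, 1))
y = np.sin(2.0 * np.pi * X[:, 0]) + 0.05 * rng.standard_normal(200)
W, b, U = elm_train(X, y, n_hidden=40)
train_rmse = float(np.sqrt(np.mean((elm_predict(X, W, b, U) - y) ** 2)))
```

Because no gradient iterations are needed for the hidden layer, training reduces to a single pseudoinverse, which is the source of the speed advantage discussed throughout the paper.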

3. Input Data Selection

The four target research wells were drilled in the Bohai Bay basin, which is the largest offshore oil and gas production base in China. The oilfield has experienced many production stages since 1963, and currently many projects are carried out by China National Offshore Oil Corporation (CNOOC) and ConocoPhillips China (COPC). To effectively increase production and decrease drilling cost, drilling speed control is necessary.

Due to drilling requirements and the similarity of wells located close to one another, collecting past data and utilizing the data in a useful manner have an important impact on drilling cost reduction. This means that optimum parameters are always in effect. As a matter of fact, there are various factors that can affect ROP. Previous studies show that the ROP is largely dependent on some critical parameters, which can be classified into three types: rig/bit related parameters, mud related parameters, and formation parameters. The rig/bit and mud related parameters can be manipulated, but the formation parameters have to be handled differently. Table 2 gives a brief classification of some important drilling parameters that affect ROP. To predict and optimize the ROP, both controllable and uncontrollable parameters were collected from daily drilling reports, lab investigations, well logs, and geological information.

Because it is impossible to treat all of the relevant drilling parameters in ROP prediction, it is necessary to choose essential data based on literature surveys and statistical analysis. It was concluded that the rock properties, the drilling machine parameters, and the mud properties relate to the ROP. In fact, a total of 18 drilling parameters were gathered in this study: depth, torque, rotary speed (RPM), weight on bit, flow rate, active PVT, pump pressure, hook load, differential pressure, mud weight, mud viscosity, formation drillability, formation abrasiveness, UCS, porosity, bit size, and bit type.

(1) The bit type/size can affect ROP in a given formation. Roller cone bits generally have three cones that rotate around their axes, with teeth either milled out of the matrix or inserted. The teeth combine crushing and shearing to fracture the rock. Drag bits usually consist of a fixed cutter mechanism that can be cutting blades, diamond stones, or cutters. Today, the polycrystalline diamond compact (PDC) bit is the most commonly used drag bit, with PDC, diamond-impregnated, or diamond hybrid cutters mounted on the bit blades/body. Drag bits disintegrate rock primarily through shearing. The differences in the design of roller cone and drag bits and in their rock disintegration methods require different ROP models. Thus the effects of the bit type are included in this model.

(2) Operational parameters such as pump pressure and differential pressure are the most important operational hydraulic parameters affecting ROP. Pump pressure is sometimes increased to achieve a higher penetration rate for a certain depth of advance while keeping rotation constant. In addition, the mechanical factors of weight on bit and rotary speed have linear relationships with ROP. An increase in weight on bit pushes the bit teeth or cutters further down into the formation and disintegrates more rock, resulting in faster ROP. Bit rotation can be increased to obtain a higher penetration rate while keeping the bit load constant.

(3) Formation mechanical characteristics are uncontrollable variables for controlling ROP values. In general, the decrease in penetration rate is greatly attributed to the increase of depth, because with increasing depth the rock strength increases and the porosity decreases. Moreover, the larger indirect drilling resistance that results from larger UCS and hardness will also limit the depth of cut for a given set of bit design and applied operating parameters. There are two main reasons for selecting rock strength as the representative of rock properties. One is that the rock strength is calculated as a function of depth, density, and porosity; another is that this parameter can be computed from logging data, or from triaxial compressive experiments and scratch tests in a lab if core samples are available.

(4) The aim of drilling mud is to remove the loose rock chips away from the bit face while cooling the bit. Accordingly, mud weight, mud rheology, and solid content are confirmed as important factors in ROP by some studies. ROP basically decreases with increasing mud viscosity, solid content, and mud weight.

In addition to the above parameters, some comprehensive parameters such as formation drillability and abrasiveness also need to be considered, because these parameters can provide important sources of information about ROP prediction that cannot be described by conventional drilling data. More importantly, the model inputs should be easily acquired in real time, recorded from the respective measurement gauges on the rig. Using additional parameters as inputs can result in a large network size and consequently slow down running speed and efficiency. Thus bit size, bit type, bit wear, UCS, formation drillability, formation abrasiveness, rotary speed, WOB, pump pressure, mud viscosity, and mud density were selected as the 11 inputs for neural network simulation. The data set used in this paper consists of more than 5000 measurements taken from this area at different wells and with different drilling rigs. Figure 2 illustrates three types of PDC bits used in the drilling operations in this area.

It should be noted that data can very often be nonnumeric. Because neural networks cannot be trained with nonnumeric data, it is necessary to translate nonnumeric


Table 2: ROP-related drilling parameters classification.

Rig and bit related parameters | Formation parameters                  | Drilling fluid properties
Weight on bit (WOB)            | Local stresses                        | Mud weight
Torque                         | Hardness                              | Viscosity
Rotary speed (RPM)             | Mineralogy                            | Filtrate loss
Flow rates                     | Porosity and permeability             | Solid content
Pump stroke speed (SPM)        | Formation abrasiveness                | Gel strength
Pump pressure                  | Drillability                          | Mud pH
Hook load                      | Depth                                 | Yield point
Bit wear                       | Temperature                           |
Type of the bit                | Unconfined compressive strength (UCS) |

Figure 2: Three types of PDC drilling bits.

data into numeric data. In this study, formation drillability, formation abrasiveness, bit size, and bit type are nonnumeric. Numbering classes are used in this study to perform the nonnumeric translation. The formation drillability ranged between 30 and 75; the highest number represents the highest drillability, while lower drillability values correspond to hard formations such as shales. The formation abrasiveness ranged between 1 and 8, where 8 signifies the most abrasive formation. In addition, the bit type is assigned a number between 1 and 12, where 12 is related to the bit type that has the highest average ROP value. The bit wear values are set between 0 and 9, where 9 indicates that a tooth is completely worn out. Table 3 shows the recorded parameters and their ranges.
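As an illustration of this numbering-class translation (the class names and the particular code assignments below are invented for the example; only the code ranges match the paper):

```python
# ordered integer codes for the nonnumeric inputs (hypothetical class names/codes)
bit_type_code = {"PDC-A": 12, "PDC-B": 9, "roller-cone": 3}    # 1-12; 12 = highest average ROP
abrasiveness_code = {"soft": 1, "medium": 4, "abrasive": 8}    # 1-8; 8 = most abrasive

def encode_record(record):
    """Translate one raw drilling record into a purely numeric feature row."""
    return [
        bit_type_code[record["bit_type"]],
        abrasiveness_code[record["formation"]],
        record["wob"],   # already numeric (10^3 lbs)
        record["rpm"],   # already numeric
    ]

row = encode_record({"bit_type": "PDC-A", "formation": "abrasive", "wob": 45, "rpm": 110})
# row == [12, 8, 45, 110]
```

Because the codes are ordered by physical meaning (higher code = higher average ROP, or more abrasive), the network can exploit the ordering rather than treating classes as arbitrary labels.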

4. Experimental Design

4.1. Model Architecture. Determining the optimal size of the neural network model is complex. An overly complex network tends to memorize the data without learning to generalize to new situations: it concentrates too much on the data presented for learning and tends toward modeling the noise. The total number of ROP records in the target wells sums up to 5500. The training subset constitutes 75% of the total data, and the testing subset includes the remaining 25%. As mentioned above, the inputs of the neural networks include 10

Table 3: Summary of results for the recorded parameters.

Parameters             | Ranges
Bit size/inch          | 5.813~8.5
Bit type               | 1~12
Bit wear               | 0~9
UCS/psi                | 2412~21487
Formation drillability | 30~75
Formation abrasiveness | 1~8
Rotary speed/rpm       | 84~126
WOB/10^3 lbs           | 24~69
Drilling mud type      | 1~3
Mud viscosity/cp       | 1~70
Pump pressure/psi      | 1651~3765
ROP/(ft/h)             | 2.66~120.5

parameters, while the ROP is the only element in the output layer. Figure 3 shows the developed neural network architecture.
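The 75%/25% subdivision described above can be sketched as follows (an illustrative sketch only: `split_75_25` and the dummy arrays are invented names, and the paper does not specify how its random subdivision was seeded):

```python
import numpy as np

def split_75_25(X, y, seed=0):
    """Shuffle indices, then cut the data into 75% training and 25% testing subsets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    cut = int(0.75 * len(X))
    return X[idx[:cut]], y[idx[:cut]], X[idx[cut:]], y[idx[cut:]]

X = np.zeros((5500, 10))   # 5500 records with 10 inputs, matching the counts in the text
y = np.zeros(5500)
X_train, y_train, X_test, y_test = split_75_25(X, y)
# 4125 training records and 1375 testing records
```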

4.2. Model Performance Indicators. The performances of the neural network models are assessed by means of the correlation


Figure 3: Schematic of the architecture of the ELM network model. The input layer has 10 nodes (bit size, bit type, UCS, formation drillability, formation abrasiveness, rotary speed, WOB, pump pressure, mud type, and mud viscosity), and ROP is the only node at the output layer.

coefficient (R^2), mean absolute error (MAE), root mean square error (RMSE), and variance accounted for (VAF) performance indicators. R^2 represents the proportion of the overall variance explained by the model, which can be calculated as

\[
R^2 = 1 - \frac{\sum_{i=1}^{n}\left(f(x_i) - y_i\right)^2}{\sum_{i=1}^{n} f(x_i)^2 - \sum_{i=1}^{n} (y_i)^2 / n} \tag{12}
\]

These performance indicators give a sense of how good the performance of a predictive model is relative to the actual values.

MSE can be described as

\[
\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(f(x_i) - y_i\right)^2 \tag{13}
\]

The RMSE is conventionally used as an error function for quality monitoring of the model; model performance increases as RMSE decreases. RMSE can be calculated by the following equation:

\[
\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(f(x_i) - y_i\right)^2} \tag{14}
\]

VAF is often used to evaluate the correctness of a model by comparing the measured values with the estimated output of the model. VAF is computed by the following equation:

\[
\mathrm{VAF} = \left(1 - \frac{\operatorname{var}\left[f(x_i) - y_i\right]}{\operatorname{var}\left[f(x_i)\right]}\right) \times 100 \tag{15}
\]

where, for (12)-(15), y_i is the measured data, f(x_i) denotes the predicted data, x_1, ..., x_n are the input parameters, and n is the total number of data points.
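For concreteness, the four indicators can be implemented as below (a sketch with invented function names and toy data; note that `r2` follows the form of (12) as printed in the paper, with the sum-of-squares term divided by n, rather than the textbook coefficient of determination):

```python
import numpy as np

def r2(pred, y):
    """R^2 in the form of equation (12) as printed in the paper."""
    n = len(y)
    return float(1.0 - np.sum((pred - y) ** 2) / (np.sum(pred ** 2) - np.sum(y ** 2) / n))

def mse(pred, y):
    return float(np.mean((pred - y) ** 2))                         # equation (13)

def rmse(pred, y):
    return float(np.sqrt(np.mean((pred - y) ** 2)))                # equation (14)

def vaf(pred, y):
    return float((1.0 - np.var(pred - y) / np.var(pred)) * 100.0)  # equation (15)

# toy check values (not drilling data)
y_meas = np.array([10.0, 20.0, 30.0, 40.0])
y_pred = np.array([11.0, 19.0, 29.0, 42.0])
```

A perfect prediction gives RMSE = 0 and VAF = 100, which is the sanity check usually applied before comparing models.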

4.3. Model Parameters Selection. For the ANN, ELM, and USA models, the kernel function and the number of hidden nodes are critical parameters for neural network accuracy.

The accuracy with different hidden node numbers was compared in terms of the RMSE performance indicator. Once the kernel function is determined, the hidden node number is increased incrementally until 100 is reached. Figure 4(a) illustrates the accuracy comparison with different node numbers for the ANN, ELM, and USA models. For the ELM model, the MSE between the prediction results and the measured values decreases as the node number of the hidden layer gradually increases beyond 65 nodes. For the USA model, the MSE between the prediction results and the measured values also decreases as the node number of the hidden layer gradually increases, except for individual volatile points. For the ANN model, the MSE between the prediction results and the measured values fluctuates as the node number of the hidden layer gradually increases, which indicates that the choice of hidden nodes can greatly affect the final accuracy.

Four types of kernels have been tested using the training data set for the ELM model: the sigmoid function, the radial-basis function, the hardlim function, and the triangular-basis function. Figure 4(b) shows the accuracy comparison with different kernel functions for the ELM model. Among the four kernels, the sigmoid-based model reaches the threshold first, when the node number is set to 40, while an overfitting problem appears as the node number exceeds 60. The hardlim-based, triangular-based, and radial-basis based models display a similar trend; however, it seems that the node number must exceed 60 before the minimum errors approach the threshold for the hardlim-based model. For the triangular-based model, the RMSE reaches its lowest point of 3.11 when the node number is set to 85, while for the radial-basis based model the RMSE reaches its lowest point of 3.12 when the node number is set to 95. Figure 4(c) shows the accuracy comparison with different kernel functions for the USA model. Observe that the radial-basis based model reaches the threshold first, when the node number is set to 65. As for the hardlim-based model, it seems that the node number must exceed 60 before the minimum


Figure 4: (a) The RMSE change with the increase of hidden nodes for the ANN, ELM, and USA models; (b) comparison of RMSE with different kernel functions (sigmoid, hardlim, radial basis, and triangular basis) for the ELM network model; (c) comparison of RMSE with different kernel functions for the USA network model.

errors are close to the threshold, which is similar to the ELM model. The triangular-based and sigmoid-based models display a similar trend: the RMSE reaches its lowest point of 3.11 when the node number is set to 80 for the sigmoid model, while the RMSE reaches its lowest point of 3.48 when the node number is set to 94 for the triangular-basis based model.
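The selection procedure described above, sweeping the hidden-node count for each candidate kernel and keeping the configuration with the lowest RMSE, can be sketched as follows (toy data and all names are invented; the paper's actual sweep used the drilling data set and four kernels):

```python
import numpy as np

activations = {
    "sig": lambda z: 1.0 / (1.0 + np.exp(-z)),
    "tribas": lambda z: np.maximum(0.0, 1.0 - np.abs(z)),  # triangular basis
}

def train_rmse(X, y, act, n_hidden, seed=0):
    """ELM training error for one (kernel, hidden-node-count) configuration."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-3.0, 3.0, size=(X.shape[1], n_hidden))
    b = rng.uniform(-3.0, 3.0, size=n_hidden)
    H = act(X @ W + b)
    U = np.linalg.pinv(H) @ y
    return float(np.sqrt(np.mean((H @ U - y) ** 2)))

# toy stand-in for the drilling data set
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(300, 2))
y = np.sin(3.0 * X[:, 0]) * X[:, 1]

# sweep node counts for each kernel; keep the configuration with the lowest RMSE
best = min(
    ((name, n, train_rmse(X, y, act, n)) for name, act in activations.items()
     for n in range(20, 101, 20)),
    key=lambda t: t[2],
)
# best = (kernel_name, node_count, rmse)
```

In practice the sweep should score a held-out validation set rather than the training set, since (as the text notes for the sigmoid kernel) training error alone can hide overfitting at large node counts.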

For comparison, the classical ANN model was also used in this paper, with the sigmoid kernel selected for the ANN model. The applied ANN architecture is a feedforward neural network, a network structure in which the information propagates in one direction from input to output. A back-propagation algorithm with the Levenberg-Marquardt training function has been used for training; this algorithm can approximate any nonlinear continuous function to an arbitrary accuracy. Based on the experimental tests, the ANN network model obtains its best prediction accuracy when the node number of the hidden layer is set to 36.

5. Results and Discussion

When the parameters for the neural network model are finally settled, the next step is to validate the model using a testing


Table 4: The calculated performance indicators for the ANN, ELM, and USA models.

Data set | Indicator | ANN   | ELM   | USA
Training | RMSE      | 1.51  | 0.95  | 0.78
Training | MAE       | 7.5   | 4.2   | 2.1
Training | VAF       | 96.88 | 100   | 100
Training | R^2       | 0.91  | 0.93  | 0.98
Testing  | RMSE      | 3.56  | 3.11  | 2.76
Testing  | MAE       | 12.7  | 9.7   | 7.65
Testing  | VAF       | 91.88 | 92.82 | 94.52
Testing  | R^2       | 0.90  | 0.92  | 0.94

data set. In this section, prediction comparisons are made between the ANN, ELM, and USA models, while the training time is recorded. The model accuracy can be evaluated by comparing the calculated RMSE, VAF, MAE, and R^2 performance indicators for the training and testing data sets presented in Table 4. Due to the random subdivision process, the performances of the models for the training and validation data sets are slightly different. As seen from Table 4, the prediction performance in terms of the predefined indices is better for the testing set than for the training set. Overall, R^2 values for the developed models were more than 0.95 at the training phase, while in the case of the conventional ANN at the testing phase R^2 was found to be 0.9. The high performance of the models for the testing set can be considered an indication of the good generalization capabilities of the models. An R^2 value greater than 0.9 indicates a very satisfactory model performance, while an R^2 value between 0.8 and 0.9 indicates an acceptable model. Therefore, all of the developed neural networks are competent for ROP prediction. R^2 of 0.99 for both the ELM and USA models in the training phase indicates higher accuracy compared to the ANN model (R^2 of 0.88). Moreover, it can also be observed that the highest VAF of 94.52, the lowest RMSE of 2.76, and the lowest MAE of 7.65 belong to the USA model, indicating that the USA based model gives better prediction performance than the other models. In addition, the ELM model can also produce acceptable results, which yielded slightly smaller residual errors than the ANN model. In other words, the deviation from the observed values of penetration rate predicted by ELM is less than the deviation of the ANN model's prediction. Figure 5 shows the regression analysis of the predicted and measured ROP by the different models in both training and testing phases. According to the running speed with the increase of hidden node numbers in Figure 6, obvious differences are observed: the cost time for the ELM model is less than that of the other two models. To visualize the quality of the prediction, the predicted penetration rates and residual errors of the different models are compared with the measured ones for the overall data set in Figures 7 and 8, respectively. The low deviation obtained by the three models also proves that ANN, ELM, and USA are competent for ROP prediction. Thus, the ELM and USA methods can both achieve high accuracy and maintain high running speeds. This study shows that ELM technology is a promising tool for ROP prediction, and this work can be incorporated into a software system that can be used in drilling optimization guidance.

6. Conclusions

(1) This study investigates the feasibility of the ELM and its variation, the USA model, for predicting the ROP of offshore drilling operations using recorded data. The ELM and USA models can successfully predict ROP in real time, and this prediction can be used as a reference range for drilling engineers when identifying drilling problems and making decisions.

(2) Choosing the relevant parameters is typically difficult, and previous literature informed the input selection. The input rock material properties are the uniaxial compressive strengths of the different rock types and the formation drillability and abrasiveness, which can be calculated from lab cores and well logging data. Moreover, the pump pressure and differential pressure are selected as important hydraulic parameters that affect ROP, while RPM and WOB are treated as the equipment operational parameters. In addition to these parameters, drilling mud properties such as mud viscosity and bit related parameters are also involved in developing the neural network models. However, for systems with data values beyond the range of this study, or for a new bit, it is necessary to develop new neural networks. Conversely, a wider range of inputs can enable the developed neural network models to have wider applications in select cases.

(3) The results of the USA model were compared to those of the classical ANN and ELM models. Performance of the models was checked by using the R^2, MAE, RMSE, and VAF performance indicators. According to the performance indicators, all of the neural networks are competent for ROP prediction, but the prediction performance of the USA model was found to be better than that of the other two models with respect to accuracy, while the ELM model had the lowest running time. Compared to conventional ANN models, the result of this work shows the potential of fast and flexible neural network approaches to model the ROP with high accuracy while maintaining running speed. Therefore, drilling engineers can make better


Figure 5: Regression analysis of the predicted and measured ROP (ft/h) against the 1:1 line. (a) ANN, testing and training: R^2 = 0.9077 and R^2 = 0.916; (b) ELM, testing and training: R^2 = 0.9233 and R^2 = 0.9319; (c) USA, testing and training: R^2 = 0.9422 and R^2 = 0.9829.


Figure 6: Time cost (s) with the change of the number of hidden nodes for the ANN, ELM, and USA models.

Figure 7: The predicted and measured ROP (ft/h) along the depth (8600-8800 ft) with the developed models.

Figure 8: Residual errors of the predicted ROP by the developed models.

choices according to accuracy and computational demand in practical use.

Competing Interests

The authors declare that there are no competing interests regarding the publication of this paper.

Acknowledgments

This work was financially supported by the National Key Basic Research Program of China (2015CB251206), the Fundamental Research Funds for the Central Universities (nos. 15CX05053A, 15CX08011A, and 15CX02009A), the Qingdao Science and Technology Project (no. 15-9-1-55-jch), the National Natural Science Foundation of China (no. 61305075), the Natural Science Foundation of Shandong Province (no. ZR2013FQ004), and the Specialized Research Fund for the Doctoral Program of Higher Education of China (no. 20130133120014).

References

[1] V. P. Perrin, M. G. Wilmot, and W. L. Alexander, "Drilling index - a new approach to bit performance evaluation," in Proceedings of the SPE/IADC Drilling Conference, Paper SPE 37595, pp. 199-205, Amsterdam, The Netherlands, March 1997.

[2] M. Bataee and S. Mohseni, "Application of artificial intelligent systems in ROP optimization: a case study in Shadegan oil field," in Proceedings of the SPE Middle East Unconventional Gas Conference and Exhibition (UGAS '11), pp. 13-22, February 2011.

[3] M. H. Bahari, A. Bahari, F. N. Moharrami, and M. B. N. Sistani, "Determining Bourgoyne and Young model coefficients using genetic algorithm to predict drilling rate," Journal of Applied Sciences, vol. 8, no. 17, pp. 3050-3054, 2008.

[4] D. F. Howarth, W. R. Adamson, and J. R. Berndt, "Correlation of model tunnel boring and drilling machine performances with rock properties," International Journal of Rock Mechanics and Mining Sciences, vol. 23, no. 2, pp. 171-175, 1986.

[5] S. Kahraman, "Correlation of TBM and drilling machine performances with rock brittleness," Engineering Geology, vol. 65, no. 4, pp. 269-283, 2002.

[6] S. Akin and C. Karpuz, "Estimating drilling parameters for diamond bit drilling operations using artificial neural networks," International Journal of Geomechanics, vol. 8, no. 1, pp. 68-73, 2008.

[7] V. Uboldi, L. Civolani, and F. Zausa, "Rock strength measurements on cuttings as input data for optimizing drill bit selection," in Proceedings of the SPE Annual Technical Conference and Exhibition, Paper SPE 56441, pp. 35-43, Houston, Tex, USA, October 1999.

[8] M. J. Fear, "How to improve rate of penetration in field operations," in Proceedings of the SPE/IADC Drilling Conference, Paper SPE 55050, New Orleans, La, USA, 1998.

[9] R. Rommetveit, K. S. Bjørkevoll, G. W. Halsey et al., "Drilltronics: an integrated system for real-time optimization of the drilling process," in Proceedings of the IADC/SPE Drilling Conference, vol. 87124, pp. 275-282, SPE, Dallas, Tex, USA, March 2004.

[10] A. H. Paiaman, M. K. Ghassem Al-Askari, B. Salmani, B. D. Al-Anazi, and M. Masihi, "Effect of drilling fluid properties on rate of penetration," NAFTA, vol. 60, no. 3, pp. 129-134, 2009.

[11] A. Bahari and A. B. Seyed, "Trust region approach to find constants of Bourgoyne and Young penetration rate model in Khangiran Iranian gas field," in Proceedings of the SPE Latin American and Caribbean Petroleum Engineering Conference, Paper SPE 107520, Buenos Aires, Argentina, 2007.

[12] H. I. Bilgesu, L. T. Tetrick, U. Altmis, Sh. Mohaghegh, and S. Ameri, "A new approach for the prediction of penetration (ROP) values," in Proceedings of the SPE Eastern Regional Meeting, Paper SPE 39231, Lexington, Kentucky, 1997.

[13] R. Jahanbakhshi, R. Keshavarzi, and A. Jafarnezhad, "Real-time prediction of rate of penetration during drilling operation in oil and gas wells," in Proceedings of the 46th US Rock Mechanics/Geomechanics Symposium, pp. 2390-2396, Chicago, Ill, USA, June 2012.

[14] S. Yilmaz, C. Demircioglu, and S. Akin, "Application of artificial neural networks to optimum bit selection," Computers and Geosciences, vol. 28, no. 2, pp. 261-269, 2002.

[15] H. Rahimzadeh, M. Mostofi, A. Hashemi, and K. Salahshoor, "Comparison of the penetration rate models using field data for one of the gas fields in Persian Gulf area," in Proceedings of the International Oil and Gas Conference and Exhibition, Paper SPE 131253, Beijing, China, 2010.

[16] G.-B. Huang, D. H. Wang, and Y. Lan, "Extreme learning machines: a survey," International Journal of Machine Learning and Cybernetics, vol. 2, no. 2, pp. 107-122, 2011.

[17] G.-B. Huang, H. Zhou, X. Ding, and R. Zhang, "Extreme learning machine for regression and multiclass classification," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 2, pp. 513-529, 2012.

[18] A. Mozaffari and N. L. Azad, "Optimally pruned extreme learning machine with ensemble of regularization techniques and negative correlation penalty applied to automotive engine cold-start hydrocarbon emission identification," Neurocomputing, vol. 131, pp. 143-156, 2014.

[19] R. Moreno, F. Corona, A. Lendasse, M. Graña, and L. S. Galvão, "Extreme learning machines for soybean classification in remote sensing hyperspectral images," Neurocomputing, vol. 128, pp. 207-216, 2014.

[20] D. Yu and D. Li, "Efficient and effective algorithms for training single-hidden-layer neural networks," Pattern Recognition Letters, vol. 33, no. 5, pp. 554-558, 2012.

[21] C. Gokceoglu and K. Zorlu, "A fuzzy model to predict the uniaxial compressive strength and the modulus of elasticity of a problematic rock," Engineering Applications of Artificial Intelligence, vol. 17, no. 1, pp. 61-72, 2004.

[22] J. Zhang, J. Li, Y. Hu, and J. Zhou, "The identification method of igneous rock lithology based on data mining technology," Applied Mechanics and Materials, vol. 466-467, pp. 65-69, 2012.

[23] J. Cao, J. Yang, Y. Wang, D. Wang, and Y. Shi, "Extreme learning machine for reservoir parameter estimation in heterogeneous sandstone reservoir," Mathematical Problems in Engineering, vol. 2015, Article ID 287816, 10 pages, 2015.

[24] E. Khamehchi, I. R. Kivi, and M. Akbari, "A novel approach to sand production prediction using artificial intelligence," Journal of Petroleum Science and Engineering, vol. 123, pp. 147-154, 2014.

[25] K. Amar and A. Ibrahim, "Rate of penetration prediction and optimization using advances in artificial neural networks: a comparative study," in Proceedings of the 4th International Joint Conference on Computational Intelligence (IJCCI '12), pp. 647-652, Barcelona, Spain, October 2012.

[26] D. Moran, H. Ibrahim, A. Purwanto, and J. Osmond, "Sophisticated ROP prediction technology based on neural network delivers accurate results," in Proceedings of the IADC/SPE Asia Pacific Drilling Technology Conference and Exhibition, SPE 132010, pp. 100-108, Ho Chi Minh City, Vietnam, November 2010.

[27] M. Monazami, A. Hashemi, and M. Shahbazian, "Drilling rate of penetration prediction using artificial neural network: a case study of one of Iranian southern oil fields," Journal of Oil and Gas Business, no. 6, pp. 21-31, 2012.

[28] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: theory and applications," Neurocomputing, vol. 70, no. 1-3, pp. 489-501, 2006.

[29] X. Liu, Y. Chang, H. Lu et al., "Optimizing fracture stimulation in low-permeability oil reservoirs in the Ordos Basin," in Proceedings of the SPE Asia Pacific Oil and Gas Conference and Exhibition, Paper SPE 158562, Perth, Australia, 2012.


Mathematical Problems in Engineering 7

Table 2 ROP related drilling parameters classification

Rig and bit related parameters Formation parameters Drilling fluid propertiesWeight on bit (WOB) Local stresses Mud weightTorque Hardness ViscosityRotary speed (RPM) Mineralogy Filtrate lossFlow rates Porosity and permeability Solid contentPump stroke speed (SPM) Formation abrasiveness Gel strengthPump pressure Drillability Mud pHHook load Depth Yield pointBit wear TemperatureType of the bit Unconfined compressive strength (UCS)

Figure 2: Three types of PDC drilling bits.

data into numeric data. In this study, formation drillability, formation abrasiveness, bit size, and bit type are nonnumeric. Numbering classes are used to perform the nonnumeric translation. The formation drillability ranges between 30 and 75; the highest number represents the highest drillability, while the lowest values correspond to hard formations such as shales. The formation abrasiveness ranges between 1 and 8, where 8 signifies the most abrasive formation. In addition, bit type is assigned a value between 1 and 12, where 12 corresponds to the bit type with the highest average ROP. The bit wear values range between 0 and 9, where 9 indicates that a tooth is completely worn out. Table 3 shows the recorded parameters and their ranges.
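A numbering-class translation like the one described can be sketched as follows. This is a minimal illustration; the bit-type names and their class assignments are hypothetical, since the paper only reports the numeric ranges (bit type 1–12, bit wear 0–9, formation abrasiveness 1–8):

```python
# Sketch of translating nonnumeric drilling parameters into numbering classes.
# The bit-type names and their class values are illustrative assumptions.

BIT_TYPE_CLASSES = {
    "PDC-A": 12,        # 12 = bit type with the highest average ROP
    "PDC-B": 7,
    "roller-cone": 1,
}

def encode_record(bit_type, bit_wear, abrasiveness):
    """Translate one nonnumeric record into numeric network inputs."""
    if not 0 <= bit_wear <= 9:
        raise ValueError("bit wear is graded 0-9 (9 = tooth completely worn out)")
    if not 1 <= abrasiveness <= 8:
        raise ValueError("formation abrasiveness is graded 1-8")
    return [BIT_TYPE_CLASSES[bit_type], bit_wear, abrasiveness]

print(encode_record("PDC-A", 3, 5))  # -> [12, 3, 5]
```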

4. Experimental Design

4.1. Model Architecture. Determining the optimal size of a neural network model is complex. An overly complex network tends to memorize the data without learning to generalize to new situations: it concentrates too much on the data presented for learning and tends toward modeling the noise. The total number of ROP records in the target wells sums to 5500. The training subset constitutes 75% of the total data, and the testing subset includes the remaining 25%. As mentioned above, the inputs of the neural networks include 10

Table 3: Summary of results for the recorded parameters.

| Parameter | Range |
|---|---|
| Bit size (inch) | 5.813–8.5 |
| Bit type | 1–12 |
| Bit wear | 0–9 |
| UCS (psi) | 2412–21487 |
| Formation drillability | 30–75 |
| Formation abrasiveness | 1–8 |
| Rotary speed (rpm) | 84–126 |
| WOB (10³ lbs) | 24–69 |
| Drilling mud type | 1–3 |
| Mud viscosity (cp) | 1–70 |
| Pump pressure (psi) | 1651–3765 |
| ROP (ft/h) | 26.6–120.5 |

parameters, while the ROP is the only element in the output layer. Figure 3 shows the developed neural network architecture.
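The 75/25 partition of the 5500 records can be reproduced with a simple random split, sketched below (the paper does not state its exact splitting routine, so the seeded shuffle here is an assumption):

```python
import random

def split_dataset(records, train_fraction=0.75, seed=42):
    """Randomly partition records into training and testing subsets."""
    rng = random.Random(seed)
    shuffled = records[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

# 5500 ROP records, as in the study
train, test = split_dataset(list(range(5500)))
print(len(train), len(test))  # -> 4125 1375
```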

4.2. Model Performance Indicators. The performances of the neural network models are assessed by means of the correlation

Figure 3: Schematic of the architecture of the ELM network model. The input layer has 10 nodes (bit size, bit type, UCS, formation drillability, formation abrasiveness, pump pressure, rotary speed, WOB, mud type, and mud viscosity), and ROP is the only node at the output layer.

coefficient ($R^2$), mean absolute error (MAE), root mean square error (RMSE), and variance accounted for (VAF) performance indicators. $R^2$ represents the proportion of the overall variance explained by the model, which can be calculated as

$$R^2 = 1 - \frac{\sum_{i=1}^{n} \left( f(x_i) - y_i \right)^2}{\sum_{i=1}^{n} f(x_i)^2 - \left( \sum_{i=1}^{n} y_i \right)^2 / n}. \quad (12)$$

These performance indicators give a sense of how good the performance of a predictive model is relative to the actual values.

MSE can be described as

$$\mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} \left( f(x_i) - y_i \right)^2. \quad (13)$$

The RMSE is conventionally used as an error function for quality monitoring of the model; model performance improves as RMSE decreases. RMSE is calculated by the following equation:

$$\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( f(x_i) - y_i \right)^2}. \quad (14)$$

VAF is often used to evaluate the correctness of a model by comparing the measured values with the estimated output of the model. VAF is computed by the following equation:

$$\mathrm{VAF} = \left( 1 - \frac{\operatorname{var}\left[ f(x_i) - y_i \right]}{\operatorname{var}\left[ f(x_i) \right]} \right) \times 100, \quad (15)$$

where, for (12)–(15), $y_i$ is the measured data, $f(x_i)$ denotes the predicted data, $x_1, \ldots, x_n$ are the input samples, and $n$ is the total number of data points.
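Equations (12)–(15) translate directly into code. The NumPy sketch below implements them term by term as the paper defines them (note that (12) and (15) use the predicted values $f(x_i)$ in their denominators, exactly as written above):

```python
import numpy as np

def r_squared(pred, meas):
    """Eq. (12): proportion of overall variance explained by the model."""
    n = len(meas)
    return 1 - np.sum((pred - meas) ** 2) / (np.sum(pred ** 2) - np.sum(meas) ** 2 / n)

def mse(pred, meas):
    """Eq. (13): mean squared error."""
    return np.mean((pred - meas) ** 2)

def rmse(pred, meas):
    """Eq. (14): root mean squared error."""
    return np.sqrt(mse(pred, meas))

def vaf(pred, meas):
    """Eq. (15): variance accounted for, in percent."""
    return (1 - np.var(pred - meas) / np.var(pred)) * 100

y_meas = np.array([10.0, 20.0, 30.0])
y_pred = np.array([11.0, 19.0, 31.0])
print(rmse(y_pred, y_meas))  # -> 1.0
```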

4.3. Model Parameters Selection. For the ANN, ELM, and USA models, the kernel function and the number of hidden nodes are critical parameters for neural network accuracy.

The accuracy with different numbers of hidden nodes was compared in terms of the RMSE performance indicator. Once the kernel function is determined, the number of hidden nodes is increased incrementally until 100 is reached. Figure 4(a) illustrates the accuracy comparison with different node numbers for the ANN, ELM, and USA models. For the ELM model, the MSE between prediction results and measured values decreases as the node number of the hidden layer gradually increases once more than 65 nodes are used. For the USA model, the MSE between prediction results and measured values also decreases as the node number of the hidden layer gradually increases, except for individual volatile points. For the ANN model, the MSE between prediction results and measured values fluctuates as the node number of the hidden layer increases, which indicates that the choice of hidden nodes can greatly affect the final accuracy.
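A node-count sweep like the one in Figure 4(a) can be sketched with a minimal ELM: the hidden-layer weights are drawn at random and only the output weights are solved by least squares. This is a simplified illustration on synthetic data, not the implementation used in the study:

```python
import numpy as np

def elm_train(X, y, n_hidden, rng):
    """Fit a basic ELM: random hidden layer, least-squares output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # sigmoid kernel
    beta = np.linalg.pinv(H) @ y                  # Moore-Penrose solution
    return W, b, beta

def elm_predict(X, model):
    W, b, beta = model
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 3))
y = np.sin(X.sum(axis=1))                         # toy nonlinear target

# Sweep the hidden-node count and record training RMSE, as in Figure 4(a)
for n_hidden in (5, 20, 65):
    model = elm_train(X, y, n_hidden, rng)
    err = np.sqrt(np.mean((elm_predict(X, model) - y) ** 2))
    print(n_hidden, round(float(err), 4))
```

Because only the output weights are trained, each point of the sweep costs one pseudoinverse, which is why ELM training is so much faster than back-propagation in Figure 6.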

Four types of kernels have been tested using the training data set for the ELM model: the sigmoid function, radial-basis function, hardlim function, and triangular function. Figure 4(b) shows the accuracy comparison with different kernel functions for the ELM model. Among the four kernels, the sigmoid-based model reaches the threshold first, when the node number is set to 40, while an overfitting problem appears as the node number exceeds 60. The hardlim-based, triangular-based, and radial-basis-based models display a similar trend. However, it seems that the node number must exceed 60 before the minimum errors approach the threshold for the hardlim-based model. For the triangular-based model, the RMSE reaches its lowest point of 3.11 when the node number is set as 85, while for the radial-basis-based model the RMSE reaches its lowest point of 3.12 when the node number is set as 95. Figure 4(c) shows the accuracy comparison with different kernel functions for the USA model. Observe that the radial-basis-based model reaches the threshold first, when the node number is set as 65. As for the hardlim-based model, it seems that the node number must exceed 60 before the minimum

Figure 4: (a) RMSE change with increasing number of hidden nodes for the ANN, ELM, and USA models; (b) comparison of RMSE with different kernel functions (sigmoid, hardlim, radial-basis, triangular) for the ELM network model; (c) comparison of RMSE with different kernel functions for the USA network model.

errors approach the threshold, which is similar to the ELM model. The triangular-based and sigmoid-based models display a similar trend: the RMSE reaches its lowest point of 3.11 when the node number is set as 80 for the sigmoid model, while the RMSE reaches its lowest point of 3.48 when the node number is set as 94 for the triangular-based model.

For comparison, the classical ANN model was also used in this paper, with the sigmoid kernel selected. The applied ANN architecture is a feedforward neural network, a structure in which information propagates in one direction from input to output. A back-propagation algorithm with the Levenberg-Marquardt training function has been used for training. Such a network can approximate any nonlinear continuous function to arbitrary accuracy. Based on the experimental tests, the ANN model obtains its best prediction accuracy when the node number of the hidden layer is set as 36.

5. Results and Discussion

When the parameters for the neural network model are finally settled, the next step is to validate the model using a testing


Table 4: The calculated performance indicators for the ANN, ELM, and USA models.

| Data set | Performance indicator | ANN | ELM | USA |
|---|---|---|---|---|
| Training | RMSE | 1.51 | 0.95 | 0.78 |
| | MAE | 7.5 | 4.2 | 2.1 |
| | VAF | 96.88 | 100 | 100 |
| | $R^2$ | 0.91 | 0.93 | 0.98 |
| Testing | RMSE | 3.56 | 3.11 | 2.76 |
| | MAE | 12.7 | 9.7 | 7.65 |
| | VAF | 91.88 | 92.82 | 94.52 |
| | $R^2$ | 0.90 | 0.92 | 0.94 |

data set. In this section, prediction performance is compared among the ANN, ELM, and USA models while the training time is recorded. The model accuracy can be evaluated by comparing the calculated RMSE, VAF, MAE, and $R^2$ performance indicators for the training and testing data sets presented in Table 4. Due to the random subdivision process, the performances of the models for the training and validation data sets are slightly different. As seen from Table 4, the prediction performance in terms of the predefined indices is slightly better for the training set than for the testing set. Overall, $R^2$ values for the developed models were more than 0.95 at the training phase, while in the case of the conventional ANN at the testing phase $R^2$ was found to be 0.90. The high performance of the models for the testing set can be considered an indication of the good generalization capabilities of the models. An $R^2$ value greater than 0.9 indicates a very satisfactory model performance, while an $R^2$ value between 0.8 and 0.9 indicates an acceptable model. Therefore, all of the developed neural networks are competent for ROP prediction. The $R^2$ of 0.99 for both the ELM and USA models in the training phase indicates higher accuracy compared to the ANN model ($R^2$ of 0.88). Moreover, it can also be observed that the highest VAF of 94.52, the lowest RMSE of 2.76, and the lowest MAE of 7.65 belong to the USA model, indicating that the USA-based model gives better prediction performance than the other models. In addition, the ELM model also produces acceptable results, yielding slightly smaller residual errors than the ANN model; in other words, the deviation from the observed values of penetration rate predicted by ELM is less than that of the ANN model's prediction. Figure 5 shows the regression analysis of the predicted and measured ROP by the different models in both the training and testing phases. According to the running speed with increasing numbers of hidden nodes in Figure 6, obvious differences are observed: the time cost for the ELM model is less than that of the other two models. To visualize the quality of the prediction, the predicted penetration rates and residual errors of the different models are compared with the measured values for the overall data set in Figures 7 and 8, respectively. The low deviation obtained by all three models also proves that ANN, ELM, and USA are competent for ROP prediction. Thus, the ELM and USA methods can both achieve high accuracy and maintain high running speeds. This study shows that ELM technology is a promising tool for ROP prediction, and this work can

be incorporated into a software system that can be used in drilling optimization guidance.

6. Conclusions

(1) This study investigates the feasibility of ELM and its variation, the USA model, for predicting the ROP of offshore drilling operations using recorded data. The ELM and USA models can successfully predict ROP in real time, and this prediction can be used as a reference range for drilling engineers when identifying drilling problems and making decisions.

(2) Choosing the relevant parameters is typically difficult, and previous literature informed the input selection. The input rock material properties are the uniaxial compressive strengths of the different rock types, formation drillability, and abrasiveness, which can be calculated from lab cores and well logging data. Moreover, the pump pressure and differential pressure are selected as important hydraulic parameters that affect ROP, while RPM and WOB are treated as the equipment operational parameters. In addition to these parameters, drilling mud properties such as mud viscosity and bit-related parameters are also involved in developing the neural network models. However, for systems with data values beyond the range of this study, or for a new bit, it is necessary to develop new neural networks. Conversely, a wider range of inputs can enable the developed neural network models to have wider applications in select cases.

(3) The results of the USA model were compared to those of the classical ANN and ELM models. Performance of the models was checked by using the $R^2$, MAE, RMSE, and VAF performance indicators. According to these indicators, all of the neural networks are competent for ROP prediction, but the prediction performance of the USA model was found to be better than that of the other two models with respect to accuracy, while the ELM model had the lowest running time. Compared to conventional ANN models, the results of this work show the potential of fast and flexible neural network approaches to model the ROP with high accuracy while maintaining running speed. Therefore, drilling engineers can make better

Figure 5: Regression analysis of the predicted and measured ROP (ft/h), with data points plotted against the 1:1 line: (a) ANN testing and training ($R^2$ = 0.9077 and 0.916); (b) ELM testing and training ($R^2$ = 0.9233 and 0.9319); (c) USA testing and training ($R^2$ = 0.9422 and 0.9829).

Figure 6: Time cost (s) with the change of the number of hidden nodes for the ANN, ELM, and USA models.

Figure 7: The predicted and measured ROP (ft/h) along the depth (8600–8800 ft) for the developed models.

Figure 8: Residual errors of predicted ROP by the developed models.

choices according to accuracy and computational demand in practical use.

Competing Interests

The authors declare that there are no competing interests regarding the publication of this paper.

Acknowledgments

This work was financially supported by the National Key Basic Research Program of China (2015CB251206), the Fundamental Research Funds for the Central Universities (nos. 15CX05053A, 15CX08011A, and 15CX02009A), the Qingdao Science and Technology Project (no. 15-9-1-55-jch), the National Natural Science Foundation of China (no. 61305075), the Natural Science Foundation of Shandong Province (no. ZR2013FQ004), and the Specialized Research Fund for the Doctoral Program of Higher Education of China (no. 20130133120014).




OP

Figure 8 Residual errors of predicted ROP by developed models

choices according to accuracy and computationaldemand in practical use

Competing Interests

The authors declare that there are no competing interestsregarding the publication of this paper

Acknowledgments

This work was financially supported by the National KeyBasic Research Program of China (2015CB251206) Fun-damental Research Funds for the Central Universities(nos 15CX05053A and 15CX08011A and no 15CX02009A)Qingdao Science and Technology Project (no 15-9-1-55-jch) National Natural Science Foundation of China (no61305075) the Natural Science Foundation of ShandongProvince (no ZR2013FQ004) and the Specialized ResearchFund for theDoctoral Program ofHigher Education of China(no 20130133120014)

References

[1] V P Perrin M G Wilmot and W L Alexander ldquoDrillingindexmdasha new approach to bit performance evaluationrdquo inProceedings of the SPEIADC Drilling Conference Paper SPE37595 pp 199ndash205 Amsterdam The Netherlands March 1997

[2] M Bataee and S Mohseni ldquoApplication of artificial intelligentsystems in ROP optimization a case study in Shadegan oilfieldrdquo in Proceedings of the SPEMiddle East Unconventional GasConference and Exhibition (UGAS rsquo11) pp 13ndash22 February 2011

[3] M H Bahari A Bahari F N Moharrami and M B N SistanildquoDetermining Bourgoyne and Young model coefficients usinggenetic algorithm to predict drilling raterdquo Journal of AppliedSciences vol 8 no 17 pp 3050ndash3054 2008

[4] D F HowarthW R Adamson and J R Berndt ldquoCorrelation ofmodel tunnel boring and drilling machine performances withrock propertiesrdquo International Journal of Rock Mechanics andMining Sciences vol 23 no 2 pp 171ndash175 1986

[5] S Kahraman ldquoCorrelation of TBM and drilling machineperformances with rock brittlenessrdquo Engineering Geology vol65 no 4 pp 269ndash283 2002

[6] S Akin and C Karpuz ldquoEstimating drilling parameters for dia-mond bit drilling operations using artificial neural networksrdquoInternational Journal of Geomechanics vol 8 no 1 pp 68ndash732008

[7] V Uboldi L Civolani and F Zausa ldquoRock strength measure-ments on cuttings as input data for optimizing drill bit selec-tionrdquo in Proceedings of the SPEAnnual Technical Conference andExhibition Paper SPE 56441 pp 35ndash43 Houston Tex USAOctober 1999

[8] M J Fear ldquoHow to improve rate of penetration in field opera-tionsrdquo in Proceedings of the SPEIADC Drilling ConferencePaper SPE 55050 New Orleans La USA 1998

[9] R Rommetveit K S Bjoslashrkevoll G W Halsey et al ldquoDrilltron-ics an integrated system for real-time optimization of the drill-ing processrdquo in Proceedings of the IADCSPE Drilling Confer-ence vol 87124 pp 275ndash282 SPE Dallas Tex USA March2004

Mathematical Problems in Engineering 13

[10] A H Paiaman M K Ghassem Al-Askari B Salmani B D Al-Anazi andMMasihi ldquoEffect of drilling fluid properties on rateof Penetrationrdquo NAFTA vol 60 no 3 pp 129ndash134 2009

[11] A Bahari and A B Seyed ldquoTrust region approach to findconstants of Bourgoyne and Young penetration rate model inKhangiran Iranian gas fieldrdquo in Proceedings of the SPE LatinAmerican and Caribean Petroleum Engineering ConferencePaper SPE 107520 Buenos Aires Argentina 2007

[12] H I Bilgesu L T Tetrick U Altmis Sh Mohaghegh andS Ameri ldquoA new approach for the prediction of penetration(ROP) valuesrdquo in Proceedings of the SPE Eastern RegionalMeeting Paper SPE 39231 Loaington Kentucky 1997

[13] R Jahanbakhshi R Keshavarzi and A Jafarnezhad ldquoReal-timeprediction of rate of penetration during drilling operation in oiland gas wellsrdquo in Proceedings of the 46th US Rock MechanicsGeomechanics Symposium pp 2390ndash2396 Chicago Ill USAJune 2012

[14] S Yilmaz C Demircioglu and S Akin ldquoApplication of artificialneural networks to optimum bit selectionrdquo Computers andGeosciences vol 28 no 2 pp 261ndash269 2002

[15] H Rahimzadeh M Mostofi A Hashemi and K SalahshoorldquoComparison of the penetration rate models using field data forone of the gas fields in Persian Gulf Areardquo in Proceedings of theInternational Oil and Gas Conference and Exhibition Paper SPE131253 Beijing China 2010

[16] G-B Huang D H Wang and Y Lan ldquoExtreme learningmachines a surveyrdquo International Journal of Machine Learningand Cybernetics vol 2 no 2 pp 107ndash122 2011

[17] G-B Huang H Zhou X Ding and R Zhang ldquoExtremelearning machine for regression and multiclass classificationrdquoIEEE Transactions on Systems Man and Cybernetics Part BCybernetics vol 42 no 2 pp 513ndash529 2012

[18] A Mozaffari and N L Azad ldquoOptimally pruned extreme learn-ing machine with ensemble of regularization techniques andnegative correlation penalty applied to automotive engine cold-start hydrocarbon emission identificationrdquo Neurocomputingvol 131 pp 143ndash156 2014

[19] R Moreno F Corona A Lendasse M Grana and L SGalvao ldquoExtreme learning machines for soybean classificationin remote sensing hyperspectral imagesrdquo Neurocomputing vol128 pp 207ndash216 2014

[20] D Yu and D Li ldquoEfficient and effective algorithms for trainingsingle-hidden-layer neural networksrdquo Pattern Recognition Let-ters vol 33 no 5 pp 554ndash558 2012

[21] C Gokceoglu and K Zorlu ldquoA fuzzy model to predict the uni-axial compressive strength and the modulus of elasticity of aproblematic rockrdquo Engineering Applications of Artificial Intelli-gence vol 17 no 1 pp 61ndash72 2004

[22] J Zhang J Li Y Hu and J Zhou ldquoThe identification methodof igneous rock lithology based on data mining technologyrdquoApplied Mechanics and Materials vol 466-467 pp 65ndash69 2012

[23] J Cao J Yang YWang DWang and Y Shi ldquoExtreme learningmachine for reservoir parameter estimation in heterogeneoussandstone reservoirrdquo Mathematical Problems in Engineeringvol 2015 Article ID 287816 10 pages 2015

[24] E Khamehchi I R Kivi and M Akbari ldquoA novel approach tosand production prediction using artificial intelligencerdquo Journalof Petroleum Science and Engineering vol 123 pp 147ndash154 2014

[25] K Amar and A Ibrahim ldquoRate of penetration prediction andoptimization using advances in artificial neural networks acomparative studyrdquo in Proceedings of the 4th International Joint

Conference on Computational Intelligence (IJCCI rsquo12) pp 647ndash652 Barcelona Spain October 2012

[26] D Moran H Ibrahim A Purwanto and J Osmond ldquoSophis-ticated ROP prediction technology based on neural networkdelivers accurate resultsrdquo in Proceedings of the IADCSPE AsiaPacific Drilling Technology Conference and Exhibition SPE132010 pp 100ndash108 Ho Chi Minh City Vietnam November2010

[27] M Monazami A Hashemi and M Shahbazian ldquoDrilling rateof penetration prediction using artificial nerual network a casestudy of one of Iranian southern oil fieldsrdquo Journal of Oil andGas Business no 6 pp 21ndash31 2012

[28] G-B Huang Q-Y Zhu and C-K Siew ldquoExtreme learningmachine theory and applicationsrdquoNeurocomputing vol 70 no1ndash3 pp 489ndash501 2006

[29] X Liu Y Chang H Lu et al ldquoOptimizing fracture stimulationin low-permeability oil reservoirs in the ordos basinrdquo inProceedings of the SPE Asia Pacific Oil and Gas Conference andExhibition Paper SPE158562 Perth Australia 2012

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Mathematical Problems in Engineering

Figure 4: (a) The RMSE change with the increase of hidden nodes for the ANN, ELM, and USA models; (b) comparison of RMSE with different kernel functions (sigmoid, hardlim, radbas, and tribas) for the ELM network model; (c) comparison of RMSE with different kernel functions for the USA network model.

errors are close to the threshold, which is similar to the ELM model. The triangular-basis and sigmoid-based models display a similar trend: the RMSE reaches its lowest point of 3.11 when the node number is set to 80 for the sigmoid model, while the RMSE reaches its lowest point of 3.48 when the node number is set to 94 for the triangular-basis model.
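As a concrete illustration of the activation-function comparison above, an ELM regressor can be sketched in a few lines of NumPy: the hidden-layer weights are random and untrained, the activation is a plug-in choice (sigmoid, hardlim, radbas, or tribas), and the output weights are obtained in closed form via the Moore-Penrose pseudoinverse. The data below are synthetic, not the field data set used in this study.

```python
import numpy as np

def make_activation(name):
    # The four activation functions compared in Figure 4(b)/(c).
    return {
        "sig":     lambda x: 1.0 / (1.0 + np.exp(-x)),
        "hardlim": lambda x: (x >= 0).astype(float),
        "radbas":  lambda x: np.exp(-x ** 2),
        "tribas":  lambda x: np.maximum(1.0 - np.abs(x), 0.0),
    }[name]

class ELM:
    def __init__(self, n_hidden=80, activation="sig", seed=0):
        self.n_hidden = n_hidden
        self.act = make_activation(activation)
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        # Hidden-layer weights and biases are drawn at random and never trained.
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = self.act(X @ self.W + self.b)
        # Output weights: least-squares solution via the pseudoinverse.
        self.beta = np.linalg.pinv(H) @ y
        return self

    def predict(self, X):
        return self.act(X @ self.W + self.b) @ self.beta

# Synthetic demonstration (not the field data).
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 5))
y = np.sin(X).sum(axis=1)
model = ELM(n_hidden=80, activation="sig").fit(X[:150], y[:150])
rmse = np.sqrt(np.mean((model.predict(X[150:]) - y[150:]) ** 2))
```

Because fitting reduces to a single pseudoinverse, sweeping the node number (as in Figure 4) only means re-running `fit` with a different `n_hidden`.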

For comparison, the classical ANN model was also used in this paper, with the sigmoid kernel selected. The applied ANN architecture is a feedforward neural network, a structure in which information propagates in one direction from input to output. A back-propagation algorithm with the Levenberg-Marquardt training function has been used for training; this algorithm can approximate any nonlinear continuous function to arbitrary accuracy. Based on experimental tests, the ANN model obtains its best prediction accuracy when the node number of the hidden layer is set to 36.
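The Levenberg-Marquardt training described above can be sketched with SciPy's least-squares optimizer by flattening all network weights into one parameter vector and minimizing the prediction residuals. This is an illustrative reconstruction on synthetic data, not the authors' implementation; the small network (8 hidden nodes rather than the 36 reported for the field data) is chosen only to keep the example fast.

```python
import numpy as np
from scipy.optimize import least_squares

def unpack(theta, n_in, n_hid):
    # Flat parameter vector -> (W1, b1, w2, b2) of a one-hidden-layer net.
    i = 0
    W1 = theta[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = theta[i:i + n_hid]; i += n_hid
    w2 = theta[i:i + n_hid]; i += n_hid
    b2 = theta[i]
    return W1, b1, w2, b2

def forward(theta, X, n_hid):
    W1, b1, w2, b2 = unpack(theta, X.shape[1], n_hid)
    H = np.tanh(X @ W1 + b1)      # hidden layer
    return H @ w2 + b2            # linear output

def residuals(theta, X, y, n_hid):
    # Levenberg-Marquardt minimizes the sum of squared residuals.
    return forward(theta, X, n_hid) - y

# Synthetic data standing in for the drilling records.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(120, 4))
y = X[:, 0] * X[:, 1] + np.sin(X[:, 2])

n_hid = 8
n_par = X.shape[1] * n_hid + 2 * n_hid + 1
theta0 = 0.1 * rng.standard_normal(n_par)
fit = least_squares(residuals, theta0, args=(X, y, n_hid), method="lm")
train_rmse = np.sqrt(np.mean(fit.fun ** 2))
```

Note that `method="lm"` requires at least as many residuals as parameters, which is why the sample count (120) exceeds `n_par` here.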

5. Results and Discussion

When the parameters for the neural network model are finally settled, the next step is to validate the model using a testing


Table 4: The calculated performance indicators for the ANN, ELM, and USA models.

Data set   Indicator   ANN     ELM     USA
Training   RMSE        1.51    0.95    0.78
           MAE         7.5     4.2     2.1
           VAF         96.88   100     100
           R^2         0.91    0.93    0.98
Testing    RMSE        3.56    3.11    2.76
           MAE         12.7    9.7     7.65
           VAF         91.88   92.82   94.52
           R^2         0.90    0.92    0.94
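The four indicators reported in Table 4 follow their standard definitions for regression, with VAF expressed as a percentage of explained variance. A minimal sketch on dummy ROP values (the field data set is not public):

```python
import numpy as np

def indicators(y_true, y_pred):
    """RMSE, MAE, VAF (%), and R^2 as commonly defined for regression."""
    err = y_true - y_pred
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    # VAF: share of the target variance not left in the residuals.
    vaf = (1.0 - np.var(err) / np.var(y_true)) * 100.0
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return {"RMSE": rmse, "MAE": mae, "VAF": vaf, "R2": r2}

y_true = np.array([50.0, 60.0, 70.0, 80.0, 90.0])   # dummy ROP values, ft/h
y_pred = np.array([52.0, 58.0, 71.0, 79.0, 92.0])
scores = indicators(y_true, y_pred)
```

All four values move together for a good model: RMSE and MAE toward 0, VAF toward 100, and R^2 toward 1.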

data set. In this section, the prediction performance of the ANN, ELM, and USA models is compared and the training time is recorded. Model accuracy can be evaluated by comparing the calculated RMSE, VAF, MAE, and R^2 performance indicators for the training and testing data sets presented in Table 4. Because of the random subdivision process, the performances of the models on the training and validation data sets are slightly different. As seen from Table 4, the prediction performance in terms of the predefined indices is better for the training set than for the testing set, as expected; the models' performance on the testing set nevertheless remains high, which can be taken as an indication of their good generalization capability. R^2 values for all of the developed models were above 0.91 in the training phase, while for the conventional ANN in the testing phase R^2 was found to be 0.90. An R^2 value greater than 0.9 indicates a very satisfactory model performance, while an R^2 value between 0.8 and 0.9 indicates an acceptable model; therefore, all of the developed neural networks are competent for ROP prediction. The training-phase R^2 values of 0.93 and 0.98 for the ELM and USA models indicate higher accuracy than the ANN model (R^2 of 0.91). Moreover, the highest VAF of 94.52, the lowest RMSE of 2.76, and the lowest MAE of 7.65 on the testing set all belong to the USA model, indicating that the USA-based model gives better prediction performance than the other models. The ELM model also produces acceptable results, yielding slightly smaller residual errors than the ANN model; in other words, the deviation of the penetration rates predicted by ELM from the observed values is less than that of the ANN model's predictions. Figure 5 shows the regression analysis of the predicted and measured ROP for the different models in both the training and testing phases. Regarding running speed as the number of hidden nodes increases (Figure 6), obvious differences are observed: the time cost of the ELM model is lower than that of the other two models. To visualize the quality of the predictions, the penetration rates and residual errors predicted by the different models are compared with the measured values over the whole data set in Figures 7 and 8, respectively. The low deviations obtained by all three models again confirm that ANN, ELM, and USA are competent for ROP prediction, and that the ELM and USA methods achieve high accuracy while maintaining high running speed. This study shows that ELM technology is a promising tool for ROP prediction, and this work can be incorporated into a software system for drilling optimization guidance.
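The running-speed gap in Figure 6 follows from ELM's one-shot training: the only work is a single forward pass plus one pseudoinverse, with no iterative weight updates. A hedged sketch of how such a timing curve could be generated on synthetic data (absolute times will vary by machine):

```python
import time
import numpy as np

def elm_fit_time(X, y, n_hidden, seed=0):
    """Train a sigmoid ELM once; return the training time in seconds."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    t0 = time.perf_counter()
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # one forward pass
    beta = np.linalg.pinv(H) @ y             # one closed-form solve
    return time.perf_counter() - t0

# Synthetic stand-in for the field records.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 6))
y = np.sin(X).sum(axis=1)

# Sweep the hidden-node count, as on the x-axis of Figure 6.
times = {n: elm_fit_time(X, y, n) for n in (10, 40, 80)}
```

A back-propagation network would instead loop over many epochs of gradient (or Levenberg-Marquardt) updates at each node count, which is why its curve in Figure 6 grows much faster.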

6. Conclusions

(1) This study investigates the feasibility of the ELM and its variant, the USA model, for predicting the ROP of offshore drilling operations using recorded field data. The ELM and USA models can successfully predict ROP in real time, and this prediction can be used as a reference range by drilling engineers for identifying drilling problems and making decisions.

(2) Choosing the relevant input parameters is typically difficult, and previous literature informed the input selection. The input rock material properties are the uniaxial compressive strengths of the different rock types, formation drillability, and abrasiveness, which can be calculated from laboratory cores and well logging data. Moreover, the pump pressure and differential pressure are selected as the important hydraulic parameters that affect ROP, while RPM and WOB are treated as the equipment operational parameters. In addition to these parameters, drilling mud properties such as mud viscosity, as well as bit-related parameters, are also involved in developing the neural network models. However, for systems with data values beyond the range of this study, or for a new bit, it is necessary to develop new neural networks. Conversely, a wider range of inputs can enable the developed neural network models to have wider applications in select cases.

(3) The results of the USA model were compared with those of the classical ANN and ELM models. The performance of the models was checked using the R^2, MAE, RMSE, and VAF performance indicators. According to these indicators, all of the neural networks are competent for ROP prediction, but the prediction performance of the USA model was found to be better than that of the other two models with respect to accuracy, while the ELM model had the shortest running time. Compared with conventional ANN models, the results of this work show the potential of fast and flexible neural network approaches to model ROP with high accuracy while maintaining running speed. Therefore, drilling engineers can make better


Figure 5: Regression analysis of the predicted and measured ROP (ft/h) against the 1:1 line. (a) ANN: R^2 = 0.916 (training) and 0.9077 (testing); (b) ELM: R^2 = 0.9319 (training) and 0.9233 (testing); (c) USA: R^2 = 0.9829 (training) and 0.9422 (testing).


Figure 6: Time cost (s) of the ANN, ELM, and USA models with the change of the number of hidden nodes.

Figure 7: The predicted and measured ROP (ft/h) along the depth (8600-8800 ft) for the developed models.

Figure 8: Residual errors of the ROP predicted by the developed models.

choices according to accuracy and computational demand in practical use.

Competing Interests

The authors declare that there are no competing interests regarding the publication of this paper.

Acknowledgments

This work was financially supported by the National Key Basic Research Program of China (2015CB251206), the Fundamental Research Funds for the Central Universities (nos. 15CX05053A, 15CX08011A, and 15CX02009A), the Qingdao Science and Technology Project (no. 15-9-1-55-jch), the National Natural Science Foundation of China (no. 61305075), the Natural Science Foundation of Shandong Province (no. ZR2013FQ004), and the Specialized Research Fund for the Doctoral Program of Higher Education of China (no. 20130133120014).



10 Mathematical Problems in Engineering

Table 4 The calculated performance indicators for ANN ELM and USA model

Data set Performance indicator ANN ELM USA

Training

RMSE 151 095 078MAE 75 42 21VAF 9688 100 1001198772 091 093 098

Testing

RMSE 356 311 276MAE 127 97 765VAF 9188 9282 94521198772 090 092 094

data set In this section prediction performance is madebetween ANN ELM and USAmodels while training time isrecordedThemodel accuracy can be evaluated by comparingthe calculated RMSE VAF MAE and 1198772 performance indi-cators for training and testing data sets presented in Table 3Due to random subdivision processes the performances ofthe models for training and validation data sets are slightlydifferent As seen from Table 4 the prediction performancein terms of the predefined indices is better for the testing setthan for the training set Overall 1198772 values for the developedmodels were more than 095 at the training phase while inthe case of the conventional ANN at the testing phase 1198772was found to be 09 The high performance of the modelsfor the testing set can be considered to be an indicationof the good generalization capabilities of the models 1198772value greater than 09 indicates a very satisfactory modelperformance while 1198772 value between 08 and 09 indicatesan acceptable model Therefore all of the developed neuralnetworks are competent for ROP prediction 1198772 of 099 forboth the ELMandUSAmodels in the training phase indicateshigher accuracy compared to the ANN model (1198772 of 088)Moreover it also can be observed that the highest VAF of9452 the lowest RMSE of 276 andMAE of 765 belong to theUSAmodel indicating that theUSA basedmodel gives betterprediction performance than the other models In additionthe ELM model can also produce acceptable results whichyielded slightly fewer residual errors than the ANN modelsIn other words the deviation from the observed values ofpenetration rate predicted by ELM is less than the deviationof the ANNmodelrsquos prediction Figure 5 shows the regressionanalysis of the predicted and measured ROP by differentmodels in both training and testing phases According tothe running speed with the increase of hidden numbers inFigure 6 the obvious differences are observed concludingthat the cost time for the ELMmodel is less 
than the other twomodels To visualize the quality of the prediction predictedpenetration rates and residual errors by different models arecompared with themeasured ones for the overall data set andshown in Figures 7 and 8 respectively This low deviationobtained by three models also proves that ANN ELM andUSA are competent for ROP prediction Thus the ELM andUSA methods can both achieve high accuracy and maintainhigh running speeds This study shows that ELM technologyis a promising tool for ROP prediction and this work can

be incorporated into a software system that can be used indrilling optimization guidance

6 Conclusions

(1) This study investigates the feasibility of ELM and itsvariation the USA model for predicting the ROPof offshore drilling operations using recoded dataThis ELM and USA models can successfully predictROP in real time and this prediction can be usedas a reference range for drilling engineers identifyingdrilling problems and making decisions

(2) Choosing the relevant parameters is typically diffi-cult and previous literature informed input selec-tion Input rock material properties are the uniaxialcompressive strengths of the different rock typesformation drillability and abrasiveness which canbe calculated from lab cores and well logging dataMoreover the pump pressure and differential pres-sure are selected as important hydraulic parametersthat affect ROP while RPM and WOB are treated asthe equipment operational parameters In additionto these parameters drilling mud properties suchas mud viscosity and bit related parameters arealso involved in developing neural network modelsHowever for systems with data values beyond therange of this study or for a new bit it is necessary todevelop new neural networks Conversely the widerrange of input can enable the neural network modelsdeveloped to have wider applications in select cases

(3) The results of the USA model were compared with those of the classical ANN and ELM models. The performance of the models was checked using the R², MAE, RMSE, and VAF performance indicators. According to these indicators, all of the neural networks are competent for ROP prediction, but the prediction performance of the USA model was found to be better than that of the other two models with respect to accuracy, while the ELM model had the shortest running time. Compared with conventional ANN models, the results of this work show the potential of fast and flexible neural network approaches to model the ROP with high accuracy while maintaining running speed. Therefore, drilling engineers can make better choices according to accuracy and computational demand in practical use.

Mathematical Problems in Engineering

[Figure 5: Regression analysis of the predicted and measured ROP (ft/h), plotted against the 1:1 line. (a) ANN testing and training (R² = 0.9077 and R² = 0.916); (b) ELM testing and training (R² = 0.9233 and R² = 0.9319); (c) USA testing and training (R² = 0.9422 and R² = 0.9829).]

[Figure 6: Time cost (s) with the change of the number of hidden layers (0–110) for the ANN, ELM, and USA models.]

[Figure 7: The predicted and measured ROP (ft/h) along the depth (8600–8800 ft) with the developed models.]

[Figure 8: Residual errors of the ROP predicted by the developed models.]
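The non-iterative training that gives ELM its speed advantage can be sketched as follows. This is a generic single-hidden-layer ELM regressor, not the authors' implementation; the eight-column input layout mirrors the parameter groups of conclusion (2), but the feature names, tanh activation, and hidden-node count are assumptions:

```python
import numpy as np

# Illustrative input layout for the parameter groups discussed above;
# the column order and names are assumptions, not taken from the paper.
FEATURES = ["ucs", "drillability", "abrasiveness", "pump_pressure",
            "diff_pressure", "rpm", "wob", "mud_viscosity"]

class SimpleELM:
    """Minimal single-hidden-layer extreme learning machine for regression."""

    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        # Hidden-layer weights and biases are drawn at random and never
        # iteratively trained, so fitting reduces to one least-squares solve.
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)     # hidden-layer output matrix
        self.beta = np.linalg.pinv(H) @ y    # Moore-Penrose output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta
```

Because training is a single pseudoinverse solve rather than gradient descent, its cost grows only with the size of the hidden layer, which is consistent with the running-time comparison in Figure 6.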

Competing Interests

The authors declare that there are no competing interests regarding the publication of this paper.

Acknowledgments

This work was financially supported by the National Key Basic Research Program of China (2015CB251206), the Fundamental Research Funds for the Central Universities (nos. 15CX05053A, 15CX08011A, and 15CX02009A), the Qingdao Science and Technology Project (no. 15-9-1-55-jch), the National Natural Science Foundation of China (no. 61305075), the Natural Science Foundation of Shandong Province (no. ZR2013FQ004), and the Specialized Research Fund for the Doctoral Program of Higher Education of China (no. 20130133120014).



Mathematical Problems in Engineering 11

Data points Data points

R2 = 09077

Line 1 1

R2 = 0916

Line 1 1

0

30

60

90

120

150RO

P (ft

h)

30 60 90 120 1500ROP (fth)

30 60 90 120 1500ROP (fth)

0

30

60

90

120

150

ROP

(fth

)

(a) ANN testing and training

Data points Data points

R2 = 09233 R2 = 09319

Line 1 1 Line 1 1

30 60 90 120 1500ROP (fth)

0

30

60

90

120

150

ROP

(fth

)

30 60 90 120 1500ROP (fth)

0

30

60

90

120

150RO

P (ft

h)

(b) ELM testing and training

R2 = 09422 R2 = 09829

30 60 90 120 1500ROP (fth)

0

30

60

90

120

150

ROP

(fth

)

Line 1 1 Line 1 1

Data points Data points

30 60 90 120 1500ROP (fth)

0

30

60

90

120

150

ROP

(fth

)

(c) USA testing and training

Figure 5 Regression analysis of the predicted and measured ROP

12 Mathematical Problems in Engineering

020406080

100120140160180200

Tim

e (s)

10 20 30 40 50 60 70 80 90 100 1100Number of hidden layers

ANN modelELM model

USA model

Figure 6 Time cost with the change of number of hidden layers

ANN modelELM model

USA modelMeasured

25

50

75

100

ROP

(fth

)

8650 8700 8750 88008600Depth (ft)

Figure 7 The predicted and measured ROP along the depth withdeveloped models

ANN modelELM model

USA model

minus10

minus5

0

5

10

Resid

ual e

rror

of R

OP

Figure 8 Residual errors of predicted ROP by developed models

choices according to accuracy and computationaldemand in practical use

Competing Interests

The authors declare that there are no competing interestsregarding the publication of this paper

Acknowledgments

This work was financially supported by the National KeyBasic Research Program of China (2015CB251206) Fun-damental Research Funds for the Central Universities(nos 15CX05053A and 15CX08011A and no 15CX02009A)Qingdao Science and Technology Project (no 15-9-1-55-jch) National Natural Science Foundation of China (no61305075) the Natural Science Foundation of ShandongProvince (no ZR2013FQ004) and the Specialized ResearchFund for theDoctoral Program ofHigher Education of China(no 20130133120014)

References

[1] V P Perrin M G Wilmot and W L Alexander ldquoDrillingindexmdasha new approach to bit performance evaluationrdquo inProceedings of the SPEIADC Drilling Conference Paper SPE37595 pp 199ndash205 Amsterdam The Netherlands March 1997

[2] M Bataee and S Mohseni ldquoApplication of artificial intelligentsystems in ROP optimization a case study in Shadegan oilfieldrdquo in Proceedings of the SPEMiddle East Unconventional GasConference and Exhibition (UGAS rsquo11) pp 13ndash22 February 2011

[3] M H Bahari A Bahari F N Moharrami and M B N SistanildquoDetermining Bourgoyne and Young model coefficients usinggenetic algorithm to predict drilling raterdquo Journal of AppliedSciences vol 8 no 17 pp 3050ndash3054 2008

[4] D F HowarthW R Adamson and J R Berndt ldquoCorrelation ofmodel tunnel boring and drilling machine performances withrock propertiesrdquo International Journal of Rock Mechanics andMining Sciences vol 23 no 2 pp 171ndash175 1986

[5] S Kahraman ldquoCorrelation of TBM and drilling machineperformances with rock brittlenessrdquo Engineering Geology vol65 no 4 pp 269ndash283 2002

[6] S Akin and C Karpuz ldquoEstimating drilling parameters for dia-mond bit drilling operations using artificial neural networksrdquoInternational Journal of Geomechanics vol 8 no 1 pp 68ndash732008

[7] V Uboldi L Civolani and F Zausa ldquoRock strength measure-ments on cuttings as input data for optimizing drill bit selec-tionrdquo in Proceedings of the SPEAnnual Technical Conference andExhibition Paper SPE 56441 pp 35ndash43 Houston Tex USAOctober 1999

[8] M J Fear ldquoHow to improve rate of penetration in field opera-tionsrdquo in Proceedings of the SPEIADC Drilling ConferencePaper SPE 55050 New Orleans La USA 1998

[9] R Rommetveit K S Bjoslashrkevoll G W Halsey et al ldquoDrilltron-ics an integrated system for real-time optimization of the drill-ing processrdquo in Proceedings of the IADCSPE Drilling Confer-ence vol 87124 pp 275ndash282 SPE Dallas Tex USA March2004

Mathematical Problems in Engineering 13

[10] A H Paiaman M K Ghassem Al-Askari B Salmani B D Al-Anazi andMMasihi ldquoEffect of drilling fluid properties on rateof Penetrationrdquo NAFTA vol 60 no 3 pp 129ndash134 2009

[11] A Bahari and A B Seyed ldquoTrust region approach to findconstants of Bourgoyne and Young penetration rate model inKhangiran Iranian gas fieldrdquo in Proceedings of the SPE LatinAmerican and Caribean Petroleum Engineering ConferencePaper SPE 107520 Buenos Aires Argentina 2007

[12] H I Bilgesu L T Tetrick U Altmis Sh Mohaghegh andS Ameri ldquoA new approach for the prediction of penetration(ROP) valuesrdquo in Proceedings of the SPE Eastern RegionalMeeting Paper SPE 39231 Loaington Kentucky 1997

[13] R Jahanbakhshi R Keshavarzi and A Jafarnezhad ldquoReal-timeprediction of rate of penetration during drilling operation in oiland gas wellsrdquo in Proceedings of the 46th US Rock MechanicsGeomechanics Symposium pp 2390ndash2396 Chicago Ill USAJune 2012

[14] S Yilmaz C Demircioglu and S Akin ldquoApplication of artificialneural networks to optimum bit selectionrdquo Computers andGeosciences vol 28 no 2 pp 261ndash269 2002

[15] H Rahimzadeh M Mostofi A Hashemi and K SalahshoorldquoComparison of the penetration rate models using field data forone of the gas fields in Persian Gulf Areardquo in Proceedings of theInternational Oil and Gas Conference and Exhibition Paper SPE131253 Beijing China 2010

[16] G-B Huang D H Wang and Y Lan ldquoExtreme learningmachines a surveyrdquo International Journal of Machine Learningand Cybernetics vol 2 no 2 pp 107ndash122 2011

[17] G-B Huang H Zhou X Ding and R Zhang ldquoExtremelearning machine for regression and multiclass classificationrdquoIEEE Transactions on Systems Man and Cybernetics Part BCybernetics vol 42 no 2 pp 513ndash529 2012

[18] A Mozaffari and N L Azad ldquoOptimally pruned extreme learn-ing machine with ensemble of regularization techniques andnegative correlation penalty applied to automotive engine cold-start hydrocarbon emission identificationrdquo Neurocomputingvol 131 pp 143ndash156 2014

[19] R Moreno F Corona A Lendasse M Grana and L SGalvao ldquoExtreme learning machines for soybean classificationin remote sensing hyperspectral imagesrdquo Neurocomputing vol128 pp 207ndash216 2014

[20] D Yu and D Li ldquoEfficient and effective algorithms for trainingsingle-hidden-layer neural networksrdquo Pattern Recognition Let-ters vol 33 no 5 pp 554ndash558 2012

[21] C Gokceoglu and K Zorlu ldquoA fuzzy model to predict the uni-axial compressive strength and the modulus of elasticity of aproblematic rockrdquo Engineering Applications of Artificial Intelli-gence vol 17 no 1 pp 61ndash72 2004

[22] J Zhang J Li Y Hu and J Zhou ldquoThe identification methodof igneous rock lithology based on data mining technologyrdquoApplied Mechanics and Materials vol 466-467 pp 65ndash69 2012

[23] J Cao J Yang YWang DWang and Y Shi ldquoExtreme learningmachine for reservoir parameter estimation in heterogeneoussandstone reservoirrdquo Mathematical Problems in Engineeringvol 2015 Article ID 287816 10 pages 2015

[24] E Khamehchi I R Kivi and M Akbari ldquoA novel approach tosand production prediction using artificial intelligencerdquo Journalof Petroleum Science and Engineering vol 123 pp 147ndash154 2014

[25] K Amar and A Ibrahim ldquoRate of penetration prediction andoptimization using advances in artificial neural networks acomparative studyrdquo in Proceedings of the 4th International Joint

Conference on Computational Intelligence (IJCCI rsquo12) pp 647ndash652 Barcelona Spain October 2012

[26] D Moran H Ibrahim A Purwanto and J Osmond ldquoSophis-ticated ROP prediction technology based on neural networkdelivers accurate resultsrdquo in Proceedings of the IADCSPE AsiaPacific Drilling Technology Conference and Exhibition SPE132010 pp 100ndash108 Ho Chi Minh City Vietnam November2010

[27] M Monazami A Hashemi and M Shahbazian ldquoDrilling rateof penetration prediction using artificial nerual network a casestudy of one of Iranian southern oil fieldsrdquo Journal of Oil andGas Business no 6 pp 21ndash31 2012

[28] G-B Huang Q-Y Zhu and C-K Siew ldquoExtreme learningmachine theory and applicationsrdquoNeurocomputing vol 70 no1ndash3 pp 489ndash501 2006

[29] X Liu Y Chang H Lu et al ldquoOptimizing fracture stimulationin low-permeability oil reservoirs in the ordos basinrdquo inProceedings of the SPE Asia Pacific Oil and Gas Conference andExhibition Paper SPE158562 Perth Australia 2012

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

12 Mathematical Problems in Engineering

020406080

100120140160180200

Tim

e (s)

10 20 30 40 50 60 70 80 90 100 1100Number of hidden layers

ANN modelELM model

USA model

Figure 6 Time cost with the change of number of hidden layers

ANN modelELM model

USA modelMeasured

25

50

75

100

ROP

(fth

)

8650 8700 8750 88008600Depth (ft)

Figure 7 The predicted and measured ROP along the depth withdeveloped models

ANN modelELM model

USA model

minus10

minus5

0

5

10

Resid

ual e

rror

of R

OP

Figure 8 Residual errors of predicted ROP by developed models

choices according to accuracy and computationaldemand in practical use

Competing Interests

The authors declare that there are no competing interestsregarding the publication of this paper

Acknowledgments

This work was financially supported by the National KeyBasic Research Program of China (2015CB251206) Fun-damental Research Funds for the Central Universities(nos 15CX05053A and 15CX08011A and no 15CX02009A)Qingdao Science and Technology Project (no 15-9-1-55-jch) National Natural Science Foundation of China (no61305075) the Natural Science Foundation of ShandongProvince (no ZR2013FQ004) and the Specialized ResearchFund for theDoctoral Program ofHigher Education of China(no 20130133120014)

References

[1] V P Perrin M G Wilmot and W L Alexander ldquoDrillingindexmdasha new approach to bit performance evaluationrdquo inProceedings of the SPEIADC Drilling Conference Paper SPE37595 pp 199ndash205 Amsterdam The Netherlands March 1997

[2] M Bataee and S Mohseni ldquoApplication of artificial intelligentsystems in ROP optimization a case study in Shadegan oilfieldrdquo in Proceedings of the SPEMiddle East Unconventional GasConference and Exhibition (UGAS rsquo11) pp 13ndash22 February 2011

[3] M H Bahari A Bahari F N Moharrami and M B N SistanildquoDetermining Bourgoyne and Young model coefficients usinggenetic algorithm to predict drilling raterdquo Journal of AppliedSciences vol 8 no 17 pp 3050ndash3054 2008

[4] D F HowarthW R Adamson and J R Berndt ldquoCorrelation ofmodel tunnel boring and drilling machine performances withrock propertiesrdquo International Journal of Rock Mechanics andMining Sciences vol 23 no 2 pp 171ndash175 1986

[5] S Kahraman ldquoCorrelation of TBM and drilling machineperformances with rock brittlenessrdquo Engineering Geology vol65 no 4 pp 269ndash283 2002

[6] S Akin and C Karpuz ldquoEstimating drilling parameters for dia-mond bit drilling operations using artificial neural networksrdquoInternational Journal of Geomechanics vol 8 no 1 pp 68ndash732008

[7] V Uboldi L Civolani and F Zausa ldquoRock strength measure-ments on cuttings as input data for optimizing drill bit selec-tionrdquo in Proceedings of the SPEAnnual Technical Conference andExhibition Paper SPE 56441 pp 35ndash43 Houston Tex USAOctober 1999

[8] M J Fear ldquoHow to improve rate of penetration in field opera-tionsrdquo in Proceedings of the SPEIADC Drilling ConferencePaper SPE 55050 New Orleans La USA 1998

[9] R Rommetveit K S Bjoslashrkevoll G W Halsey et al ldquoDrilltron-ics an integrated system for real-time optimization of the drill-ing processrdquo in Proceedings of the IADCSPE Drilling Confer-ence vol 87124 pp 275ndash282 SPE Dallas Tex USA March2004

Mathematical Problems in Engineering 13

[10] A H Paiaman M K Ghassem Al-Askari B Salmani B D Al-Anazi andMMasihi ldquoEffect of drilling fluid properties on rateof Penetrationrdquo NAFTA vol 60 no 3 pp 129ndash134 2009

[11] A Bahari and A B Seyed ldquoTrust region approach to findconstants of Bourgoyne and Young penetration rate model inKhangiran Iranian gas fieldrdquo in Proceedings of the SPE LatinAmerican and Caribean Petroleum Engineering ConferencePaper SPE 107520 Buenos Aires Argentina 2007

[12] H I Bilgesu L T Tetrick U Altmis Sh Mohaghegh andS Ameri ldquoA new approach for the prediction of penetration(ROP) valuesrdquo in Proceedings of the SPE Eastern RegionalMeeting Paper SPE 39231 Loaington Kentucky 1997

[13] R Jahanbakhshi R Keshavarzi and A Jafarnezhad ldquoReal-timeprediction of rate of penetration during drilling operation in oiland gas wellsrdquo in Proceedings of the 46th US Rock MechanicsGeomechanics Symposium pp 2390ndash2396 Chicago Ill USAJune 2012

[14] S Yilmaz C Demircioglu and S Akin ldquoApplication of artificialneural networks to optimum bit selectionrdquo Computers andGeosciences vol 28 no 2 pp 261ndash269 2002

[15] H Rahimzadeh M Mostofi A Hashemi and K SalahshoorldquoComparison of the penetration rate models using field data forone of the gas fields in Persian Gulf Areardquo in Proceedings of theInternational Oil and Gas Conference and Exhibition Paper SPE131253 Beijing China 2010

[16] G-B Huang D H Wang and Y Lan ldquoExtreme learningmachines a surveyrdquo International Journal of Machine Learningand Cybernetics vol 2 no 2 pp 107ndash122 2011

[17] G-B Huang H Zhou X Ding and R Zhang ldquoExtremelearning machine for regression and multiclass classificationrdquoIEEE Transactions on Systems Man and Cybernetics Part BCybernetics vol 42 no 2 pp 513ndash529 2012

[18] A Mozaffari and N L Azad ldquoOptimally pruned extreme learn-ing machine with ensemble of regularization techniques andnegative correlation penalty applied to automotive engine cold-start hydrocarbon emission identificationrdquo Neurocomputingvol 131 pp 143ndash156 2014

[19] R Moreno F Corona A Lendasse M Grana and L SGalvao ldquoExtreme learning machines for soybean classificationin remote sensing hyperspectral imagesrdquo Neurocomputing vol128 pp 207ndash216 2014

[20] D Yu and D Li ldquoEfficient and effective algorithms for trainingsingle-hidden-layer neural networksrdquo Pattern Recognition Let-ters vol 33 no 5 pp 554ndash558 2012

[21] C Gokceoglu and K Zorlu ldquoA fuzzy model to predict the uni-axial compressive strength and the modulus of elasticity of aproblematic rockrdquo Engineering Applications of Artificial Intelli-gence vol 17 no 1 pp 61ndash72 2004

[22] J Zhang J Li Y Hu and J Zhou ldquoThe identification methodof igneous rock lithology based on data mining technologyrdquoApplied Mechanics and Materials vol 466-467 pp 65ndash69 2012

[23] J Cao J Yang YWang DWang and Y Shi ldquoExtreme learningmachine for reservoir parameter estimation in heterogeneoussandstone reservoirrdquo Mathematical Problems in Engineeringvol 2015 Article ID 287816 10 pages 2015

[24] E Khamehchi I R Kivi and M Akbari ldquoA novel approach tosand production prediction using artificial intelligencerdquo Journalof Petroleum Science and Engineering vol 123 pp 147ndash154 2014

[25] K Amar and A Ibrahim ldquoRate of penetration prediction andoptimization using advances in artificial neural networks acomparative studyrdquo in Proceedings of the 4th International Joint

Conference on Computational Intelligence (IJCCI rsquo12) pp 647ndash652 Barcelona Spain October 2012

[26] D Moran H Ibrahim A Purwanto and J Osmond ldquoSophis-ticated ROP prediction technology based on neural networkdelivers accurate resultsrdquo in Proceedings of the IADCSPE AsiaPacific Drilling Technology Conference and Exhibition SPE132010 pp 100ndash108 Ho Chi Minh City Vietnam November2010

[27] M Monazami A Hashemi and M Shahbazian ldquoDrilling rateof penetration prediction using artificial nerual network a casestudy of one of Iranian southern oil fieldsrdquo Journal of Oil andGas Business no 6 pp 21ndash31 2012

[28] G-B Huang Q-Y Zhu and C-K Siew ldquoExtreme learningmachine theory and applicationsrdquoNeurocomputing vol 70 no1ndash3 pp 489ndash501 2006

[29] X Liu Y Chang H Lu et al ldquoOptimizing fracture stimulationin low-permeability oil reservoirs in the ordos basinrdquo inProceedings of the SPE Asia Pacific Oil and Gas Conference andExhibition Paper SPE158562 Perth Australia 2012

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Mathematical Problems in Engineering 13

[10] A. H. Paiaman, M. K. Ghassem Al-Askari, B. Salmani, B. D. Al-Anazi, and M. Masihi, "Effect of drilling fluid properties on rate of penetration," NAFTA, vol. 60, no. 3, pp. 129–134, 2009.

[11] A. Bahari and A. B. Seyed, "Trust region approach to find constants of Bourgoyne and Young penetration rate model in Khangiran Iranian gas field," in Proceedings of the SPE Latin American and Caribbean Petroleum Engineering Conference, Paper SPE 107520, Buenos Aires, Argentina, 2007.

[12] H. I. Bilgesu, L. T. Tetrick, U. Altmis, Sh. Mohaghegh, and S. Ameri, "A new approach for the prediction of rate of penetration (ROP) values," in Proceedings of the SPE Eastern Regional Meeting, Paper SPE 39231, Lexington, Kentucky, 1997.

[13] R. Jahanbakhshi, R. Keshavarzi, and A. Jafarnezhad, "Real-time prediction of rate of penetration during drilling operation in oil and gas wells," in Proceedings of the 46th US Rock Mechanics/Geomechanics Symposium, pp. 2390–2396, Chicago, Ill, USA, June 2012.

[14] S. Yilmaz, C. Demircioglu, and S. Akin, "Application of artificial neural networks to optimum bit selection," Computers and Geosciences, vol. 28, no. 2, pp. 261–269, 2002.

[15] H. Rahimzadeh, M. Mostofi, A. Hashemi, and K. Salahshoor, "Comparison of the penetration rate models using field data for one of the gas fields in Persian Gulf Area," in Proceedings of the International Oil and Gas Conference and Exhibition, Paper SPE 131253, Beijing, China, 2010.

[16] G.-B. Huang, D. H. Wang, and Y. Lan, "Extreme learning machines: a survey," International Journal of Machine Learning and Cybernetics, vol. 2, no. 2, pp. 107–122, 2011.

[17] G.-B. Huang, H. Zhou, X. Ding, and R. Zhang, "Extreme learning machine for regression and multiclass classification," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 2, pp. 513–529, 2012.

[18] A. Mozaffari and N. L. Azad, "Optimally pruned extreme learning machine with ensemble of regularization techniques and negative correlation penalty applied to automotive engine cold-start hydrocarbon emission identification," Neurocomputing, vol. 131, pp. 143–156, 2014.

[19] R. Moreno, F. Corona, A. Lendasse, M. Grana, and L. S. Galvao, "Extreme learning machines for soybean classification in remote sensing hyperspectral images," Neurocomputing, vol. 128, pp. 207–216, 2014.

[20] D. Yu and D. Li, "Efficient and effective algorithms for training single-hidden-layer neural networks," Pattern Recognition Letters, vol. 33, no. 5, pp. 554–558, 2012.

[21] C. Gokceoglu and K. Zorlu, "A fuzzy model to predict the uniaxial compressive strength and the modulus of elasticity of a problematic rock," Engineering Applications of Artificial Intelligence, vol. 17, no. 1, pp. 61–72, 2004.

[22] J. Zhang, J. Li, Y. Hu, and J. Zhou, "The identification method of igneous rock lithology based on data mining technology," Applied Mechanics and Materials, vol. 466-467, pp. 65–69, 2012.

[23] J. Cao, J. Yang, Y. Wang, D. Wang, and Y. Shi, "Extreme learning machine for reservoir parameter estimation in heterogeneous sandstone reservoir," Mathematical Problems in Engineering, vol. 2015, Article ID 287816, 10 pages, 2015.

[24] E. Khamehchi, I. R. Kivi, and M. Akbari, "A novel approach to sand production prediction using artificial intelligence," Journal of Petroleum Science and Engineering, vol. 123, pp. 147–154, 2014.

[25] K. Amar and A. Ibrahim, "Rate of penetration prediction and optimization using advances in artificial neural networks: a comparative study," in Proceedings of the 4th International Joint Conference on Computational Intelligence (IJCCI '12), pp. 647–652, Barcelona, Spain, October 2012.

[26] D. Moran, H. Ibrahim, A. Purwanto, and J. Osmond, "Sophisticated ROP prediction technology based on neural network delivers accurate results," in Proceedings of the IADC/SPE Asia Pacific Drilling Technology Conference and Exhibition, SPE 132010, pp. 100–108, Ho Chi Minh City, Vietnam, November 2010.

[27] M. Monazami, A. Hashemi, and M. Shahbazian, "Drilling rate of penetration prediction using artificial neural network: a case study of one of Iranian southern oil fields," Journal of Oil and Gas Business, no. 6, pp. 21–31, 2012.

[28] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: theory and applications," Neurocomputing, vol. 70, no. 1–3, pp. 489–501, 2006.

[29] X. Liu, Y. Chang, H. Lu et al., "Optimizing fracture stimulation in low-permeability oil reservoirs in the Ordos Basin," in Proceedings of the SPE Asia Pacific Oil and Gas Conference and Exhibition, Paper SPE 158562, Perth, Australia, 2012.
