A Reservoir Simulation Uncertainty Modelling Workflow Tool

Brennan Williams

Geo Visual Systems Ltd


Contents

1. Introduction

2. Simulation model overview

3. What are the problems?

4. What do we need?

5. The tool

6. Case study

7. What comes next?

1. Introduction

• describe the ongoing development of a software tool for uncertainty modelling and history matching in the reservoir simulation discipline of the oil and gas industry.

• brief overview of simulation models over the last decade

• what are the problems? data management etc.

• What do we need? aims of the tool – run manager, uncertainty modelling, data analysis…

• case study – uncertainty modelling example

• what comes next? history matching, support for other simulators…

2. Simulation Model Overview

computer model used to predict the flow of fluids (typically oil, water, and gas) through porous media

dominated by finite difference simulators – Eclipse, VIP

some finite element and streamline simulators

Output

• grid geometry

• initial grid property data – one value per cell in the grid – porosity, permeability etc.

• recurrent grid property data – one value per cell in the grid for each report timestep – pressure, oil/water/gas saturations

• plot vectors – e.g. production rate for each well for each plot timestep

2. Simulation Model Overview…

Then….

• built a single (incorrect) simulation model to describe the reservoir.

• manually history match this single model

• use the matched model in a series of prediction runs to compare different field

production scenarios.

Now….

• Build multiple models to gauge uncertainty

• history match multiple models

• prediction runs using multiple history matched models

2. Simulation Model Overview…

1992: small model, 13,000 cells, 15-year simulation, 300 plot vectors, 20MB

2. Simulation Model Overview…

1998: medium model, 200,000 cells, 150 wells, 2,000 plot vectors, 70MB

2. Simulation Model Overview…

1998: small model, 60,000 cells, 600 wells, 18,000 plot vectors, 127MB

2. Simulation Model Overview…

2002: very large model, 3,000,000 cells, 800MB

2. Simulation Model Overview…

2003: medium model with coarsening and local grid refinement, 120,000 cells, 200MB

2. Simulation Model Overview…

2006: large model with nested LGR, 300,000+ cells, 12,000 plot vectors, 1.3GB

2. Simulation Model Overview…

2006: large model, 300,000 cells, 300+ wells, 360,000 plot vectors, 1.8GB

3. What are the problems?

Single Model

• only one representation of an unlimited number of possible models that match the known history reasonably well

• no understanding of uncertainty in a single model

3. What are the problems? …

Multiple Models….

• Models are getting bigger

• file size issues

• runtime issues

• Data analysis

• How do we model uncertainty in as few simulation runs as possible?

3. What are the problems? …

• Model size of 200,000 cells

• 40 wells

• 2,000 plot vectors

• 500MB output files per run

• 9 uncertainty variables with 3 values each (low, mid, high) = 3^9 ≈ 20,000 runs, 40,000 hours (4 years) runtime @ 2 hours per run and 10TB data

• 2×9+1 = 19 Tornado runs = 38 hours runtime, 10GB data
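The run-count arithmetic above can be checked in a few lines; a minimal Python sketch:

```python
# Full factorial over 9 variables with 3 values each, vs a Tornado
# (one-at-a-time) design of 2*n + 1 runs, at 2 hours per run.
n_vars, n_values, hours_per_run = 9, 3, 2

full_factorial = n_values ** n_vars      # 3^9 = 19,683 (~20,000 runs)
tornado = 2 * n_vars + 1                 # 19 runs

assert full_factorial == 19683
assert tornado == 19
assert tornado * hours_per_run == 38     # hours, as quoted above
```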

4. What do we need?

• multiple models to model uncertainty

• Engineer/user designed workflow - capture steps in the simulation study process

• select/define independent/control variables

• Choice of algorithms to use to change our control variables

• run/deck generation and submission – i.e. a run manager

• data analysis tools

• assisted history matching

• simulator independent

5. The tool - rezen

Phase 1 - Run Manager ‘open box’

Phase 2 - Data Analysis

Phase 3 - Uncertainty Modelling

Phase 4 - History Matching…ongoing

5. The tool… data hierarchy… terminology

• ensemble : a set of related simulation decks varying around a core model

• deck : an individual simulator dataset or run (both input & output files)

• variable : a simulation parameter whose uncertainty or sensitivity we wish to investigate

• deck vector : plot vector imported from simulation output files (e.g. FOPT)

• ensemble vector : set of related deck vectors, one for each deck

5. The tool… run manager

• Run Manager

– manage multiple reservoir simulation runs

– supports Eclipse

– manage the submission of decks to the simulator queues

– provides convenience tools such as

• scan simulation input & output files (including binary)

• conversion tools (binary to text)

• built-in ‘diff’ between related .DATA files
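A ‘diff’ between related .DATA files can be approximated with Python's standard difflib; a minimal sketch, with the deck fragments entirely made up for illustration:

```python
import difflib

# Compare two related simulator input decks line by line and return a
# unified diff, similar in spirit to the run manager's built-in 'diff'.
def diff_decks(text_a: str, text_b: str,
               name_a: str = "BASE.DATA", name_b: str = "CASE.DATA") -> str:
    return "".join(difflib.unified_diff(
        text_a.splitlines(keepends=True),
        text_b.splitlines(keepends=True),
        fromfile=name_a, tofile=name_b))

base = "RUNSPEC\nDIMENS\n 70 100 4 /\n"
case = "RUNSPEC\nDIMENS\n 70 100 8 /\n"
print(diff_decks(base, case))
```

Lines removed from the base deck are prefixed with `-` and lines added in the variant deck with `+`, so the changed DIMENS record shows up as one `-`/`+` pair.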

5. The tool… uncertainty modelling

• Uncertainty Modelling

– supports the engineer/user designed uncertainty modelling workflow

– select uncertainty modelling algorithm, define ensemble variables and build

an ensemble control file with directives

– generate simulation decks and submit decks to the simulator queues

– built-in ‘diff’ utility between related decks

– plots of objective function vs generated ensemble variable values

5. The tool… uncertainty modelling…

• Uncertainty modelling algorithms

– Discrete cases e.g. create n runs by specifying n values of each variable.

– Combination e.g. create n1*n2*… runs by specifying ni values for the i’th variable

– Tornado method or one-at-a-time uncertainty analysis. Requires 3 discrete values (low, mid, high) of each variable.

– Monte-Carlo simulation. User can specify continuous or discrete

distributions which are randomly sampled for each run. User specifies the

number of runs to do.

– Plackett-Burman experimental design. Requires "+1" and "−1" values of each variable, but it's usually a good idea to test for curvature, so run an all-zero case too.
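The enumeration behind the Combination, Tornado and Monte-Carlo algorithms above can be sketched in a few lines of Python (variable names and values are illustrative, not the tool's actual API):

```python
import itertools
import random

# Per-variable low/mid/high values (illustrative).
levels = {"aqu_size": [0.000001, 1, 2], "res_cont": [1, 2, 3]}

# Combination: one run per element of the Cartesian product (n1*n2*... runs).
combination_runs = [dict(zip(levels, values))
                    for values in itertools.product(*levels.values())]
assert len(combination_runs) == 3 * 3

# Tornado (one-at-a-time): the mid-value base case plus one run per variable
# per off-mid value, i.e. 2*n + 1 runs in total.
base = {v: vals[1] for v, vals in levels.items()}
tornado_runs = [base] + [dict(base, **{v: vals[i]})
                         for v, vals in levels.items() for i in (0, 2)]
assert len(tornado_runs) == 2 * len(levels) + 1

# Monte-Carlo: every run draws each variable from a user-specified
# distribution (here a continuous uniform between low and high).
random.seed(0)
mc_runs = [{v: random.uniform(vals[0], vals[-1]) for v, vals in levels.items()}
           for _ in range(250)]
assert len(mc_runs) == 250
```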

5. The tool… uncertainty modelling…

Multiple Scenario Approach

• need to identify most significant reservoir uncertainties and ranges

• varying one-parameter-at-a-time (Tornado) is a starting point…

• then a reliable method for combining parameters in an efficient set of

simulations is needed (experimental design)…

• and then the ability to use statistics to gauge the impact of the results

5. The tool… data analysis

• Data Analysis

– import output vector data from simulator output files (for all decks)

– partially filter the data that is imported

– plot deck vectors (i.e. vectors for a specific deck)

– plot simulated with history (e.g. FOPT & FOPTH)

– plot ensemble vector against deck number or against ensemble variable

– simple statistics on individual vectors

5. The tool… workflow description

1. create ensemble

2. define core model

3. define variables & generate range of values

4. generate & submit decks, import output data

5. plot vectors & analyze results

• differences between ensembles and/or decks may be:

– subsurface unknowns e.g. geological realizations etc.

– development scenarios e.g. infill location, water injection start date

– numerical issues e.g. model sizes, computational parameters etc.

5. The tool… ensemble vector plot 1

5. The tool… ensemble vector plot 2

5. The tool… ensemble vector plot 3

5. The tool… ensemble vector table

5. The tool… ensemble vector plot vs deck

5. The tool… ensemble vector plot vs ensemble variable

5. The tool… ensemble variable plot

5. The tool… ensemble variable table

6. Case Study - Infill Well

• Infill well vs no infill well

• 11 different geological realisations (perm & poro distribution)

• Want to identify what model variables are most significant

• Tornado algorithm on 9 variables to reduce to 4 variables

• Combine algorithm on 4 variables

• Allocate probabilities to variable values to generate an S-curve

• Select a number of models to use in future runs

6. Case Study … Infill Well Project Workflow – Task Workflow

[Workflow diagram]

Stage 1 – Identify Key Variables and Valid Cases: create base case → visually inspect → define tornado values → run hmatch cases → inspect hmatch → calculate incrementals → identify most significant variables (repeat for prediction case)

Stage 2 – In Depth Analysis of Key Variables: copy base case → define combined values → run hmatch cases → visually inspect hmatch → generate S-curve → determine “P90-50-10” (repeat for prediction case)

6. Case Study … Infill Well Project Workflow – Stage 1

[Workflow diagram repeated with Stage 1 – Identify Key Variables and Valid Cases – highlighted: create base case → visually inspect → define tornado values → run hmatch cases → inspect hmatch → calculate incrementals → identify most significant variables (repeat for prediction case)]

6. Case Study … Setup Ensemble

Create ensemble variables and assign low-mid-high values to each

Variables will be used in tornado analysis to identify the “big hitters”.

6. Case Study … Setup Ensemble – Directives

e.g. {formula 1-$residual_gas}

– enclosed in curly braces

– contain one statement, or multiple statements separated by semi-colons


– a statement is a command word optionally followed by arguments

– variables in the argument must be preceded by a $

• value directive MULTIPLY

‘PERMX’ {value $highperm_leman} 1 70 1 100 4 4 /

/

• formula directive MAXVALUE

‘SWL’ {formula 0.999-$residual_gas} 1 70 100 1 98 /

/

6. Case Study … Setup Ensemble – Edit the ensemble control file

{value $residual_gas}

insert directives – instructions for Rezen about how to use the variables
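The directive syntax above (curly braces, `$`-prefixed variables, `value`/`formula` statements) could be handled with a small substitution pass when decks are generated. A minimal sketch, with the parsing rules inferred from the slides and every name purely illustrative of the idea, not Rezen's actual implementation:

```python
import re

# Replace {value $var} and {formula <expr with $vars>} directives in a deck
# template with concrete numbers for one generated run.
def expand_directives(template: str, values: dict) -> str:
    def repl(match: re.Match) -> str:
        statement = match.group(1).strip()
        cmd, _, arg = statement.partition(" ")
        # substitute $variables with their numeric values
        expr = re.sub(r"\$(\w+)", lambda m: repr(values[m.group(1)]), arg)
        if cmd == "value":
            return expr                    # plain substitution
        if cmd == "formula":
            return str(eval(expr))         # evaluate the arithmetic expression
        raise ValueError(f"unknown directive: {cmd}")
    return re.sub(r"\{([^}]*)\}", repl, template)

line = "'SWL' {formula 1-$residual_gas} 1 70 100 1 98 /"
print(expand_directives(line, {"residual_gas": 0.25}))
# → 'SWL' 0.75 1 70 100 1 98 /
```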

6. Case Study … Generate Decks

Decks created. Each deck corresponds to one .DATA simulator input file.

6. Case Study … Setup Ensemble Vectors

Imported simulation output files

6. Case Study … Setup Ensemble Vectors – View deck vectors

6. Case Study … Setup Ensemble Vectors – View ensemble vectors

Ensemble vector is a collection of related deck vectors

Time Plot

6. Case Study … Setup Ensemble Vectors – View ensemble vectors

Tornado Plot

6. Case Study … Visual History Match

Identify cases that don’t history match FGPR

Left mouse button to drag graph

Right mouse button to drag legend

Press z in plot to zoom

Deck    Case
D0001   aquifer size = 0
D0002   aquifer strength = 0
D0003   reservoir cont., polygons = A+B only
D0004   carboniferous leman transmissibility = 0
D0006   facies proportion = 30:70

6. Case Study … Repeat for infill well

Repeat same process as before:

Data check ensemble control file

Generate decks

Submit decks

Create ensemble vectors

6. Case Study … Identify most significant variables

R2 FGPT

R2P1 FGPT

These are only results from R2. In reality we need to consider all 11 realisations together.

R2P1 FGPT_F – R2 FGPT_F

6. Case Study … Infill Well Project Workflow – Stage 2

[Workflow diagram repeated with Stage 2 – In Depth Analysis of Key Variables – highlighted: copy base case → define combined values → run hmatch cases → visually inspect hmatch → generate S-curve → determine “P90-50-10” (repeat for prediction case)]

6. Case Study … Stage 2 – In depth analysis of key variables

Key variables were found to be:

Reservoir continuity res_cont

Aquifer Size aqu_size

Carboniferous facies proportion carb_facies

High perm streak in Leman highperm_leman (client suggestion)

Aquifer strength was also significant but it is directly related to aquifer size, so it was discarded from further analysis.

Full factorial analysis of the 4 variables = 3^4 = 81 cases/realisation

But, from history match in stage 1 aqu_size downside, res_cont downside and carb_facies downside can be discarded.

So, full factorial analysis of the 4 variables = 2×2×2×3 = 24 cases/realisation.

6. Case Study … Stage 2 – Load output files and import vectors

6. Case Study … Generating S-Curve

Ensemble Variable       low        prob   mid   prob   high   prob
Reservoir Continuity    1          0.30   2     0.40   3      0.30
Aquifer Size            0.000001   0.25   1     0.50   2      0.25
Facies proportion       L          0.30   M     0.35   H      0.35
High Perm Streak        2          0.20   1     0.40   1      0.40
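Given the probability table above, an S-curve can be built by enumerating every combination of values, weighting each combined case by the product of its value probabilities, and accumulating probability in outcome order. A minimal sketch (the outcome here is a placeholder level sum; in the real workflow it would be the simulated cumulative oil per well from each run):

```python
import itertools

# (value, probability) per ensemble variable, from the table above.
variables = {
    "res_cont":    [(1, 0.30), (2, 0.40), (3, 0.30)],
    "aqu_size":    [(0.000001, 0.25), (1, 0.50), (2, 0.25)],
    "carb_facies": [("L", 0.30), ("M", 0.35), ("H", 0.35)],
    "highperm":    [(2, 0.20), (1, 0.40), (1, 0.40)],
}

# Each combined case is weighted by the product of its value probabilities.
cases = []
for combo in itertools.product(*(enumerate(v) for v in variables.values())):
    prob, outcome = 1.0, 0
    for level, (_value, p) in combo:
        prob *= p
        outcome += level   # placeholder outcome, not a real simulation result
    cases.append((outcome, prob))

assert abs(sum(p for _, p in cases) - 1.0) < 1e-9  # probabilities sum to 1

# Sorting by outcome and accumulating probability yields the S-curve points.
cases.sort()
s_curve, cum = [], 0.0
for outcome, prob in cases:
    cum += prob
    s_curve.append((outcome, cum))
assert abs(s_curve[-1][1] - 1.0) < 1e-9
```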

6. Case Study … Generating S-Curve

[S-curve plot: Cumulative Probability (y-axis, 0–1) vs Cumulative Oil per Well (MMstb, x-axis, 2–8), comparing MonteCarlo (250 Runs), ED + Tornado (23 Runs) and Plackett-Burman ED (10 Runs)]

7. What comes next?

• run manager

– support for additional simulators

– integration with load balancers

• data analysis

– s-curve generation & display

– response surface plots

– user defined objective/goodness-of-fit functions

• history matching

– user defined history match variables, ranges, objective functions

– optimisation algorithms

7. What comes next?...data analysis…response surface

7. What comes next?...data analysis … objective function

For each well j (of nwells) and report time t (of ntimes), with s_t the simulated value, h_t the historical value and w the weights:

Σ_{j=1..nwells} w_j · Σ_{t=1..ntimes} w_t · |s_t − h_t|^n

Σ_{j=1..nwells} w_j · Σ_{t=1..ntimes} |w_t · h_t|^n
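A direct reading of the first objective function above, sketched in Python (the interpretation of s as simulated and h as historical values is assumed from the history-matching context):

```python
# Objective function: sum over wells of w_j times the sum over times of
# w_t * |s_t - h_t|**n, following the first formula above.
def objective(wells, well_weights, time_weights, n=2):
    """wells: list of (simulated, history) series, one pair per well."""
    total = 0.0
    for w_j, (sim, hist) in zip(well_weights, wells):
        total += w_j * sum(w_t * abs(s - h) ** n
                           for w_t, s, h in zip(time_weights, sim, hist))
    return total

# toy example: two wells, three report times, all weights equal to 1
wells = [([10.0, 12.0, 14.0], [10.0, 11.0, 15.0]),
         ([5.0, 5.5, 6.0], [5.0, 5.5, 6.0])]      # second well matches exactly
print(objective(wells, [1.0, 1.0], [1.0, 1.0, 1.0]))  # → 2.0
```

A perfectly matched model scores zero, and larger mismatches are penalised more heavily as the exponent n grows.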

7. What comes next?...history matching…

• no history match is unique

• aim is to get a model with good predictive capability

• define and implement workflow

• goodness of fit/ objective functions

• selecting history match variables & defining value ranges

• defining algorithms for adjusting the history match variables in the model to

improve the match