
CFD Modeling of a Multiphase Gravity Separator Vessel

Gautham Narayan*, Rooh Khurram*, Ehab Elsaadawy†

King Abdullah University of Science and Technology*; Saudi Aramco R&DC†

Abstract

The poster highlights a CFD study that combines an Eulerian multi-fluid multiphase model with a Population Balance Model (PBM) to study the flow inside a typical multiphase gravity separator vessel (GSV) found in the oil and gas industry. The simulations were performed using the Ansys Fluent CFD package running on the KAUST supercomputer, Shaheen. A highlight of a scalability study is also presented, including the effect of I/O bottlenecks and the use of the Hierarchical Data Format (HDF5) for collective and independent parallel reading of the case file. This work is an outcome of a research collaboration on an Aramco project on Shaheen.

Geometry

[Geometry schematic: mass flow inlet, perforated baffle plate, weir, and gas, oil, and water outlets.]

Directional porosity:
• Porosity: 0.3
• Axial resistance coefficient: 1000 m⁻²
• Radial resistance coefficient: 10000 m⁻²
(See the note below on how such coefficients enter the momentum equation.)
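For context on the units above: resistance coefficients of this kind typically enter the momentum equation through a porous-media sink term of the general form

$$ S_i = -\left( \frac{\mu}{\alpha}\, v_i + C_2\, \tfrac{1}{2}\rho \lvert \vec{v} \rvert v_i \right), $$

where $1/\alpha$ is the viscous resistance coefficient (units m⁻²) and $C_2$ the inertial resistance coefficient (m⁻¹); directional porosity assigns different $1/\alpha$ values along the axial and radial directions. Treating the poster's axial and radial coefficients as viscous resistances of exactly this form is an assumption based on their units.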

Inlet mass flow:
• Oil: 1.8×10⁵ bbl (8.4%)
• Water: 5.4×10⁴ bbl (2.5%)
• Gas: 1.9×10⁶ bbl (89%)

GSV dimensions:
• Length: 45.50 m
• Diameter: 4.26 m
• Weir height: 2.00 m

Mesh information (unstructured grid):
• 700,000 tetrahedral elements
• Maximum face size: 0.1 m
• Minimum size: 0.01 m
• Growth rate: 1.2

Multi-Phase Model

• Euler-Euler multiphase model
• Four phases:
  ▪ Oil – primary phase
  ▪ Water – two secondary phases (used by the population balance model)
  ▪ Gas – secondary phase
(A generic form of the per-phase continuity equation is given below.)
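For reference, the Eulerian multi-fluid approach solves conservation equations for each phase; a generic (textbook) form of the per-phase volume-fraction continuity equation, not a statement of the exact closures configured in this study, is

$$ \frac{\partial}{\partial t}\left(\alpha_q \rho_q\right) + \nabla\cdot\left(\alpha_q \rho_q \vec{u}_q\right) = \sum_{p}\left(\dot{m}_{pq} - \dot{m}_{qp}\right), \qquad \sum_q \alpha_q = 1, $$

where $\alpha_q$, $\rho_q$, and $\vec{u}_q$ are the volume fraction, density, and velocity of phase $q$, and $\dot{m}_{pq}$ is the mass transfer rate from phase $p$ to phase $q$. A momentum equation with interphase exchange terms is solved for each phase as well.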

Population Balance Model (PBM):
• PBM solved using the Inhomogeneous Discrete Method (IDM)
• Two water phases of 16 bins each
• Default settings for aggregation and breakage
(A generic form of the population balance equation is given below.)
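For context, the population balance tracks the droplet number density $n(V,t)$; a generic (textbook) form of the equation, of which the inhomogeneous discrete method solves a binned version, is

$$ \frac{\partial n(V,t)}{\partial t} + \nabla\cdot\left(\vec{u}\, n(V,t)\right) = B_{\mathrm{ag}} - D_{\mathrm{ag}} + B_{\mathrm{br}} - D_{\mathrm{br}}, $$

where $B$ and $D$ denote birth and death rates of droplets of volume $V$ due to aggregation (ag) and breakage (br).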

Inlet droplet distribution:
• Log-normal distribution
• Mean diameter: 100 microns
• Standard deviation: 33 microns
(A sketch of binning such a distribution is given below.)
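As a rough illustration of how such an inlet distribution can be discretized into 16 bins for a discrete-method PBM (a minimal Python sketch only; the bin limits and spacing below are assumptions, not the settings used in Fluent):

import numpy as np
from scipy.stats import lognorm

# Inlet droplet size distribution from the poster: log-normal,
# mean diameter 100 microns, standard deviation 33 microns.
mean_d = 100e-6   # m
std_d = 33e-6     # m

# Underlying normal parameters for a log-normal with this
# arithmetic mean and standard deviation.
sigma = np.sqrt(np.log(1.0 + (std_d / mean_d) ** 2))
mu = np.log(mean_d) - 0.5 * sigma ** 2
dist = lognorm(s=sigma, scale=np.exp(mu))

# Assumed bin layout: 16 logarithmically spaced diameter bins covering
# roughly +/- 3 sigma in log space (an assumption, not Fluent's defaults).
edges = np.exp(np.linspace(mu - 3 * sigma, mu + 3 * sigma, 17))
centers = np.sqrt(edges[:-1] * edges[1:])   # geometric bin centers
fractions = np.diff(dist.cdf(edges))
fractions /= fractions.sum()                # renormalize the truncated tails

for d, f in zip(centers, fractions):
    print(f"bin diameter {d * 1e6:7.2f} um   number fraction {f:.4f}")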

[Bar chart: HDF5 case file read performance in Fluent for a 16 GB case file (140 million cell 3D CFD grid), without compression. X-axis: Fluent reading modes (1 - Host, 2 - Node0, 3 - Parallel Independent, 4 - Parallel Collective); y-axis: time (seconds), 0 to 180. Data labels visible on the chart: 166.2, 69, and 42.754 seconds.]

Separator Case - Compute and I/O Performance

Burst Buffer: I/O Speedup

[Bar charts: I/O time (sec, 0 to 50 scale) on the burst buffer (BB) versus Lustre. Left: BB vs. Lustre for Tests 1-3. Right: Lustre vs. BB with 1 node vs. BB with 10 nodes.]

Initial condition (Fluent patch-based initialization):
• Water layer: 1.0 m
• Oil layer: 1.4 m
• Gas layer above the oil
• Weir height: 2.0 m
(A conceptual sketch of the layered patch is given below.)
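As a conceptual sketch of what the layered patch amounts to (illustration only, using the layer depths above; this is not Fluent's actual patch mechanism):

def initial_phase_fractions(height_above_bottom_m: float) -> dict:
    """Initial volume fractions for a point in the separator:
    water up to 1.0 m, oil from 1.0 m to 2.4 m, gas above."""
    water_top = 1.0            # m, water layer thickness
    oil_top = water_top + 1.4  # m, oil layer sits on the water layer
    if height_above_bottom_m < water_top:
        return {"water": 1.0, "oil": 0.0, "gas": 0.0}
    if height_above_bottom_m < oil_top:
        return {"water": 0.0, "oil": 1.0, "gas": 0.0}
    return {"water": 0.0, "oil": 0.0, "gas": 1.0}

# Example: a point 1.5 m above the vessel bottom lies in the oil layer.
print(initial_phase_fractions(1.5))   # {'water': 0.0, 'oil': 1.0, 'gas': 0.0}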

Fluent F1 Race Car test case: 140 million cell mesh; 21 GB output file

[Multiphase flow results figures: contour plots of oil fraction and vector plots of normalized velocity at Time = 0, 0.5, 3, and 30, showing the mixed mass flow inlet, baffle plate, oil-water stratification, and the gas, oil, and water outlets.]


Future Work:

• Fully understand the flow structures around the momentum breaker.
• Conduct parametric studies on the breaker and weir.
• Set up an intuitive graphical workflow for design optimization.
• Continue I/O improvement work in collaboration with ANSYS.

Conclusions:

A GSV model has been developed and preliminary results obtained for oil separation. The overall framework is scalable on Shaheen. For large core counts, I/O becomes a bottleneck. Evaluating various read/write options and burst buffer technology helped identify optimal parameters for a faster I/O rate. The results show that a significant improvement (~4x) in I/O performance can be achieved by using the HDF5 file format for large cases in Fluent. An additional 30% I/O improvement can be obtained by using the burst buffer on Shaheen. A further I/O improvement request will be sent to ANSYS.

The compute part scales very well on Shaheen. The I/O performance numbers were obtained using the HDF5 (Hierarchical Data Format) read/write modes in Fluent v17. The independent mode offers the highest speedup, but I/O does not scale as the core count increases. For large core counts, I/O becomes a bottleneck. KSL is discussing a possible RFE with ANSYS.
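The collective vs. independent distinction above is a property of parallel HDF5 itself; a minimal sketch of the two transfer modes using h5py with MPI (generic HDF5 usage, not Fluent's internal reader; the file and dataset names are placeholders) is:

# Requires h5py built against parallel HDF5, plus mpi4py.
from mpi4py import MPI
import h5py

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

with h5py.File("case.h5", "r", driver="mpio", comm=comm) as f:
    dset = f["cells"]              # hypothetical dataset name
    n = dset.shape[0]
    lo = rank * n // size          # each rank reads its own slice
    hi = (rank + 1) * n // size

    # Independent I/O: each rank issues its read on its own.
    part_independent = dset[lo:hi]

    # Collective I/O: ranks coordinate the read through MPI-IO, which
    # can aggregate many small requests into fewer, larger accesses.
    with dset.collective:
        part_collective = dset[lo:hi]

if rank == 0:
    print(f"{size} ranks read {n} rows in both modes")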

To speed up I/O, performance was also analysed on the burst buffer, an SSD-based file system integrated within the Aries network on Shaheen. A 20-30% I/O improvement is observed, and the results are repeatable (figure on the left). Oversubscription of burst buffer nodes showed modest scalability in I/O (figure on the right). During our tests we discovered (courtesy of KSL scientist Georgios Markomanolis) that HDF5-based parallel I/O does not work on the burst buffer. This issue will also be discussed with ANSYS.
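A minimal sketch of the kind of timing comparison behind these figures (the Lustre path and the DataWarp-style environment variable for the burst-buffer mount are assumptions about the test setup, not details from the poster):

import os
import time

CHUNK = 64 * 1024 * 1024  # read in 64 MiB chunks

def time_read(path: str) -> float:
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(CHUNK):
            pass
    return time.perf_counter() - start

lustre_copy = "/lustre/project/separator/case.cas.h5"   # assumed path
bb_mount = os.environ.get("DW_JOB_STRIPED", "/tmp")     # burst-buffer mount
bb_copy = os.path.join(bb_mount, "case.cas.h5")

for label, path in [("Lustre", lustre_copy), ("Burst buffer", bb_copy)]:
    if os.path.exists(path):
        print(f"{label}: {time_read(path):.1f} s")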

Multiphase Flow Results

Computational Bottleneck

Compute Performance Timer:

150 iterations on 2048 compute cores:
  Average wall-clock time per iteration: 4.407 sec
  Global reductions per iteration: 196 ops
  Global reductions time per iteration: 0.000 sec (0.0%)
  Message count per iteration: 7462618 messages
  Data transfer per iteration: 66972.895 MB
  LE solves per iteration: 25 solves
  LE wall-clock time per iteration: 0.784 sec (17.8%)
  LE global solves per iteration: 2 solves
  LE global wall-clock time per iteration: 0.012 sec (0.3%)
  Total wall-clock time: 661.027 sec

150 iterations on 16384 compute cores:
  Average wall-clock time per iteration: 0.589 sec
  Global reductions per iteration: 196 ops
  Global reductions time per iteration: 0.000 sec (0.0%)
  Message count per iteration: 70030690 messages
  Data transfer per iteration: 155328.436 MB
  LE solves per iteration: 25 solves
  LE wall-clock time per iteration: 0.144 sec (24.4%)
  LE global solves per iteration: 2 solves
  LE global wall-clock time per iteration: 0.021 sec (3.5%)
  Total wall-clock time: 88.359 sec
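From the two timer reports above, the strong-scaling behaviour of the compute part can be estimated directly from the per-iteration wall-clock times (a simple back-of-the-envelope check in Python):

# Strong-scaling estimate from the Fluent performance timers above.
t_2048, t_16384 = 4.407, 0.589     # sec per iteration
core_ratio = 16384 / 2048          # 8x more cores

speedup = t_2048 / t_16384         # ~7.5x
efficiency = speedup / core_ratio  # ~0.94

print(f"speedup:    {speedup:.2f}x")
print(f"efficiency: {efficiency:.0%}")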

Scalability: ANSYS Benchmarks

[Scaling plots for ANSYS benchmark cases: Cavity flow (0.5M nodes), Sedan (4M nodes), and F1 Racecar (140M nodes).]
