


CUDA Implementation of Matrix-Free Finite Element Operators

Steven Roberts¹ and Jean-Sylvain Camier²

Abstract: libCEED is an open-source library developed by the Center for Efficient Exascale Discretizations (CEED) for the creation and evaluation of finite element operators. Instead of using a sparse matrix to represent these operators, it uses a technique called partial assembly to perform the evaluations on the fly, which is often significantly more efficient. In order for libCEED to support future exascale simulations, support for a wide range of computing platforms is needed. This summer, I developed a CUDA backend for libCEED which runs on NVIDIA GPUs. The project involved adapting the existing framework to better accommodate GPU computing and optimizing kernels for finite element operations. The new CUDA backend shows a significant speedup over other backends.

libCEED Goal: Create a lightweight C library for finite element operations that can be used at exascale

Motivation: Partial differential equations are the mathematical backbone of many physical simulations, and the finite element method is an effective and increasingly popular method for solving them

Techniques: Using improved representations

−∇²u + σu = f  ⟹  K_e u_e + σ M_e u_e = f_e

• Sparse matrices are a suboptimal method of representing finite element operators
• Use partial assembly to algebraically decompose operators
• Optimal memory and near-optimal floating point operations

[Bar chart: time (s) to perform an L² projection at polynomial orders 2, 4, and 8 for the CUDA, CPU, and OCCA (CUDA mode) backends. The CUDA backend is the fastest at every order; times across all backends range from roughly 0.1 s to 15.9 s.]

Background

1. Virginia Tech, Department of Computer Science
2. Lawrence Livermore National Laboratory, Center for Applied Scientific Computing

Methodology

Project Goal: Develop an optimized CUDA backend for libCEED

Motivation:
• Operations are easy to parallelize since each element in a mesh can be processed independently
• Linear algebra operations (tensor contractions, inner products, etc.) are well-suited to GPUs
• More specialized optimizations than the OCCA and Magma backends, which also support GPUs

1. Directly convert CPU backend into CUDA backend
• Most interfaces force serial execution
• Many expensive memory transfers to and from the GPU required
• Poor GPU utilization and performance

2. Update interface to support "vectorization"
• Better parallelism and fewer memory transfers
• Basis kernel bottlenecked by global memory access

3. Improve memory access
• Remove unnecessary memory transfers
• Utilize shared memory to reduce global memory access

4. Update tests and examples
• Find and fix bugs
• Ensure accuracy
• Use examples for profiling

Results
• Performance tests were run on an NVIDIA Tesla P100-SXM2-16GB on Ray
• In all examples, the CUDA backend significantly outperforms all other backends and scales well at high orders
• Performance improvements would increase further if linear solves could be completed on the GPU
• Profiling reveals high compute utilization but room for improvement in terms of memory bandwidth efficiency

Acknowledgements

This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

[Bar chart: speedup over the reference backend when solving the Poisson equation: 6.44× at order 2, 4.51× at order 4, 5.42× at order 6, and 6.37× at order 8.]

I would also like to thank Tzanio Kolev, Jeremy Thompson, Jed Brown, and the other developers of libCEED for their guidance in creating the CUDA backend and integrating it into the project.

libCEED Backends | Supported Hardware
✓ Reference | CPUs
✓ OCCA | CPUs and GPUs
✓ Magma | CPUs and GPUs
✘ CUDA (this work) | NVIDIA GPUs

• The code is available at github.com/CEED/libCEED
• See ceed.exascaleproject.org/libceed for more information

Using libCEED

LLNL-POST-755562