Supercharge performance using GPUs in the cloud

Posted on 11-Apr-2017

TRANSCRIPT

Supercharge performance using GPUs in the Cloud
John Barrus, GPU Product Manager

Agenda
- Why GPUs?
- GPUs for Google Compute Engine
- No more HWOps!
- Provision a GPU instance
- Looking ahead: remote workstations for animation and production

Linear Algebra
Example calculation: b_1 = a_11*x_1 + a_12*x_2 + ... + a_1n*x_n, one row of the matrix-vector product b = A*x.
- Multiply each a_ij * x_j in parallel: n^2 parallel threads.
- To calculate each b_i you must gather n results: n parallel threads.

CPU vs. GPU
- Intel Xeon E7-8890 v4 CPU: 24 cores (48 threads); 85 GBps memory bandwidth; 2.2 GHz (3.4 GHz boost).
- NVIDIA K80 GPU (per GPU): 2,496 stream processors; 240 GBps memory bandwidth; 562 MHz (875 MHz boost).
- AMD S9300 x2 GPU (per GPU): 4,096 stream processors; 512 GBps memory bandwidth; 850 MHz; FP16 support for machine learning.
- NVIDIA P100 GPU: 3,584 stream processors; 732 GBps memory bandwidth; 1.13 GHz (1.30 GHz boost); FP16 support for machine learning.

AMBER Simulation of CRISPR
[Benchmark chart] AMBER 16 pre-release; CRISPR system based on PDB ID 5f9r, 336,898 atoms. CPU baseline: dual-socket Intel E5-2680 v3 (12 cores), 128 GB DDR4 per node, FDR InfiniBand.

GPU computing has reached a tipping point...

Computing with GPUs
- Machine learning training and inference - TensorFlow (https://www.tensorflow.org/how_tos/using_gpu/)
- Frame rendering and image composition - V-Ray by Chaos Group (https://www.chaosgroup.com/)
- Physical simulation and analysis (CFD, FEM, structural mechanics)
- Real-time visual analytics and SQL database - MapD (https://www.mapd.com/)
- FFT-based 3D protein docking - MEGADOCK (https://www.ncbi.nlm.nih.gov/pubmed/23855673)
- Faster-than-real-time 4K video transcoding - Colorfront Transkoder (http://www.colorfront.com/?page=SOFTWARE&spage=Transkoder)
- Open-source video transcoding - FFmpeg, libav (https://ffmpeg.org/, https://libav.org/)
- Open-source sequence mapping/alignment - BarraCUDA (http://seqbarracuda.sourceforge.net/)
- Subsurface analysis for the oil & gas industry - reverse time migration (http://www.slb.com/services/seismic/geophysical_processing_characterization/dp/technologies/depth/prestackdepth/rtm.aspx)
- Risk management and derivatives pricing - computational finance (http://ewh.ieee.org/conf/whpcf/)
Workloads that require compute-intensive processing of massive amounts of data can benefit from the parallel architecture of the GPU.
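As a concrete illustration of one workload in the list above, a GPU-enabled FFmpeg build can offload H.264 decoding and encoding to NVIDIA hardware. This is a minimal sketch, not taken from the deck: it assumes an FFmpeg build compiled with CUVID/NVENC support, an attached GPU whose driver exposes the hardware encoder, and purely illustrative filenames (exact option names vary across FFmpeg versions).

    # Decode with NVIDIA's CUVID decoder and encode with NVENC.
    ffmpeg -hwaccel cuvid -c:v h264_cuvid -i input.mp4 \
           -c:v h264_nvenc -preset fast -b:v 20M output.mp4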
V-Ray by Chaos Group
Use hundreds of K80 GPUs to ray-trace massive models in real time in Google Cloud. V-Ray is Academy Award-winning software optimized for photorealistic rendering of imagery and animation. V-Ray's ray tracing technology is used in multiple industries, from architecture to visual effects. V-Ray RT GPU is built to scale in the Google Cloud, offering an exponential increase in speed to benefit individual artists and designers as well as the largest studios and firms.

"Scalability inherent in modern V-Ray GPU raytrace rendering on NVIDIA K80s and in conjunction with cloud rendering on GCP enables real time interaction with complex photorealistic scenes. It's on GCP where I've seen the dawn of this ideal creative workflow which will certainly have tremendous benefits to the filmmaking community in years to come."
- Kevin Margo, Director, Blur Studio

Real-time visual analytics - MapD
Using the parallel processing power of GPUs, MapD has crafted a SQL database and visual analytics layer capable of querying and rendering billions of rows with millisecond latency. (https://www.mapd.com/)

Software optimized for the fastest hardware
- MapD Core: an in-memory, relational, column-store database powered by GPUs.
- MapD Immerse: a visual analytics engine that leverages the speed and rendering capabilities of MapD Core.
100x+ faster queries; speed-of-thought visualization.

GPUs on GCP
On Feb 21st, Google Cloud Platform introduced K80 GPUs in the US, Europe and Asia.
- NVIDIA Tesla K80
- AMD FirePro S9300 x2
- NVIDIA Tesla P100

Accelerated cloud computing
- GCP offers teraflops of performance per instance by attaching GPUs to virtual machines.
- Machine learning, engineering simulations, and molecular modeling will take hours instead of days on AMD FirePro and NVIDIA Tesla GPUs.
- Regardless of the size and scale of your workload, GCP will provide you with the perfect GPU for your job.
- Scientists, artists, and engineers who run compute-intensive jobs require access to massively parallel computation.
- Up to 8 GPUs per virtual machine: on any VM shape with at least 1 vCPU, you can attach 1, 2, 4 or 8 GPUs along with up to 3 TB of Local SSD.
- GPUs are now available in 4 regions, including us-west1.

Features
- Bare-metal performance: GPUs are offered in passthrough mode to provide bare-metal performance.
- Flexible GPU counts per instance: attach up to 8 GPU dies to your instance to get the power that you need for your applications.
- Attach GPUs to any machine type: you can mix and match different GCP compute resources, such as vCPUs, memory, Local SSD, GPUs and persistent disk, to suit the needs of your workloads.
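To make the mix-and-match point above concrete, here is a hedged sketch of combining a custom machine type, GPUs and Local SSD in a single instance, using the beta accelerator flag shown later in this deck. The instance name, zone, CPU/memory sizes and GPU count are illustrative, not from the deck.

    # Illustrative only: 16 vCPU / 60 GB custom shape, 2 K80 GPUs, one Local SSD.
    gcloud beta compute instances create gpu-mix-demo \
        --zone us-east1-d \
        --custom-cpu 16 \
        --custom-memory 60GB \
        --accelerator type=nvidia-tesla-k80,count=2 \
        --local-ssd interface=NVME \
        --maintenance-policy TERMINATE \
        --restart-on-failure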
Why GPUs in the Cloud?
GPUs in the Cloud optimize time and cost and speed up complex compute jobs.
- Offers the breadth of GPU capability for speeding up compute-intensive jobs in the Cloud as well as for the best interactive graphics experience with remote workstations.
- No capital investment.
- Custom machine types: configure an instance with exactly the number of CPUs, GPUs, memory and Local SSD that you need for your workload.
- Thanks to per-minute pricing, you can choose the GPU that best suits your needs and pay only for what you use.

K80 Pricing (Beta)
Location | SKU                    | On-demand price per GPU / hour (USD)
US       | GpuNvidiaTeslaK80      | $0.700
Europe   | GpuNvidiaTeslaK80_Eu   | $0.770
Asia     | GpuNvidiaTeslaK80_Apac | $0.770
Billed in per-minute increments with a 10-minute minimum. 2 GPUs per board, up to 4 boards / 8 GPUs per VM.

Cloud GPUs - no need to worry about...
- ...system research
- ...upfront hardware purchase and shipping
- ...physical space and racks
- ...assembly and test
- ...hardware failures and debugging
- ...power and cooling

Provision a GPU instance using the console (https://console.cloud.google.com/)
1. Choose Customize.
2. Click on GPUs.
3. Choose the number of GPUs desired.
4. Press Create.

Provisioning a GPU instance

    gcloud beta compute instances create gpu-instance-1 \
        --machine-type n1-standard-16 \
        --zone asia-east1-a \
        --accelerator type=nvidia-tesla-k80,count=2 \
        --image-family ubuntu-1604-lts \
        --image-project ubuntu-os-cloud \
        --maintenance-policy TERMINATE \
        --restart-on-failure \
        --metadata startup-script='#!/bin/bash
          echo "Checking for CUDA and installing."
          # Check for CUDA and try to install.
          if ! dpkg-query -W cuda; then
            curl -O http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
            dpkg -i ./cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
            apt-get update
            apt-get install cuda -y
          fi'

High-performance GPUs do not support live migration
- GPUs are offered in high-performance pass-through mode: the VM owns the entire GPU.
- It's not possible to migrate the state and contents of the GPU chip and memory.
- VMs attached to GPUs must be set to terminateOnHostMaintenance.
- One hour notice is provided for the system to checkpoint and save state to be restored.

VM metadata provides notice
Returns either NONE or a timestamp at which your instance will be forcefully terminated.
See: https://cloud.google.com/compute/docs/gpus/add-gpus#host-maintenance

    curl http://metadata.google.internal/computeMetadata/v1/instance/maintenance-event \
        -H "Metadata-Flavor: Google"

TensorFlow Supervisor
https://www.tensorflow.org/programmers_guide/supervisor
- Handles shutdowns and crashes cleanly.
- Can be resumed after a shutdown or a crash.
- Can be monitored through TensorBoard.
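The maintenance-event metadata entry shown above can be polled from inside the VM so that a long-running job has time to checkpoint before host maintenance terminates the instance. A minimal sketch, not taken from the deck; the checkpoint script path is hypothetical.

    #!/bin/bash
    # Poll the maintenance-event metadata entry once a minute; when the value
    # changes from NONE, run a (hypothetical) checkpoint script and stop polling.
    MD_URL=http://metadata.google.internal/computeMetadata/v1/instance/maintenance-event
    while true; do
      EVENT=$(curl -s -H "Metadata-Flavor: Google" "$MD_URL")
      if [ "$EVENT" != "NONE" ]; then
        echo "Host maintenance scheduled: $EVENT -- checkpointing now."
        /opt/render/checkpoint.sh   # hypothetical: save application state to Cloud Storage
        break
      fi
      sleep 60
    done

A watcher like this could be launched from the instance startup script, alongside the CUDA install shown earlier.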
Rendering on a GPU farm in the Cloud
Adrian Graham, Cloud Solutions Architect

Render pipeline requirements
- Remote workstation with sufficient CPU, GPU and memory.
- Project-based cloud storage.
- Interactive and render licenses served in the cloud, or from on-premises.
- Color-accurate display capability.
- As many render workers as possible.

Construct by Kevin Margo
http://www.youtube.com/watch?v=8JItUtHwKiE
http://www.youtube.com/watch?v=ihyRybQmmWc

Demo video
https://youtu.be/inD9YgtEPW0?t=33

Architecture: Using display and compute GPUs
[Architecture diagram: on-premises infrastructure (asset management database, file server, zero clients) connects to GCP through APIs such as gcloud, gsutil, ssh and rsync; assets live in Cloud Storage; users and admins are managed with Cloud IAM and Cloud Directory Sync; a Compute Engine remote desktop is accessed over Teradici PCoIP; a Compute Engine license server serves licenses; render instances run as multiple instances in a managed instance group on Compute Engine.]

Creating a workstation
For this job, we needed to run project-specific software (Autodesk 3ds Max) that only runs on Windows.

    # Create a workstation.
    gcloud compute instances create "remote-work" \
        --zone "us-central1-a" \
        --machine-type "n1-standard-32" \
        --accelerator [type=,count=1] \
        --can-ip-forward \
        --maintenance-policy "TERMINATE" \
        --tags "https-server" \
        --image "windows-server-2008-r2-dc-v20170214" \
        --image-project "windows-cloud" \
        --boot-disk-size 250 \
        --no-boot-disk-auto-delete \
        --boot-disk-type "pd-ssd" \
        --boot-disk-device-name "remote-work-boot"

Notes:
1. Choose from zones in us-east1, us-west1, europe-west1, and asia-east1.
2. Choose the type and number of attached GPUs.
3. A GPU can be attached to an instance with any public image.

Creating a render worker
We'll be interacting with Windows, but our render workers will be running CentOS 7. Here, we build a base image to deploy.

    # Create a render worker.
    gcloud compute instances create "vray-render-base" \
        --zone "us-central1-a" \
        --machine-type "n1-standard-32" \
        --accelerator type="nvidia-tesla-k80",count=4 \
        --maintenance-policy "TERMINATE" \
        --image "centos-7-v20170227" \
        --boot-disk-size 100 \
        --no-boot-disk-auto-delete \
        --boot-disk-type "pd-ssd" \
        --boot-disk-device-name "vray-render-base-boot"

Notes:
1. We will keep the render workers in the same zone for maximum throughput.
2. Once the instance is set up to our liking, we will delete the instance, leaving the disk.
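Before imaging the base instance, it is worth confirming that the attached GPUs are visible to the NVIDIA driver. A quick sanity check, not part of the original deck, assuming the driver has already been installed on vray-render-base:

    # Run nvidia-smi on the base render worker over SSH; all four attached
    # K80 GPUs should be listed if the driver installed correctly.
    gcloud compute ssh vray-render-base \
        --zone us-central1-a \
        --command "nvidia-smi"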
Deploying an instance group
Once we have our base Linux image, we create an instance template which we can deploy as part of a managed instance group.

    # Create the image.
    gcloud compute images create "vrayrt-cent7-boot" \
        --source-disk "vray-render-base-boot" \
        --source-disk-zone "us-central1-a"

    # Create the template.
    gcloud compute instance-templates create "vray-render-template" \
        --image "vrayrt-cent7-boot" \
        --machine-type "n1-standard-32" \
        --accelerator type="nvidia-tesla-k80",count=4 \
        --maintenance-policy "TERMINATE" \
        --boot-disk-size 100 \
        --boot-disk-type "pd-ssd" \
        --restart-on-failure \
        --metadata startup-script='#! /bin/bash
          runuser -l adriangraham -c "/usr/ChaosGroup/V-Ray/Standalone_for_linux_x64/bin/linux_x64/gcc-4.4/vray -server -portNumber 20207"'

Note: on boot, we need each worker to launch the V-Ray server command.

Release the hounds!
Managed instance groups can be deployed quickly, based on an instance template. This group will launch 32 instances, respecting constraints such as quota and IAM roles at the project and organization levels.

    # Launch a managed instance group.
    gcloud compute instance-groups managed create "vray-render-grp" \
        --base-instance-name "vray-render" \
        --size 32 \
        --template "vray-render-template" \
        --zone "us-central1-a"

Useful commands
Once running, it's helpful to be able to access the state of your instances, manage the group's size, or even deploy an updated instance template (a sketch of that last step follows at the end of this transcript).

    # Listen to output from the serial port of a managed instance.
    gcloud compute instances tail-serial-port-output vray-render-tk43

    # Reduce the size of the instance group.
    gcloud compute instance-groups managed resize "vray-render-grp" --size=16

    # Kill all instances.
    gcloud compute instance-groups managed delete "vray-render-grp"

Summary
- K80 GPUs are available today on Google Cloud.
- Scale up easily and quickly.
- S9300 x2 and P100s are coming soon.
Go to https://cloud.google.com/gpu to provision GPUs on Google's Cloud today!

Thank you
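The useful-commands list above mentions deploying an updated instance template without showing the command. A hedged sketch, assuming a revised template has already been created (the name vray-render-template-v2 is hypothetical):

    # Point the managed group at the new template; instances created or
    # recreated after this command use the updated template.
    gcloud compute instance-groups managed set-instance-template "vray-render-grp" \
        --template "vray-render-template-v2" \
        --zone "us-central1-a"

Existing render workers keep the old template until they are recreated or the group is scaled up with new instances.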