Reviewer’s Guide for Remote 3D Graphics Apps · 2016-09-15
Getting Started with HDX 3D Pro
Reviewer’s Guide for Remote 3D Graphics Apps Part 4: vSphere software GPU (vSGA)
with XenDesktop 7, Nvidia GRID K1/K2 cards, Dell R720 Server
Contents

Audience
Related Documents in this Series
About the Authors
Lab Environment
VMware: vSGA – Virtual Shared Graphics Acceleration
VMware vSphere vSGA Configuration
  Enable the Host for vSGA
  vSGA installation on VMware ESXi host
  Prerequisite – Base Image for vSGA
  Configure vSGA advanced settings using VMware vSphere Web Client
Installation of XenDesktop 7 and Delivering GPU-accelerated virtual desktop
Launch GPU-accelerated virtual desktop from Citrix Receiver
  Launch desktops and applications on Windows client
Summary
Appendix
  ESXi commands for NVIDIA GPU
  Third-party 3D applications and GPU benchmark tools and blogs
  Related Documents in this Series
Audience

In the first part of this guide, we saw how to physically install NVIDIA GRID cards with graphics processing units (GPUs) in compatible server hardware. In this part, we list the steps necessary to enable VMware’s software GPU virtualization, called Virtual Shared Graphics Acceleration (vSGA), on VMware vSphere. Citrix XenDesktop 7 supports vSGA with known limitations. Note that vSGA is different from the full hardware virtualization of NVIDIA GPUs in XenServer that we covered in Part 3. This guide walks through the following topics:
• Configure software GPU drivers on vSphere
• Install, configure, and assign a GPU to a XenDesktop 7 Windows desktop VM
• Verify that 3D applications are using the GPU
• Install and publish GPU-accelerated virtual desktops (VDI) using Desktop Studio
• Access GPU-accelerated virtual desktops (VDI) from Citrix Receiver on any device
It is assumed that the reader has a good knowledge of networking, virtualization, server hardware, and Windows administration. Familiarity with Citrix and NVIDIA products is recommended but not essential to complete these steps. Please see the resources section for more information.
Related Documents in this Series

Part 1: XenServer GPU pass-through for Citrix XenDesktop 7 (includes physical installation of GPU cards)
Part 2: vSphere GPU pass-through (a.k.a. vDGA) for Citrix XenDesktop 7
Part 3: XenServer GPU virtualization (a.k.a. vGPU) for Citrix XenDesktop 7
Part 4: vSphere shared GPU (a.k.a. vSGA) for Citrix XenDesktop 7
About the Authors

Pushpal Ray and Mayunk Jain of the Citrix XenDesktop Technical Marketing team produced this guide. Pushpal (@pushpalray) is a Technical Marketing Engineer with over 10 years of experience in 3D graphics, infrastructure management, and virtualization. Mayunk (@mayunkj) is responsible for competitive marketing, technical demos, and sales enablement for desktop and cloud solutions at Citrix.
Lab Environment
Hardware
  Graphics Processing Unit (GPU): NVIDIA GRID K1 (K1 and K2 Specs)
  Server hardware: Dell R720 (PowerEdge R720 Technical Guide)
  GPU installation kit: power cables (2, internal for GPU), heat sink
  Storage: local / NFS

Software
  Hypervisor: VMware ESXi 5.1.0 build 1065491
  NVIDIA GPU driver: ‘vSphere 5.1’ driver 304.76 (GRID K1)
  Guest OS: Windows 7 Service Pack 1, Windows 8 Enterprise

Supported graphics cards: For a list of supported hardware that has been tested and proven to work with vSGA, please visit the VMware Compatibility Guide.

Go to Control Panel > Add/Remove Programs and ensure the following components are updated on your target virtual machine before you begin the 3D optimization process.

Tools and Applications
  Hypervisor tools (latest): VMware Tools
  Windows applications: Adobe Flash Player, Adobe Reader, Java plugin, Microsoft .NET Framework 4 (latest)
  GPU statistics (and/or free third-party utilities): NVIDIA System Management Interface (NVIDIA-SMI), included in the NVIDIA driver; Microsoft Process Explorer (View > System Information > GPU tab); OpenGL Viewer
VMware: vSGA – Virtual Shared Graphics Acceleration

Source: VMware Horizon View Graphics Acceleration Deployment Guide [PDF]

vSGA allows multiple virtual machines to leverage physical GPUs installed locally in the ESXi hosts for hardware-accelerated 3D graphics. We will enable vSGA (also called GPU sharing or shared GPU) for a Windows desktop virtual machine (VDI) that will host the 3D applications to be delivered using XenDesktop 7. As per the VMware deployment guide:

"The maximum amount of video memory that can be assigned per virtual machine is 512MB. However, the video memory allocation is evenly divided: half the video memory is reserved on the hardware GPU, while the other half is reserved via host RAM. (Take this into consideration when sizing your ESXi host RAM.) Use this rule to calculate basic consolidation ratios. For example, the NVIDIA Quadro 4000 card has 2GB of GPU RAM. If all virtual machines are configured with 512MB of video memory, half of which (256MB) is reserved on the GPU, you can calculate that a maximum of eight virtual machines can run on that specific GPU at any given time."
To work out the consolidation ratio (vSGA memory allocation and VMs per GRID card), consider the following example. In vSphere, the default Video Memory (VRAM) allocated is 64MB per VM. The total GRID K1 memory is 16GB (4GB per GPU × 4 GPUs), i.e. 16 × 1024 = 16384 MB. Then:

Total number of VDIs per GRID K1 = Total GRID memory ÷ (VRAM per VM ÷ 2)
i.e. 16384 ÷ (64 ÷ 2) = 512 GAVDs*

If VRAM = 512 MB per VM, then:

Total number of VDIs per GRID K1 = 16384 ÷ (512 ÷ 2) = 64 GAVDs*

*GAVD = GPU-accelerated Virtual Desktop

To configure an ESXi host with a GPU, first find the PCI ID of the graphics device by running the following command:

~ # lspci | grep -i display
00:07:00.0 Display controller: NVIDIA Corporation NVIDIA GRID K1
00:08:00.0 Display controller: NVIDIA Corporation NVIDIA GRID K1
00:09:00.0 Display controller: NVIDIA Corporation NVIDIA GRID K1
00:0a:00.0 Display controller: NVIDIA Corporation NVIDIA GRID K1
00:10:00.0 Display controller: Matrox Electronics Systems Ltd. G200eR2
~ #

Here, 00:07:00.0 is the PCI ID of the first graphics card.

Confirm Successful Installation

To check whether the graphics adapter has been installed correctly, run the following command on the ESXi host. In the case of GRID K1, it shows the 4 GPUs available on the single board:

~ # esxcli hardware pci list -c 0x0300 -m 0xff
Please see Appendix for detailed command output.
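The consolidation-ratio rule used in this section is easy to script. Below is a minimal sketch using standard shell arithmetic; the numbers are the GRID K1 figures used above, and the variable names are our own.

```shell
# GRID K1 capacity under vSGA: half of each VM's VRAM is reserved on the
# physical GPU, so VMs per card = total card memory / (VRAM per VM / 2).
GRID_MEM_MB=$((4 * 4096))          # 4 GPUs x 4GB each = 16384 MB
for VRAM_MB in 64 512; do
  GAVDS=$((GRID_MEM_MB / (VRAM_MB / 2)))
  echo "VRAM ${VRAM_MB}MB per VM -> ${GAVDS} GAVDs per GRID K1"
done
```

This reproduces the 512-GAVD and 64-GAVD figures calculated above.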
VMware vSphere vSGA Configuration This section takes you through enabling vSGA (Shared GPU) at the host level and preparing the virtual machines for 3D rendering.
Enable the Host for vSGA

To enable an ESXi host for GPU sharing, follow the documented checks and steps in the following section.

(Optional Step) Check That VT-d or AMD IOMMU Is Enabled

[Note: This step is only required when the server hardware is new and the hypervisor is not yet installed.]

Before pass-through can be enabled, check whether VT-d or AMD IOMMU is enabled on the host by running the following command, either via SSH or on the console. (Note: replace <module_name> with the name of the module: vtddmar for Intel, AMDiommu for AMD.)

# esxcfg-module -l | grep <module_name>

If the above gives no output, browse to the location below to verify that either vtddmar or AMDiommu is listed, depending on your server hardware:

/usr/lib/vmware/vmkmod # ls
AMDIommu   filedriver   megaraid_mbox
aacraid    fnic         megaraid_sas
adp94xx    forcedeth    migrate
ahci       hbr_filter   mpt2sas

If the appropriate module is not present, you might have to enable it in the BIOS, or your hardware might not be capable of providing PCI passthrough.
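The Intel/AMD module-name distinction above can be captured in a tiny helper. This is purely illustrative: the function name and the vendor strings it checks are our own, not part of ESXi.

```shell
# Map a CPU vendor string to the IOMMU module name used in this guide:
# vtddmar for Intel VT-d, AMDiommu for AMD IOMMU.
iommu_module() {
  case "$1" in
    *Intel*) echo "vtddmar" ;;
    *AMD*)   echo "AMDiommu" ;;
    *)       echo "unknown" ;;
  esac
}

iommu_module "GenuineIntel"   # prints: vtddmar
```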
BIOS check for AMD-‐V on a Dell R720 server
BIOS check for Intel-‐VT on a Supermicro server
vSGA installation on VMware ESXi host This section takes you through the steps required to install the NVIDIA driver (VIB) on an ESXi host.
NVIDIA GRID driver installation on ESXi host

The following steps install the NVIDIA GRID GPU driver in a vSphere 5.1 hypervisor environment:
1. Download the NVIDIA vSphere VIBs for vSGA from here.
2. Extract the bundle (.ZIP) and upload the .VIB file to a datastore on the ESXi host.
3. Put the ESXi host into maintenance mode. Otherwise, you get the following error:
~ # esxcli software vib install -v /vmfs/volumes/LocalStorage14/NVIDIA-VMware-304.76-1OEM.510.0.0.802205.x86_64.vib [MaintenanceModeError] MaintenanceMode is required to remove: []; install: [NVIDIA_bootbank_NVIDIA-VMware_ESXi_5.1_Host_Driver_304.76-1OEM.510.0.0.802205]. Please refer to the log file for more details.
4. Log in as root to the ESXi console through SSH.
5. Run the following command to install drivers from the VIB file (this requires an absolute path):

esxcli software vib install -v /vmfs/volumes/datastore/async-driver.vib
Here is an example of the complete command; you see the following output if the driver installation is successful:

~ # esxcli software vib install -v /vmfs/volumes/LocalStorage14/NVIDIA-VMware-304.76-1OEM.510.0.0.802205.x86_64.vib
Installation Result
   Message: Operation finished successfully.
   Reboot Required: false
   VIBs Installed: NVIDIA_bootbank_NVIDIA-VMware_ESXi_5.1_Host_Driver_304.76-1OEM.510.0.0.802205
   VIBs Removed:
   VIBs Skipped:
~ #
6. Note that the Xorg service will not start until the host is rebooted:

~ # /etc/init.d/xorg status
Xorg is not running

7. Reboot the ESXi host.
8. After the reboot, verify that the Xorg service is running:

~ # /etc/init.d/xorg status
Xorg is running
9. Exit maintenance mode.

Note: An ESXi host can be updated remotely using the esxcli utility, which is part of the vSphere CLI.

vSGA Post-Installation Checks

This section contains various commands that can be used to ensure that the GPU card and its drivers have been installed correctly. SSH to the ESXi host and use the following commands to verify that the GPU and the hypervisor are working together.

Xorg

Xorg is a full-featured X server that was originally designed for UNIX and UNIX-like operating systems running on Intel x86 hardware. It now runs on a wider range of hardware and OS platforms, including ESXi. The status of Xorg can be checked using the following command in an SSH session:

# /etc/init.d/xorg status
If Xorg is not started, run the following command to start it: # /etc/init.d/xorg start
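The post-installation checks in this section can be wrapped in a small script. This is a sketch, not Citrix or VMware tooling: the `check` helper is our own, and the wrapped commands are the ones shown in this guide (they will only succeed on an ESXi shell).

```shell
# Run each verification command and report PASS/FAIL without aborting,
# so every check runs even if an earlier one fails.
check() {
  desc=$1; shift
  if "$@" >/dev/null 2>&1; then
    echo "PASS: $desc"
  else
    echo "FAIL: $desc"
  fi
}

check "Xorg running"         sh -c '/etc/init.d/xorg status | grep -q "is running"'
check "NVIDIA VIB present"   sh -c 'esxcli software vib list | grep -q NVIDIA'
check "nvidia module loaded" sh -c 'esxcli system module list | grep -q nvidia'
```

On a host where the driver installed correctly, all three lines should print PASS.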
gpuvm

This command lists the working GPUs, shows the virtual machines using each GPU, and the amount of video memory reserved for each one.
Observations:
o As per the consolidation ratio formula used by VMware, 512 ÷ 2 = 256 MB (× 1024) = 262144 KB (highlighted in red).
o When powered on for the first time, virtual desktops are dynamically assigned, on a first-come, first-served basis, to one of the four GPUs in the GRID, irrespective of the type of guest operating system.
o Note: If GPU memory is assigned to VMs, the "GPU maximum memory" and "GPU memory left" values will NOT be the same; when no VMs are powered on, both values are the same.
~ # gpuvm
Xserver unix:0, GPU maximum memory 4173824KB
pid 146024, VM "K1W7VM03", reserved 262144KB of GPU memory.
pid 101439, VM "K1W8VM03", reserved 262144KB of GPU memory.
pid 101441, VM "K1W8VM04", reserved 262144KB of GPU memory.
GPU memory left 3387392KB.
Xserver unix:1, GPU maximum memory 4173824KB
pid 58267, VM "XD7W8", reserved 262144KB of GPU memory.
pid 142731, VM "K1W7VM02", reserved 262144KB of GPU memory.
pid 142777, VM "K1W7VM04", reserved 262144KB of GPU memory.
GPU memory left 3387392KB.
Xserver unix:2, GPU maximum memory 4173824KB
pid 101440, VM "K1W8VM01", reserved 262144KB of GPU memory.
pid 144374, VM "XD7W7", reserved 131072KB of GPU memory.
GPU memory left 3780608KB.
Xserver unix:3, GPU maximum memory 4173824KB
pid 101438, VM "K1W8VM02", reserved 262144KB of GPU memory.
pid 142730, VM "K1W7VM01", reserved 262144KB of GPU memory.
GPU memory left 3649536KB.

nvidia-smi

To see how much of each GPU is in use, issue the following command in an SSH session:

# nvidia-smi

This shows several details of GPU usage at the point in time when the command is issued (the display is not dynamic; reissue the command to update the information). Example output:

~ # nvidia-smi
Fri Aug 30 01:11:15 2013
+------------------------------------------------------+
| NVIDIA-SMI 4.304.76   Driver Version: 304.76         |
|-------------------------------+----------------------+----------------------+
| GPU  Name                     | Bus-Id        Disp.  | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap| Memory-Usage         | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GRID K1                  | 0000:07:00.0     Off |                  N/A |
|  30%  30C    P8    14W / 117W |   5%  185MB / 4095MB |     12%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GRID K1                  | 0000:08:00.0     Off |                  N/A |
|  30%  29C    P8    13W / 117W |  14%  580MB / 4095MB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  GRID K1                  | 0000:09:00.0     Off |                  N/A |
|  30%  23C    P8    13W / 117W |   2%   67MB / 4095MB |      6%      Default |
+-------------------------------+----------------------+----------------------+
|   3  GRID K1                  | 0000:0A:00.0     Off |                  N/A |
|  30%  25C    P8    13W / 117W |   2%   65MB / 4095MB |      0%      Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Compute processes:                                               GPU Memory |
|  GPU       PID  Process name                                     Usage      |
|=============================================================================|
|  No running compute processes found                                         |
+-----------------------------------------------------------------------------+
~ #

# watch -n 1 nvidia-smi

This issues the nvidia-smi command every second to provide a refreshed point-in-time view. You can also log query output to a file:

# nvidia-smi -f <filepath>.log -l sec

-f FILE, --filename=FILE
Modify the -q option. Redirect query output to the specified file in place of the default stdout. The specified file will be overwritten.
-x, --xml-format
Modify the -q option. Produce XML output in place of the default human-readable format. Both GPU and Unit query outputs conform to corresponding DTDs, which are available in the online documentation.
-l SEC, --loop=SEC

Modify the -q option. Continuously report query data at the specified interval, rather than the default of just once. The application will sleep between queries. Pressing Ctrl+C at any time aborts the loop, which otherwise runs indefinitely. If no argument is specified for the -l form, a default interval of 5 seconds is used.
References: Ubuntu Manuals NVIDIA Developer Zone
Prerequisite – Base Image for vSGA

To enable a virtual machine for vSGA, follow the documented checks and steps in the following section.

Update to Hardware Version 9

You must upgrade all 3D virtual machines to hardware version 9 (HWv9, shown as vmx-09) to ensure maximum compatibility.

Pre virtual hardware upgrade: Virtual Machine Version is 8.

From vCenter: right-click the virtual machine to be upgraded > select Upgrade Virtual Hardware.

Upgrade warning: The virtual hardware version upgrade is an irreversible process. You may ignore this message.

Post virtual hardware upgrade: Virtual Machine Version is vmx-09.
Master Image Settings – Enable 3D Support and Configure Video Memory

As noted earlier from the VMware deployment guide, each virtual machine can be assigned at most 512MB of video memory, and the allocation is evenly divided: half is reserved on the hardware GPU and half via host RAM. Use this rule to calculate basic consolidation ratios.
The ESXi host reserves GPU hardware resources on a first-come, first-served basis as virtual machines are powered on. If all GPU hardware resources are already reserved, additional virtual machines that are explicitly set to use hardware 3D rendering will be unable to power on. If the virtual machines are set to Automatic, they will power on using software 3D rendering.
Ensure Video Memory is set per your requirement; the amount of GPU memory allocated depends on the Video Memory (VRAM) setting. Tick Enable 3D support.
Configure vSGA advanced settings using VMware vSphere Web Client

In the VMware vSphere 5.1 web interface, there are three 3D rendering options:

Capability              Automatic (default)   Software   Hardware (GPU)
Hardware 3D rendering   YES                   NO         YES
Software 3D rendering   YES                   YES        NO
vSphere vMotion         YES                   YES        YES, if compatible GPU within hosts
vSphere HA              YES                   YES        YES, if compatible GPU within hosts
Steps to get started with vSphere Web Client:
Install the VMware Web Client: right-click autorun.exe in the VMware vCenter Installer directory. Ensure vCenter Single Sign On is installed in the environment, and keep the admin@system-domain password handy; you must enter the SSO password during web client setup. Click VMware vSphere Web Client and follow the wizard to complete the installation.

Note: For a POC, you may install the web client on the vCenter Server machine.
Log in to vCenter using the vSphere Client. Highlight vCenter and go to the Permissions tab. Right-click > Add Permission > click Add.

Select SYSTEM-DOMAIN from the drop-down, select and double-click admin, then click OK.

SYSTEM-DOMAIN\admin is now added to vCenter with the Administrator role privilege.
Note: Alternatively, use Windows domain credentials to authenticate to the web client. This requires you to download and install the vCenter Integration plugin, available on the web client logon page.
To connect to the Web Client, go to https://<SSO/vCenter Server FQDN or IP>:9443/vsphere-client/ and log in using either SYSTEM-DOMAIN\admin or Windows domain credentials.
In the Web Client, the left-hand panel shows the vCenter inventory tree. Navigate through the objects for administrative tasks.
Navigate to the virtual machine, highlight the master/base/golden image, and click Edit Settings. Expand Video Card (set to "Auto-detect settings" by default) and set the appropriate Resolution.
Set the appropriate Number of displays based on what you are trying to test and/or your requirements.

Notes

o The video card settings can be changed only when the VM is in the powered-off state.
o The more displays, the higher the Video Memory (VRAM) requirement.
o The higher the resolution, the higher the VRAM requirement.
o The 3D Renderer settings can only be changed from the Web Client or by modifying the virtual machine configuration file (.vmx).
3D Renderer settings using VMware vSphere Web Client interface.
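For reference, the same settings live in the virtual machine's .vmx file. The fragment below is a minimal sketch assuming the vSphere 5.x key names; the value shown is illustrative, so verify against your own .vmx before editing.

```
mks.enable3d = "TRUE"
svga.vramSize = "134217728"
```

`mks.enable3d` corresponds to the "Enable 3D support" checkbox, and `svga.vramSize` is the video memory in bytes (134217728 bytes = 128MB).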
Installation of XenDesktop 7 and Delivering GPU-accelerated virtual desktop

o Install the HDX 3D Pro Virtual Desktop Agent (VDA) in the base image for Windows desktop OS (Windows 7 and 8), on the guest OS.
o Ensure Device Manager shows the display adapter drivers installed:
  - VMware SVGA 3D, from VMware Tools
  - Citrix Display Drivers, from the HDX 3D Pro VDA agent
o Please see the XenDesktop 7 Reviewer’s Guide for step-by-step instructions on installing the virtual desktop agent and other Citrix XenDesktop components such as Studio.
Launch GPU-accelerated virtual desktop from Citrix Receiver

This section shows users launching 3D applications published with XenDesktop 7 Apps (formerly XenApp) using Citrix Receiver on endpoint devices. In this example, we launch multiple sessions of Unigine Heaven 3D and Google Earth, freely available demo apps, from a XenDesktop server hosted on both VMware vSphere and Citrix XenServer (with GPU enabled, as seen previously).

3D applications: Unigine Heaven, eDrawings, SolidWorks
Monitoring tools used:
o NVIDIA-SMI
o Process Explorer with GPU monitoring enabled
Number of XenApp sessions (users) tested: 2 and 4
GPU card: GRID K1
Launch desktops and applications on Windows client

Citrix Receiver is the unified client for accessing applications and desktops from StoreFront. With a user account, you can access those applications and desktops.
# Screen capture Instructions
On a client machine (Windows 7 in this case), open a browser and go to the default StoreFront URL: http://<XenDesktopDeliveryControllerIPorFQDN>/Citrix/StoreWeb

If Citrix Receiver is not already installed on the client, you are prompted to install it. Accept the EULA, click Install, and follow the installation process. Return to the login page once it is installed.
Log in using valid domain user credentials (Username: domain\user or [email protected]), enter the password, and click Log On. Based on user permissions in the delivery group, you will see a list of VDI pools (static, random, etc.) under the Desktops section.

APPS: By default the page shows Desktops. Go to the Apps section (bottom middle of the page), click All Apps, and select any application. In this example, a 3D gaming app, Unigine Heaven, is added. Once added, click the application to launch it.
DESKTOPS: Click the GPU-accelerated VDI pool, e.g. GAVDW7, as shown in the figure. The circle turns green, which means a desktop is being prepared for you from the pool.

The Desktop Viewer window pops up, and the pool initiates a connection to one of the virtual desktops.
XenDesktop 7 virtual desktop running DirectX9 3D Game Unigine Heaven on VMware vSphere powered with NVIDIA GRID K1 GPU
Multiple VDI sessions running the Unigine Heaven 3D game, with NVIDIA-SMI showing the impact on the GRID K1 GPUs:

o GPU utilization: 12%, 6%, 0%, and 19%
o Memory usage: 482MB, 555MB, 31MB, and 428MB
o Power usage: 14W, 14W, 13W, and 14W
o Temperature: 29C, 29C, 22C, and 25C
Summary

In the first part of the HDX 3D Pro Reviewer’s Guide, we learnt how to identify the different hardware components of an HDX 3D Pro solution and complete the physical installation. In this document, we configured VMware’s vSphere hypervisor with graphics drivers to support software virtualization of the GPU, and verified that the GPU is ready for use inside the virtual machine (VM). Please refer to the XenDesktop 7 Reviewer’s Guide to learn how these VMs act as the base image for HDX 3D delivery using Citrix XenDesktop. It explains the steps for setting up the XenDesktop infrastructure and accessing applications from thin clients and standard PCs using Citrix Receiver.
Appendix
ESXi commands for NVIDIA GPU

ESXi command                                                   Description
gpuvm                                                          Show which VMs are using the GPU(s)
esxcli software vib install -v /path-to-vib/name-of-vib.vib    Load the NVIDIA VIB
esxcli software vib list | grep NVIDIA                         Verify the NVIDIA VIB is installed
esxcli system module load --module nvidia                      Verify the NVIDIA module loads
esxcli hardware pci list -c 0x300 -m 0xff                      Verify devices are present
nvidia-smi                                                     General status of the GPU / driver version
gpuvm

This command lists the powered-on View desktops and the amount of GPU memory (in KB) reserved for each one across the pool. Below is the output when a VDI pool exists on a hypervisor with a GRID card and GPU memory (frame buffer) is assigned based on the Video Memory (VRAM) setting; in this example, VRAM is set to 512MB:

~ # gpuvm
Xserver unix:0, GPU maximum memory 4173824KB
pid 146024, VM "K1W7VM03", reserved 262144KB of GPU memory.
pid 101439, VM "K1W8VM03", reserved 262144KB of GPU memory.
pid 101441, VM "K1W8VM04", reserved 262144KB of GPU memory.
GPU memory left 3387392KB.
Xserver unix:1, GPU maximum memory 4173824KB
pid 58267, VM "XD7W8", reserved 262144KB of GPU memory.
pid 142731, VM "K1W7VM02", reserved 262144KB of GPU memory.
pid 142777, VM "K1W7VM04", reserved 262144KB of GPU memory.
GPU memory left 3387392KB.
Xserver unix:2, GPU maximum memory 4173824KB
pid 101440, VM "K1W8VM01", reserved 262144KB of GPU memory.
pid 144374, VM "XD7W7", reserved 131072KB of GPU memory.
GPU memory left 3780608KB.
Xserver unix:3, GPU maximum memory 4173824KB
pid 101438, VM "K1W8VM02", reserved 262144KB of GPU memory.
pid 142730, VM "K1W7VM01", reserved 262144KB of GPU memory.
GPU memory left 3649536KB.

Below is the output from an ESXi console with a GRID K1 card and no VDI pool:

~ # gpuvm
Xserver unix:0, GPU maximum memory 4173824KB
GPU memory left 4173824KB.
Xserver unix:1, GPU maximum memory 4173824KB
GPU memory left 4173824KB.
Xserver unix:2, GPU maximum memory 4173824KB
GPU memory left 4173824KB.
Xserver unix:3, GPU maximum memory 4173824KB
GPU memory left 4173824KB.
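When comparing reservations across GPUs, it can help to total the reserved memory per Xserver from captured `gpuvm` output. Below is a small awk sketch; the sample text in the heredoc is a trimmed copy of the output format above, and on a real host you would pipe `gpuvm` into the same awk program instead.

```shell
# Sum per-VM "reserved NKB" lines, grouped by the Xserver (GPU) they follow.
awk '
  /^Xserver/ { gpu = $2; sub(/,$/, "", gpu) }        # remember current GPU
  /reserved [0-9]+KB/ {                              # per-VM reservation line
    for (i = 1; i <= NF; i++)
      if ($i ~ /^[0-9]+KB$/) { sub(/KB$/, "", $i); total[gpu] += $i }
  }
  END { for (g in total) printf "%s reserved %dKB total\n", g, total[g] }
' <<'EOF'
Xserver unix:0, GPU maximum memory 4173824KB
pid 146024, VM "K1W7VM03", reserved 262144KB of GPU memory.
pid 101439, VM "K1W8VM03", reserved 262144KB of GPU memory.
GPU memory left 3649536KB.
EOF
```

For the sample input this prints "unix:0 reserved 524288KB total" (two 262144KB reservations).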
nvidia-smi

This command shows allocated VRAM, GPU temperature, and so on. More importantly, it shows the % utilization of the GPU. In this example the VM is idle, hence Volatile GPU-Util shows 0%:

~ # nvidia-smi
Thu May 9 12:32:19 2013
+------------------------------------------------------+
| NVIDIA-SMI 4.304.76   Driver Version: 304.76         |
|-------------------------------+----------------------+----------------------+
| GPU  Name                     | Bus-Id        Disp.  | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap| Memory-Usage         | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Quadro 4000              | 0000:06:00.0     Off |                  N/A |
|  36%  40C   P12    N/A /  N/A |  12%  245MB / 2047MB |      0%      Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Compute processes:                                               GPU Memory |
|  GPU       PID  Process name                                     Usage      |
|=============================================================================|
|  No running compute processes found                                         |
+-----------------------------------------------------------------------------+
Below is the output from an ESXi host with a GRID K1 card:

~ # nvidia-smi
Wed Aug 28 17:23:39 2013
+------------------------------------------------------+
| NVIDIA-SMI 4.304.76   Driver Version: 304.76         |
|-------------------------------+----------------------+----------------------+
| GPU  Name                     | Bus-Id        Disp.  | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap| Memory-Usage         | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GRID K1                  | 0000:07:00.0     Off |                  N/A |
|  30%  29C    P8    13W / 117W |   0%   11MB / 4095MB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GRID K1                  | 0000:08:00.0     Off |                  N/A |
|  30%  28C    P8    13W / 117W |   0%   11MB / 4095MB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  GRID K1                  | 0000:09:00.0     Off |                  N/A |
|  30%  23C    P8    13W / 117W |   0%   11MB / 4095MB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   3  GRID K1                  | 0000:0A:00.0     Off |                  N/A |
|  30%  25C    P8    13W / 117W |   0%   11MB / 4095MB |      0%      Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Compute processes:                                               GPU Memory |
|  GPU       PID  Process name                                     Usage      |
|=============================================================================|
|  No running compute processes found                                         |
+-----------------------------------------------------------------------------+
watch nvidia-smi

This command continually polls the GPU and displays utilization in near real time, refreshing every 2 seconds by default. In this example the VM is playing a YouTube video, so Volatile GPU-Util shows 32%; the values vary in real time.
Every 2s: nvidia-smi    2013-05-09 12:41:15

Thu May 9 12:41:15 2013
+------------------------------------------------------+
| NVIDIA-SMI 4.304.76   Driver Version: 304.76         |
|-------------------------------+----------------------+----------------------+
| GPU  Name                     | Bus-Id        Disp.  | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap| Memory-Usage         | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Quadro 4000              | 0000:06:00.0     Off |                  N/A |
|  36%  40C   P12    N/A /  N/A |  12%  245MB / 2047MB |     32%      Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Compute processes:                                               GPU Memory |
|  GPU       PID  Process name                                     Usage      |
|=============================================================================|
|  No running compute processes found                                         |
+-----------------------------------------------------------------------------+
xorg

This command reports whether the xorg service is running. The service must be running for the GPU to be operational in the hypervisor.

Status:
~ # /etc/init.d/xorg status
Xorg is not running
~ # /etc/init.d/xorg status
Xorg is running
~ #

Start:
~ # /etc/init.d/xorg start
Xorg0 started

Restart:
~ # /etc/init.d/xorg restart
watchdog-Xorg0: Terminating watchdog process with PID 98577
Process 98600 stopped
Xorg0 started
esxcli software vib list | grep NVIDIA

This command shows NVIDIA driver information and confirms that the driver is installed on the hypervisor. Example:

~ # esxcli software vib list | grep NVIDIA
NVIDIA-VMware_ESXi_5.1_Host_Driver 304.76-1OEM.510.0.0.802205 NVIDIA VMwareAccepted 2013-08-28
~ #

If no NVIDIA GPU driver is installed on the hypervisor, the command gives no output and the cursor moves to the next line. Below is the output for your reference:

~ # esxcli software vib list | grep NVIDIA
~ #
esxcli system module list | grep nvidia

This command confirms that the NVIDIA module is present and loaded on the hypervisor (the two "true" columns indicate loaded and enabled):

~ # esxcli system module list | grep nvidia
nvidia true true
esxcli hardware pci list -c 0x300 -m 0xff

This command verifies that the NVIDIA driver has been installed and loaded successfully. In the output below, the Current Owner is VMkernel, which means the GPU is in vSGA mode. Example:

~ # esxcli hardware pci list -c 0x0300 -m 0xff
000:007:00.0
  Address: 000:007:00.0
  Segment: 0x0000
  Bus: 0x07
  Slot: 0x00
  Function: 0x00
  VMkernel Name:
  Vendor Name: NVIDIA Corporation
  Device Name: NVIDIA GRID K1
  Configured Owner: Unknown
  Current Owner: VMkernel
  Vendor ID: 0x10de
  Device ID: 0x0ff2
  SubVendor ID: 0x10de
  SubDevice ID: 0x099d
  Device Class: 0x0300
  Device Class Name: VGA compatible controller
  Programming Interface: 0x00
  Revision ID: 0xa1
  Interrupt Line: 0x0f
  IRQ: 15
  Interrupt Vector: 0xc0
  PCI Pin: 0x6f
  Spawned Bus: 0x00
  Flags: 0x0201
  Module ID: 74
  Module Name: nvidia
  Chassis: 0
  Physical Slot: 8
  Slot Description:
  Passthru Capable: true
  Parent Device: PCI 0:6:8:0
  Dependent Device: PCI 0:6:8:0
  Reset Method: Bridge reset
  FPT Sharable: true
If the GRID card is in passthrough mode, the output shows both the Configured Owner and the Current Owner as VM Passthru. For reference, below is the output for a GRID K1 in passthrough mode:

~ # esxcli hardware pci list -c 0x300 -m 0xFF
000:007:00.0
   Address: 000:007:00.0
   Segment: 0x0000
   Bus: 0x07
   Slot: 0x00
   Function: 0x00
   VMkernel Name:
   Vendor Name: NVIDIA Corporation
   Device Name: GK107 [VGX K1]
   Configured Owner: VM Passthru
   Current Owner: VM Passthru
   Vendor ID: 0x10de
   Device ID: 0x0ff2
   SubVendor ID: 0x10de
   SubDevice ID: 0x099d
   Device Class: 0x0300
   Device Class Name: VGA compatible controller
   Programming Interface: 0x00
   Revision ID: 0xa1
   Interrupt Line: 0x0f
   IRQ: 15
   Interrupt Vector: 0x69
   PCI Pin: 0x00
   Spawned Bus: 0x00
   Flags: 0x0401
   Module ID: -1
   Module Name: None
   Chassis: 0
   Physical Slot: 8
   Slot Description:
   Passthru Capable: true
   Parent Device: PCI 0:6:8:0
   Dependent Device: PCI 0:6:8:0
   Reset Method: Bridge reset
   FPT Sharable: true
~ #

[vSGA] If the Current Owner is VMkernel, the GPU is in non-passthrough mode.
[vDGA] If the Current Owner is VM Passthru, the GPU is in passthrough mode.
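The vSGA/vDGA rule above can also be applied by a script rather than by eye. A minimal sketch over a captured device record (the helper name gpu_mode is illustrative, not an esxcli feature):

```shell
# Classify the GPU mode from a device record produced by
# "esxcli hardware pci list -c 0x0300 -m 0xff" (passed in as text):
# Current Owner "VMkernel" -> vSGA, "VM Passthru" -> vDGA.
gpu_mode() {
  owner="$(printf '%s\n' "$1" \
    | sed -n 's/^[[:space:]]*Current Owner:[[:space:]]*//p' \
    | head -n 1)"
  case "$owner" in
    VMkernel)      echo "vSGA" ;;
    "VM Passthru") echo "vDGA" ;;
    *)             echo "unknown" ;;
  esac
}
```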
Confirm Successful Installation

To check whether the graphics adapter has been installed correctly, run the following command on the ESXi host. For a GRID K1, it lists the 4 GPUs available on the single board:

~ # esxcli hardware pci list -c 0x0300 -m 0xff
000:007:00.0
   Address: 000:007:00.0
   Segment: 0x0000
   Bus: 0x07
   Slot: 0x00
   Function: 0x00
   VMkernel Name:
   Vendor Name: NVIDIA Corporation
   Device Name: GK107 [VGX K1]
   Configured Owner: Unknown
   Current Owner: VMkernel
   Vendor ID: 0x10de
   Device ID: 0x0ff2
   SubVendor ID: 0x10de
   SubDevice ID: 0x099d
   Device Class: 0x0300
   Device Class Name: VGA compatible controller
   Programming Interface: 0x00
   Revision ID: 0xa1
   Interrupt Line: 0x0f
   IRQ: 15
   Interrupt Vector: 0xc0
   PCI Pin: 0xc0
   Spawned Bus: 0x00
   Flags: 0x0201
   Module ID: -1
   Module Name: None
   Chassis: 0
   Physical Slot: 8
   Slot Description:
   Passthru Capable: true
   Parent Device: PCI 0:6:8:0
   Dependent Device: PCI 0:6:8:0
   Reset Method: Bridge reset
   FPT Sharable: true
000:008:00.0
   Address: 000:008:00.0
   Segment: 0x0000
   Bus: 0x08
   Slot: 0x00
   Function: 0x00
   VMkernel Name:
   Vendor Name: NVIDIA Corporation
   Device Name: GK107 [VGX K1]
   Configured Owner: Unknown
   Current Owner: VMkernel
   Vendor ID: 0x10de
   Device ID: 0x0ff2
   SubVendor ID: 0x10de
   SubDevice ID: 0x099d
   Device Class: 0x0300
   Device Class Name: VGA compatible controller
   Programming Interface: 0x00
   Revision ID: 0xa1
   Interrupt Line: 0x0e
   IRQ: 14
   Interrupt Vector: 0xc8
   PCI Pin: 0xc8
   Spawned Bus: 0x00
   Flags: 0x0201
   Module ID: -1
   Module Name: None
   Chassis: 0
   Physical Slot: 9
   Slot Description:
   Passthru Capable: true
   Parent Device: PCI 0:6:9:0
   Dependent Device: PCI 0:6:9:0
   Reset Method: Bridge reset
   FPT Sharable: true
000:009:00.0
   Address: 000:009:00.0
   Segment: 0x0000
   Bus: 0x09
   Slot: 0x00
   Function: 0x00
   VMkernel Name:
   Vendor Name: NVIDIA Corporation
   Device Name: GK107 [VGX K1]
   Configured Owner: Unknown
   Current Owner: VMkernel
   Vendor ID: 0x10de
   Device ID: 0x0ff2
   SubVendor ID: 0x10de
   SubDevice ID: 0x099d
   Device Class: 0x0300
   Device Class Name: VGA compatible controller
   Programming Interface: 0x00
   Revision ID: 0xa1
   Interrupt Line: 0x0f
   IRQ: 15
   Interrupt Vector: 0xc0
   PCI Pin: 0x63
   Spawned Bus: 0x00
   Flags: 0x0201
   Module ID: -1
   Module Name: None
   Chassis: 0
   Physical Slot: 16
   Slot Description:
   Passthru Capable: true
   Parent Device: PCI 0:6:16:0
   Dependent Device: PCI 0:6:16:0
   Reset Method: Bridge reset
   FPT Sharable: true
000:00a:00.0
   Address: 000:00a:00.0
   Segment: 0x0000
   Bus: 0x0a
   Slot: 0x00
   Function: 0x00
   VMkernel Name:
   Vendor Name: NVIDIA Corporation
   Device Name: GK107 [VGX K1]
   Configured Owner: Unknown
   Current Owner: VMkernel
   Vendor ID: 0x10de
   Device ID: 0x0ff2
   SubVendor ID: 0x10de
   SubDevice ID: 0x099d
   Device Class: 0x0300
   Device Class Name: VGA compatible controller
   Programming Interface: 0x00
   Revision ID: 0xa1
   Interrupt Line: 0x0e
   IRQ: 14
   Interrupt Vector: 0xc8
   PCI Pin: 0x00
   Spawned Bus: 0x00
   Flags: 0x0201
   Module ID: -1
   Module Name: None
   Chassis: 0
   Physical Slot: 17
   Slot Description:
   Passthru Capable: true
   Parent Device: PCI 0:6:17:0
   Dependent Device: PCI 0:6:17:0
   Reset Method: Bridge reset
   FPT Sharable: true
~ #

If the NVIDIA GPU is not listed in the output above, the GPU card is either not installed correctly or is malfunctioning. Also ensure the Xorg service is up and running.
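Rather than reading through all four records, a script can simply count the devices reported. A hedged sketch over the captured command output (the helper name count_gpus is illustrative):

```shell
# Count GPU devices in the captured output of
# "esxcli hardware pci list -c 0x0300 -m 0xff" by counting "Address:"
# fields, one per device record; a GRID K1 host should report 4.
count_gpus() {
  printf '%s\n' "$1" | grep -c 'Address:'
}

# Example wiring on the host (not executed here):
#   n="$(count_gpus "$(esxcli hardware pci list -c 0x0300 -m 0xff)")"
#   [ "$n" -eq 4 ] || echo "expected 4 GPUs on a GRID K1, found $n"
```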
Third-party 3D applications and GPU benchmark tools

[Note: These are utilities found on the Internet and are not provided by Citrix. Citrix does not guarantee or support the use of these tools.]

3DMark: http://www.3dmark.com/ (Download: location1 or location2)
Geeks3D: http://www.geeks3d.com/
  FurMark: http://www.geeks3d.com/20130719/furmark-1-11-0-gpu-vga-videocard-burn-in-stress-test-opengl-benchmark-utility-nvidia-geforce-amd-radeon/
  TessMark: http://www.geeks3d.com/20110408/download-tessmark-0-3-0-released/
  FluidMark: http://www.geeks3d.com/20130308/fluidmark-1-5-1-physx-benchmark-fluid-sph-simulation-opengl-download/
  GeeXLab: http://www.geeks3d.com/20120511/geexlab-0-4-0-ultim8-edition-available-gtx-600-opengl-bindless-textures-support-added/
  Process Explorer with GPU support: http://www.geeks3d.com/20110719/quick-test-process-explorer-15-0-with-gpu-support/
Aquamark: http://downloads.guru3d.com/download.php?det=673
3DMark (Futuremark): http://www.futuremark.com/benchmarks/
Lightsmark: http://dee.cz/lightsmark/
FurMark: http://www.ozone3d.net/benchmarks/fur/
GPU Shark: http://www.ozone3d.net/gpushark/
GPU-Z: http://www.techpowerup.com/gpuz/
Demo Apps

Unigine: http://unigine.com/products/heaven/download/
Google Earth: http://www.google.com/earth
eDrawings: http://www.edrawingsviewer.com/ed/edrawings-samples.htm
Adobe Photoshop (trial): http://www.adobe.com/photoshop
Autodesk Inventor: http://www.autodesk.com/inventor
Related Documents in this Series

Part 1: XenServer GPU pass-through for Citrix XenDesktop 7 (includes physical installation of GPU cards)
Part 2: vSphere GPU pass-through (a.k.a. vDGA) for Citrix XenDesktop 7
Part 3: XenServer GPU virtualization (a.k.a. vGPU) for Citrix XenDesktop 7
Part 4: vSphere shared GPU (a.k.a. vSGA) for Citrix XenDesktop 7