Dell EMC VxFlex OSVersion 2.x
Performance Fine-Tuning Technical NotesP/N 302-002-680
REV 08
Copyright © 2016-2018 Dell Inc. or its subsidiaries. All rights reserved.
Published June 2018
Dell believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS-IS." DELL MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. USE, COPYING, AND DISTRIBUTION OF ANY DELL SOFTWARE DESCRIBED IN THIS PUBLICATION REQUIRES AN APPLICABLE SOFTWARE LICENSE.
Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective owners.
Published in the USA.
Dell EMC
Hopkinton, Massachusetts 01748-9103
1-508-435-1000 (In North America 1-866-464-7381)
www.DellEMC.com
CONTENTS

Preface

Chapter 1: Overview
  Introduction
  Tuning VxFlex OS for Best Performance

Chapter 2: VxFlex OS performance fine-tuning tasks
  Performance tuning pre-installation
  Performance tuning post-installation
    Upgrades
    Tuning considerations

Chapter 3: VxFlex OS system changes
  Using the set_performance_parameters utility for MDM and SDS
  Caching Updates for VxFlex OS 2.x
  Read RAM Cache settings for SDS
  Read Flash Cache settings for SDS
  Manual SDC configuration/verification
  Jumbo Frames and the potential impact on performance
    Jumbo Frame configuration for Linux
    Jumbo Frame configuration for Windows
    Jumbo Frame configuration for ESXi
    Jumbo Frame configuration for Storage Virtual Machine

Chapter 4: OS system tuning recommendations
  Optimizing ESXi
  Optimizing Linux
    Change the GRUB template for Skylake CPUs
  Optimizing the Storage Virtual Machine
  Optimizing VM guests
    I/O scheduler
    Paravirtual SCSI controller
    Queue length
  Optimizing Windows

Chapter 5: Reference material
  VxFlex OS Performance Parameters

FIGURES

1. Test ping to ensure mtu setting
2. Show interface output
3. Adapter properties
4. Ping output
5. Test ping to ensure mtu setting
6. Virtual Machine Properties
7. Customize Power Plan
Preface
As part of an effort to improve its product lines, Dell EMC periodically releases revisions of its software and hardware. Therefore, some functions described in this document might not be supported by all versions of the software or hardware currently in use. The product release notes provide the most up-to-date information on product features.
Contact your Dell EMC technical support professional if a product does not function properly or does not function as described in this document.
Note
This document was accurate at publication time. Go to Dell EMC Online Support (https://support.emc.com) to ensure that you are using the latest version of this document.
Previous versions of Dell EMC VxFlex OS were marketed under the name Dell EMC ScaleIO. Similarly, previous versions of Dell EMC VxFlex Ready Node were marketed under the name Dell EMC ScaleIO Ready Node.

References to the old names in the product, documentation, software, and so on, will change over time.
Note
Software and technical aspects apply equally, regardless of the branding of the product.
Related documentation

The release notes for your version include the latest information for your product.
The following Dell EMC publication sets provide information about your VxFlex OS or VxFlex Ready Node product:
- VxFlex OS software (downloadable as VxFlex OS Software <version> Documentation set)
- VxFlex Ready Node with AMS (downloadable as VxFlex Ready Node with AMS Documentation set)
- VxFlex Ready Node no AMS (downloadable as VxFlex Ready Node no AMS Documentation set)
- VxRack Node 100 Series (downloadable as VxRack Node 100 Series Documentation set)
You can download the release notes, the document sets, and other related documentation from Dell EMC Online Support.
Typographical conventions

Dell EMC uses the following type style conventions in this document:

Bold Used for names of interface elements, such as names of windows, dialog boxes, buttons, fields, tab names, key names, and menu paths (what the user specifically selects or clicks)
Italic Used for full titles of publications referenced in text
Monospace Used for:
l System code
l System output, such as an error message or script
l Pathnames, filenames, prompts, and syntax
l Commands and options
Monospace italic Used for variables
Monospace bold Used for user input
[ ] Square brackets enclose optional values
| Vertical bar indicates alternate selections - the bar means “or”
{ } Braces enclose content that the user must specify, such as x or y or z
... Ellipses indicate nonessential information omitted from the example
Where to get help

Dell EMC support, product, and licensing information can be obtained as follows:
Product information
For documentation, release notes, software updates, or information about Dell EMC products, go to Dell EMC Online Support at https://support.emc.com.
Technical support
Go to Dell EMC Online Support and click Service Center. You will see several options for contacting Dell EMC Technical Support. Note that to open a service request, you must have a valid support agreement. Contact your Dell EMC sales representative for details about obtaining a valid support agreement or with questions about your account.
Your comments

Your suggestions will help us continue to improve the accuracy, organization, and overall quality of the user publications. Send your opinions of this document to techpubcomments@emc.com.
CHAPTER 1
Overview
This section provides an overview of VxFlex OS performance fine-tuning.
- Introduction
- Tuning VxFlex OS for Best Performance
Introduction

VxFlex OS is a software-only solution that uses existing servers' local disks and LAN to create a virtual SAN that has all the benefits of external storage, but at a fraction of the cost and complexity. VxFlex OS utilizes the existing local internal storage and turns it into internal shared block storage. For many workloads, VxFlex OS storage is comparable to, or better than, external shared block storage.
This document describes best practices for maximizing performance in high-performance (more than 60,000 IOPS) VxFlex OS v2.x environments.
Tuning VxFlex OS for Best Performance

Users can improve VxFlex OS system performance in terms of IOPS, latency, and bandwidth by making environment-specific fine-tuning changes to the operating system, network, and VxFlex OS components. However, newer releases of VxFlex OS (versions 2.x and beyond) include built-in performance tuning features that reduce, and in some cases completely eliminate, the manual tasks that were once advised. This document describes these performance-related best practices.
Note
Performance tuning is very case-specific. To prevent undesirable effects, it is highly recommended to thoroughly test all changes. For further assistance, contact Dell EMC Support at https://support.emc.com.
CHAPTER 2
VxFlex OS performance fine-tuning tasks
This section provides an overview of the different tasks required to enhance VxFlex OS performance.
- Performance tuning pre-installation
- Performance tuning post-installation
Performance tuning pre-installation

This section describes the necessary steps to take prior to beginning an installation of VxFlex OS to enhance performance.

The following table describes performance optimization during the installation. For additional installation information, refer to the VxFlex OS Installation Guide.
Installation method: VxFlex OS Installation Manager
Task: In the CSV file, set perfProfileFor<SDS/SDC/MDM>=High. See the note below.

Note

When installing VxFlex OS via the gateway, users may set each component (SDS/MDM/SDC) to Normal or High by entering these words in the PerfProfile column per component on each row of the CSV file. This instructs the Installation Manager to send the command and set the profile to normal (the default) or high (for high_performance) during the configure stage. More details related to performance profiles are discussed in Tuning considerations.

Installation method: VMware deployment wizard
Task: No user action required. The wizard includes the 2.x performance profile plugin.

Installation method: Manual installation
Task: Refer to the VxFlex OS-specific instructions herein.
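For the Installation Manager path, a CSV fragment might look like the following sketch. This is hypothetical: the column names other than the per-component PerfProfile columns are illustrative placeholders, and the authoritative CSV schema is defined in the VxFlex OS Installation Guide:

```
Domain,IPs,Is MDM/TB,Is SDS,Is SDC,SDS PerfProfile,SDC PerfProfile
pd1,10.0.0.1,Master,Yes,Yes,High,High
pd1,10.0.0.2,Slave,Yes,Yes,High,High
pd1,10.0.0.3,TB,Yes,Yes,High,High
```

Rows left with Normal (or empty, depending on the schema) retain the default profile for that component.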
Performance tuning post-installation

This section describes steps to take after completing a successful installation to enhance performance.

Upgrades

For any VxFlex OS upgrade from 1.3x to 2.x, all performance-related settings will need to be reset. Users should implement performance tunings by following the guidelines described in this document.
Tuning considerations

New installation procedures have been incorporated in versions 2.x and above which eliminate many of the manual tasks involved with performance tuning VxFlex OS. Users may configure a high-performance profile which will change the default parameters. In the past, users were instructed to modify the SDS and/or MDM configuration (conf.txt) files. However, those changes are no longer required. In fact, any changes made to the 2.x configuration files will no longer take effect in VxFlex OS.
The main difference between the performance and standard (default) profiles is the amount of server resources (CPU and memory) that are consumed. A performance profile (or configuration) will always consume more resources.
This document describes commands using the VxFlex OS command line interface (scli) to quickly and easily modify the desired performance profile.

Users will achieve optimum performance by always setting the performance profile to High_performance. A complete list of parameters comparing the Default and High_performance profiles is available in VxFlex OS Performance Parameters in the Reference material chapter.
CHAPTER 3
VxFlex OS system changes
This section describes the various system changes available for enhancing VxFlex OS performance.
- Using the set_performance_parameters utility for MDM and SDS
- Caching Updates for VxFlex OS 2.x
- Read RAM Cache settings for SDS
- Read Flash Cache settings for SDS
- Manual SDC configuration/verification
- Jumbo Frames and the potential impact on performance
Using the set_performance_parameters utility for MDM and SDS

New installation procedures have been instituted in versions 2.x and above which eliminate many of the manual tasks involved with performance tuning VxFlex OS. Users may configure a High_performance profile which will change the default parameters.
The following table describes the commands for specific tasks:
Task: To change the performance profile from Default to High_performance
Command:

scli --set_performance_parameters --all_sds --all_sdc --apply_to_mdm --profile high_performance

Task: To change the profile back to normal defaults
Command:

scli --set_performance_parameters --all_sds --all_sdc --apply_to_mdm --profile default

Task: To view current settings
Command:

scli --query_performance_parameters

Task: To view full parameter settings for an MDM
Command:

scli --query_performance_parameters --print_all

Task: To view full parameter settings of a specific SDS (this also shows the MDM settings)
Command:

scli --query_performance_parameters --sds_name <NAME> --print_all

Task: To view full parameter settings of a specific SDC (this also shows the MDM settings)
Command:

scli --query_performance_parameters --sdc_name <NAME> --print_all
Note
Refer to VxFlex OS Performance Parameters in the Reference material chapter for a list containing all default and performance profile parameters.
Caching Updates for VxFlex OS 2.x

VxFlex OS offers the following types of caching, for the purpose of enhancing system performance:
- RAM Read Cache (using a server's DRAM memory)
- Read Flash Cache (using SSDs or flash PCI cards)
Note
SSDs used for caching cannot be used for storage purposes.
The following table summarizes information about the caching modes provided by the system.

Mode: RAM Read Cache (rmcache)
Description: Read-only caching performed by server RAM.
Considerations: RAM Read Cache, the fastest type of caching, uses RAM that is allocated for caching. Its size is limited to the amount of allocated RAM.
Default setting: Enabled

Mode: Read Flash Cache (RFCache)
Description: Read-only caching performed by one or more dedicated SSD devices or flash PCI drives in the server.
Considerations: Read Flash Cache uses the full capacity of SSD or flash PCI devices (up to eight) to provide a larger footprint of read-only LRU (least recently used) based caching resources for the SDS. This type of caching reacts quickly to workload changes to speed up HDD read performance. Several SSD devices can be allocated to a shared cache pool, and therefore the cache size is limited only by the amount of SSD capacity allocated for this purpose.
Default setting: Disabled

Note

The RFCache driver must be installed during deployment. Caching devices can be defined either during the installation process, or after deployment.
Read RAM Cache settings for SDS

In version 2.x, Read RAM Cache is enabled by default on the Storage Pool. Recommendation: disable it for SSD/Flash pools. For HDD pools, Read RAM Cache can help increase performance. If the node is storage-only (in other words, the node is only used for VxFlex OS), then the recommendation is to turn on Read RAM Cache for HDD pools and use as much of the server DRAM as possible.

In a converged configuration (where VxFlex OS is sharing the server with other applications), depending on the available DRAM resources, it may also help to turn on Read RAM Cache for HDD pools and increase the cache size from the default.
To enable or disable Read RAM Cache, perform either of the following steps. Read RAM Cache may be enabled/disabled on the Protection Domain, or for each SDS in the cluster.
Task: To enable Read RAM Cache
Run one of the following commands:

scli --set_rmcache_usage --protection_domain_name <domain NAME> --storage_pool_name <pool NAME> --use_rmcache [--dont_use_rmcache]

scli --enable_sds_rmcache [--disable_sds_rmcache] --sds_name <NAME>

Note

Using the second command would be required for every SDS in the cluster.

Task: To increase the amount of Read RAM Cache
Run the following command:

scli --set_rmcache_size --sds_name <NAME> --protection_domain_name <NAME> --rmcache_size_mb <SIZE>

where --rmcache_size_mb is the size of rmcache in MB, and the range is between 128 MB and 300 GB. It is important to ensure that Read RAM Cache is enabled at all levels (PD, SP, and SDS); when rmcache is properly enabled, a query of the performance parameters will reflect this at each level.
Read Flash Cache settings for SDS

Read Flash Cache is available in version 2.x and above. This feature is a read cache used to increase read performance; it also buffers writes to increase the performance of read-after-write I/Os. It allows users to create and configure an "RFcache" device on any SSD or flash card. VxFlex OS will cache user data depending on the mode selected. This feature can greatly improve performance for specific workloads. The RFcache device is also referred to as an accelerated device.
To create an RFcache device and configure it, use the following steps:
1. Create an RFcache device (Recommendation: create 1 device per SDS).
scli --add_sds_rfcache_device --sds_name <NAME> --rfcache_device_path <device_path> --rfcache_device_name <RFcache device NAME>
2. Set the RFcache parameters (recommendation: these parameters have a great impact on performance, therefore use the defaults).
scli --set_rfcache_parameters --protection_domain_name <domain NAME> --rfcache_pass_through_mode pass_through_write_miss
Note
The default settings are: Passthrough mode = Write_Miss, Page Size = 64 KB, Max I/O size = 128 KB.
3. Enable acceleration of a Storage Pool, which accelerates all SDS devices that are in the pool:
scli --set_rfcache_usage --protection_domain_name <domain NAME> --storage_pool_name <pool NAME> --use_rfcache
For Read Flash Cache, the available modes are as follows:
- pass_through_none
- pass_through_read
- pass_through_write
- pass_through_read_and_write
- pass_through_write_miss
The default caching mode is "write_miss". In this mode, RFCache is essentially a write-through cache where only reads and updates are cached: writes are buffered only for data that was already in cache.
For more information related to using and configuring Read Flash Cache, refer to the VxFlex OS Deployment Guide and VxFlex OS User Guide at https://support.emc.com.
Manual SDC configuration/verification

If there are problems connecting to a VxFlex OS system after a reboot, users can modify the drv_cfg.txt file or use the drv_cfg utility to ensure that the SDC reconnects to the MDM automatically upon node restart/reboot.
To do this, perform one of the following procedures:
- Edit /bin/emc/scaleio/drv_cfg.txt and change the #MDM line to include each IP address for any node(s) containing an MDM in your cluster. For example, change the line:
From:
#mdm 10.20.30.40
To:
mdm 10.108.158.48,10.108.158.49
where .48 is the master node, and .49 is the slave node. The Tie-Breaker node should not be included.
- Use the drv_cfg binary (or drv_cfg.exe on Windows) to re-attach the MDM.
- Rescan all volumes using the drv_cfg utility.
Jumbo Frames and the potential impact on performance

When enabling jumbo frames, one can expect approximately 10% improvement in performance if all network components fully support jumbo frames. If some network components do not fully support jumbo frames, it is recommended to use the default setting, MTU 1500.

Prior to activating MTU settings at the logical level, set jumbo frames (MTU 9000) on the physical switch ports that are connected to the server. Failure to do so may lead to network disconnects and packet drops.

Refer to your relevant vendor guidelines on how to configure jumbo frame support.
Jumbo Frame configuration for Linux

Configure Jumbo Frames for NIC cards in the Linux-based VxFlex OS servers.

Perform the following steps for all the NIC cards in the VxFlex OS system:
Procedure
1. Run the ifconfig command to get the NIC information.
2. Depending on the OS, run the command:
Operating System: RHEL/CentOS
Command: Edit /etc/sysconfig/network-scripts/ifcfg-<NIC_NAME>

Operating System: SLES
Command: Edit /etc/sysconfig/network/ifcfg-<NIC_NAME>
3. Add the parameter MTU=9000 to the file.
4. To apply the changes, type: service network restart

5. Execute ifconfig again to verify that the settings have been changed.
6. To test the setting, type: ping -M do -s 8972 <DESTINATION_IP_ADDRESS>
Output should look similar to this:
Figure 1 Test ping to ensure mtu setting
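The 8972-byte payload in the ping test is not arbitrary: it is the largest ICMP payload that fits in a 9000-byte MTU once the IPv4 and ICMP headers are accounted for. A quick sketch of the arithmetic:

```shell
# Largest ping payload that fits in a jumbo frame without fragmentation:
# the 9000-byte MTU must also carry the 20-byte IPv4 header and the
# 8-byte ICMP echo header.
mtu=9000
ipv4_header=20
icmp_header=8
payload=$((mtu - ipv4_header - icmp_header))
echo "$payload"   # 8972
```

If the ping fails at this size but succeeds with -s 1472 (the same arithmetic applied to a 1500-byte MTU), some component in the path is still using the default MTU.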
Jumbo Frame configuration for Windows

Configure Jumbo Frames for NIC cards in the Windows-based VxFlex OS servers.

Perform the following steps for all the NIC cards in the VxFlex OS system:
Procedure
1. Change the Maximum Transmission Unit (MTU) setting to 9000, or the highest value that is supported by the switch and the connected nodes.
2. Determine the appropriate NIC name by typing the command:
netsh interface ipv4 show interface
Output similar to the following appears:
Figure 2 Show interface output
In this example, index 17 is the appropriate network.
3. Type the command:
netsh interface ipv4 set subinterface <network_ID> mtu=9000 store=persistent
where network_ID is the ID from the output in the previous step; in this case, the ID is 17.
4. In the Advanced tab of the Adapter Properties dialog for your vendor and driver, change the value of Jumbo Packet to 9000, as illustrated in the following figure:
Figure 3 Adapter properties
5. Click OK.
The network connection may disconnect briefly during this phase.
6. Verify that the configuration is working, by typing the command:
ping -f -l 8972 <Destination_IP_Address>
Output similar to the following should be displayed:

Figure 4 Ping output
Note
Ensure that the switch supports 10 Gb Ethernet.
Jumbo Frame configuration for ESXi

Configure Jumbo Frames for NIC cards in the ESXi-based VxFlex OS servers.

Perform the following steps for all the NIC cards in the VxFlex OS system:
Procedure
1. Change the Maximum Transmission Unit (MTU) setting to 9000 on the vSwitches and on the SVM (be sure to make the change in /etc/sysconfig/network/ifcfg-ethX):
a. Type the command:
esxcfg-vswitch -m 9000 <vSwitch>
b. Create a VMkernel port with jumbo frames support by typing the following commands:
a. esxcfg-vswitch -d
b. esxcfg-vswitch -A vmkernel# vSwitch#
c. esxcfg-vmknic -a -i <ip address> -n <netmask> -m 9000 <portgroup name>
Note
Changing Jumbo Frames on a vSwitch will not change the VMkernel MTU size. For older vCenter versions, check to ensure the MTU setting has been changed. If not successful, users may need to delete and recreate the VMkernel port. For newer vCenter versions, modify the MTU of the VMkernel port by using the vSphere web client.
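Putting the pieces together with hypothetical names (vSwitch1, a port group called VxFlex-Data, and placeholder addressing), the sequence might look like the following sketch; verify the result with esxcfg-vmknic -l, which lists each VMkernel NIC with its MTU:

```
esxcfg-vswitch -m 9000 vSwitch1
esxcfg-vmknic -a -i 10.0.0.50 -n 255.255.255.0 -m 9000 "VxFlex-Data"
esxcfg-vmknic -l
```

The switch name, port group, and addresses are placeholders; substitute the values from your own deployment.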
Jumbo Frame configuration for Storage Virtual Machine

Configure Jumbo Frames for NIC cards in the Storage Virtual Machine (SVM)-based VxFlex OS servers.

Perform the following steps for all the NIC cards in the VxFlex OS system:
Procedure
1. Edit the /etc/sysconfig/network/ifcfg-<NIC_NAME> file.

2. Add the parameter MTU=9000 to the file.
3. To apply the changes type:
service network restart
4. Execute the ifconfig command again to confirm that the settings have been changed.
5. To test the command type:
ping -M do -s 8972 <DESTINATION_IP_ADDRESS>
Output should look similar to the following:
Figure 5 Test ping to ensure mtu setting
CHAPTER 4
OS system tuning recommendations
This section presents options for fine-tuning VxFlex OS performance based on operating system.
- Optimizing ESXi
- Optimizing Linux
- Optimizing the Storage Virtual Machine
- Optimizing VM guests
- Optimizing Windows
Optimizing ESXi

To improve I/O concurrency, users may increase the per-device queue length value on a per-datastore basis. The per-device queue length is referred to as "No of outstanding IOs with competing worlds" in the display output.

Use the following command to increase the queue length:

esxcli storage core device set -d <DEVICE_ID> -O <Outstanding IOs>

where <Outstanding IOs> can be a number ranging from 32 to 256 (the default is 32).
Example:
esxcli storage core device set -d eui.16bb852c56d3b93e3888003b00000000 -O 256
Optimizing Linux

When using SSD devices, it is recommended that the I/O scheduler of the devices be modified.
Type the following on each server, for each SDS device:
echo noop > /sys/block/<device_name>/queue/scheduler
For example:
echo noop > /sys/block/sdb/queue/scheduler
Note
To make these changes persistent after reboot, either create a script that runs on boot, or change the kernel default scheduler via the kernel command line.
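One way to make the change persistent, sketched here as a hypothetical udev rule (the file name and device match are illustrative; adjust them to your distribution and device naming):

```
# /etc/udev/rules.d/60-io-scheduler.rules (hypothetical file name)
# Select the noop scheduler for non-rotational (SSD) block devices.
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="noop"
```

After placing the rule, it takes effect when devices are (re)detected, for example at the next boot.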
Change the GRUB template for Skylake CPUs

For Skylake CPUs on PowerEdge 14G models running RHEL 7.2 or later, change the GRUB template on every node to ensure that the CPUs maintain good performance in terms of latency.
Procedure
1. Open the GRUB template for editing:
vim /etc/default/grub
2. Find the GRUB_CMDLINE_LINUX configuration option and append the following to the line:
intel_idle.max_cstate=0 processor.max_cstate=1 intel_pstate=disable
Example:
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb intel_idle.max_cstate=0 processor.max_cstate=1 intel_pstate=disable quiet"
3. Compile the new GRUB:
grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
4. Stop and then disable tuned:
systemctl stop tuned
systemctl disable tuned
5. Run reboot to reboot the node.
6. Ping the node to ensure that the GRUB change was implemented.
The ping time should be in the 0.03 ms range.
7. Repeat the procedure on every 14G node with a Skylake CPU.
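After rebooting, the flags can also be spot-checked in the kernel command line. A minimal sketch follows; the cmdline string below is the example from step 2, and on a live node you would substitute "$(cat /proc/cmdline)":

```shell
# Verify that the three latency-related flags reached the kernel command line.
cmdline='crashkernel=auto rd.lvm.lv=rhel/root rhgb intel_idle.max_cstate=0 processor.max_cstate=1 intel_pstate=disable quiet'
missing=0
for flag in intel_idle.max_cstate=0 processor.max_cstate=1 intel_pstate=disable; do
  case " $cmdline " in
    *" $flag "*) ;;                       # flag present
    *) echo "missing: $flag"; missing=1 ;;
  esac
done
echo "missing=$missing"                   # prints missing=0 when all flags are set
```

A nonzero result means GRUB was not recompiled or the node was not rebooted with the new configuration.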
Optimizing the Storage Virtual Machine

Procedure

1. Double the CPU and memory assigned to the Storage Virtual Machine (SVM).

   In general, 8 vCPUs and 4 GB of memory are sufficient, but this may vary in your environment. 8 vCPUs should use 1 virtual socket and 8 cores per socket.

2. From the Resources tab of the Virtual Machine Properties window, select Reserve all guest memory (All locked).
The Virtual Machine Properties window is displayed:
Figure 6 Virtual Machine Properties
Optimizing VM guests
I/O scheduler

When using SSD devices, it is recommended that you modify the devices' I/O scheduler.
Type the following on each server, for each SDS device:
echo noop > /sys/block/<device_name>/queue/scheduler
Example:
echo noop > /sys/block/sdb/queue/scheduler
Note
For most Linux distributions, noop is not the default, and different Linux versions have different default values. For RHEL 7 and SLES 11/12, the default value is deadline, but for older versions, the default is CFQ.
Paravirtual SCSI controller

The Paravirtual SCSI (PVSCSI) controller should be used on guest VMs for high performance. It is important that users choose the correct PVSCSI controller, because choosing the wrong controller can adversely affect performance.

Current PVSCSI queue depth default values are 64 for the device and 254 for the adapter. Users can increase the PVSCSI queue depth to 256 for the device and 1024 for the adapter inside a Windows virtual machine.
Windows virtual machine:
1. From the command line run:
REG ADD HKLM\SYSTEM\CurrentControlSet\services\pvscsi\Parameters\Device /v DriverParameter /t REG_SZ /d "RequestRingPages=32,MaxQueueDepth=254"
2. Reboot the virtual machine.
3. Verify the successful creation of a registry entry:
a. Open the registry editor by running the REGEDIT command from the command line.

b. Browse to HKLM\SYSTEM\CurrentControlSet\services\pvscsi\Parameters\Device.

c. Verify that the DriverParameter key exists with a value of RequestRingPages=32,MaxQueueDepth=254.
Linux guests:
1. Edit the vmw_pvscsi.conf file:
echo "options vmw_pvscsi cmd_per_lun=254 ring_pages=32" > /etc/modprobe.d/vmw_pvscsi.conf
2. Re-run vmware-config-tools.pl:
vmware-config-tools.pl
You can add up to 4 PVSCSI controllers per guest. Allocating different VxFlex OS volumes to different PVSCSI controllers can help realize the maximum potential of guest performance.

It is strongly recommended that you review VMware Knowledge Base article 2053145 so that you can make educated decisions regarding PVSCSI values.
Queue length

To improve I/O concurrency, users may increase the per-device queue length value on a per-datastore basis. The per-device queue length is referred to as "Number of outstanding I/Os with competing worlds" in the display output.
Procedure
1. Increase the queue length:
esxcli storage core device set -d <DEVICE_ID> -O <Outstanding IOs>
where <Outstanding IOs> can be a number ranging from 32 to 256 (the default is 32).
Example:
esxcli storage core device set -d eui.16bb852c56d3b93e3888003b00000000 -O 256
2. Ensure that the settings remain persistent after a reboot:
# localcli --plugin-dir /usr/lib/vmware/esxcli/int/ boot storage restore --paths
Note
If you do not run this command, the queue depth setting will revert back to 32 upon reboot.
Optimizing Windows

The delayed acknowledgment feature causes very high latencies for low I/O rates. It is highly recommended that the delayed ACK feature be disabled on every network interface.

To change each SDS interface, excluding the management IP address, edit the following registry entries:
- Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\<SAN interface GUID>
- Entries: TcpAckFrequency, TcpNoDelay
- Value type: REG_DWORD, number
- Value to disable: 1
Note
Depending on the Windows version, the method for changing delayed ACK varies. Refer to your relevant vendor guidelines for specific details.
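As a sketch, the two entries could be created from an elevated command prompt; the GUID is a placeholder that must be looked up per interface under the Interfaces key, and this is an illustration rather than the only supported method:

```
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\<SAN interface GUID>" /v TcpAckFrequency /t REG_DWORD /d 1 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\<SAN interface GUID>" /v TcpNoDelay /t REG_DWORD /d 1 /f
```

Repeat for each SDS interface; a reboot is typically required before the change takes effect.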
Ensure that the customized power plan is set to High Performance, as shown in the following figure:
Figure 7 Customize Power Plan
CHAPTER 5
Reference material
This section provides additional information that may be relevant to the tasks described in this document.
- VxFlex OS Performance Parameters
VxFlex OS Performance Parameters

The following table describes all values for the v2.x "default" and "high_performance" profiles and is applicable to installations on all of the platforms discussed in this document.
Component | New Name (CLI) | Min Value | Max Value | Default | High Performance
--------- | -------------- | --------- | --------- | ------- | ----------------
MDM | mdm_net_alloc_rcv_buffer_wait_ms | 100 | 10000 | 500 | 500
MDM | mdm_net_break_do_io_loop | 0 | 100 | 0 | 5
MDM | mdm_number_sdc_receive_umt | 1 | 100 | 5 | 5
MDM | mdm_number_sds_receive_umt | 1 | 100 | 10 | 10
MDM | mdm_number_sds_send_umt | 1 | 100 | 10 | 10
MDM | mdm_number_sds_keepalive_receive_umt | 1 | 100 | 10 | 10
MDM | mdm_sds_capacity_counters_update_interval | 1 | 120 | 1 | 1
MDM | mdm_sds_capacity_counters_polling_interval | 1 | 120 | 5 | 5
MDM | mdm_sds_volume_size_polling_interval | 1 | 120 | 15 | 15
MDM | mdm_sds_volume_size_polling_retry_interval | 1 | 120 | 5 | 5
MDM | mdm_number_sds_tasks_umt | 1 | 2048 | 1024 | 1024
MDM | mdm_initial_sds_snapshot_capacity | 1 | 10*1024 | 1024 | 1024
MDM | mdm_sds_snapshot_capacity_chunk_size | 1 | 50*1024 | 5*1024 | 5*1024
MDM | mdm_sds_snapshot_used_capacity_threshold | 1 | 99 | 50 | 50
MDM | mdm_sds_snapshot_free_capacity_threshold | 101 | 1000 | 200 | 200
MDM | mdm_number_sockets_per_sds_ip | 1 | 8 | 1 | 2
MDM | mdm_sds_keepalive_time | 2000 | 3600000 | 5000 | 5000
SDS | sds_number_network_umt | 2 | 16 | 4 | 8
SDS | sds_tcp_send_buffer_size | 4 | 128*1024 | 0 (dynamic) | 4*1024
SDS | sds_tcp_receive_buffer_size | 4 | 128*1024 | 0 (dynamic) | 4*1024
SDS | sds_max_number_asyncronous_io_per_device | 1 | 2000 | 8 | 60
SDS | sds_number_sdc_io_umt | 100 | 500 | 100 | 500
SDS | sds_number_sds_io_umt | 100 | 500 | 100 | 500
SDS | sds_number_sds_copy_io_umt | 100 | 164 | 164 | 164
SDS | sds_number_copy_umt | 100 | 164 | 164 | 164
SDS | sds_number_os_threads | 1 | 32 | 4 | 8
SDS | sds_number_sockets_per_sds_ip | 1 | 8 | 1 | 4
SDS | sds_net_break_do_io_loop | 0 | 100 | 0 | 5
SDS | sds_number_io_buffers | 1 | 10 | 2 | 5
SDS | sds_net_alloc_rcv_buffer_wait_ms | 100 | 10000 | 1000 | 1000
SDC | sdc_tcp_send_buffer_size | 4 | 128*1024 | 512 | 4*1024
SDC | sdc_tcp_receive_buffer_size | 4 | 128*1024 | 512 | 4*1024
SDC | sdc_number_sockets_per_sds_ip | 1 | 8 | 1 | 2
SDC | sdc_number_network_os_threads | 1 | 10 | 2 | 8
SDC | sdc_max_inflight_requests | 1 | 10000 | 100 | 100
SDC | sdc_max_in_flight_data | 1 | 10000 | 10 | 10
SDC | sdc_number_io_retries | 1 | 100 | 12 | 12
SDC | sdc_volume_statistics_interval | 1000 | 600000 | 5000 | 5000
SDC | sdc_optimize_zero_buffers | 0 (FALSE) | 1 (TRUE) | 0 (FALSE) | 0 (FALSE)
SDC | sdc_number_unmap_blocks | 1 | 200 | 100 | 100
SDC | sdc_number_non_io_os_threads | 1 | 16 | 3 | 3