Siva Shankari Chandrasekaran
Systems Engineer
Tata Consultancy Services
VMAX—PERFORMANCE BOONS— ‘THE FAST BEA(S)T’
2015 EMC Proven Professional Knowledge Sharing 2
Table of Contents
Symmetrix VMAX Performance Optimization – An Overview ..................................................... 3
1. Optimizer ............................................................................................................................ 4
1.1 Hard Disk Architecture ................................................................................................. 4
1.2 Need for Optimizer ....................................................................................................... 5
1.3 Working of EMC Symmetrix Optimizer ......................................................................... 6
1.4 Optimizer Functionality ................................................................................................ 7
1.5 Swap Configuration Considerations ............................................................................. 9
2. Virtual LUN Migration .........................................................................................................10
2.2 States of Migration ......................................................................................................11
3. FAST Disk Provisioning [FAST v1] .....................................................................................17
3.1 FAST Terminologies ...................................................................................................19
3.2 FAST Parameters .......................................................................................................21
3.3 FAST Algorithms .........................................................................................................21
4. Federated Tiered Storage ..................................................................................................23
4.1 Introduction .................................................................................................................23
4.2 FTS Setup ..................................................................................................................23
4.3 Benefits of FTS ...........................................................................................................24
4.4 Various Modes of FTS ................................................................................................24
5. FAST Virtual Provisioning [FAST v2] ..................................................................................26
6. Conclusion .........................................................................................................................27
Disclaimer: The views, processes or methodologies published in this article are those of the
author. They do not necessarily reflect EMC Corporation’s views, processes or methodologies.
Symmetrix VMAX Performance Optimization – An Overview
Information storage infrastructure provides the most efficient methods for creating, storing,
retrieving, and managing mission-critical information. It evolves with modern technologies in all
aspects to provide a complete and sophisticated service. Performance is a critical factor in
evaluating the information infrastructure. The cost incurred to enhance the performance is hard
to pinpoint as it depends on the growth of infrastructure, complexity of data, and the capability of
hardware components. Wouldn’t it be great if we could increase the performance of data
retrieval and management with the same hardware components we have, using intelligent
software bundles? EMC Symmetrix® has a couple of software features which increase the load
balancing capability, throughput, and performance of Symmetrix arrays using the existing
hardware setup.
This article begins with a detailed discussion on EMC Symmetrix Optimizer which is a
performance optimization tool provided at DMX Architecture level. It then gradually extends to
Fully Automated Storage Tiering (FAST®), a feature provided at VMAX architecture level and
above. FAST provides efficient I/O load balancing and performance optimization features across
different storage disk drive types, automatically and proactively. We then discuss
another advantageous feature – Federated Tiered Storage, which enables the customer to
leverage the capacity and data of retired storage arrays. FAST VP features can be applied on
this Federated Tiered Storage as well. The article also covers Virtual LUN migration concepts,
which act as a foundation for the above mentioned performance optimization techniques.
1. Optimizer
This section details the objectives, functionality, and features of Optimizer.
The major objective of Optimizer is to tune the performance of the array at the disk device level. A basic understanding of hard disk architecture is essential to understanding the need for EMC Symmetrix Optimizer and how it functions.
1.1 Hard Disk Architecture
Data can be stored in a variety of storage media. Disk drives are the most popular storage
medium used for storing and accessing data in performance-intensive applications. The components of a disk drive are the platter, spindle, read/write head, actuator arm assembly, and drive controller board.
Platters are the flat circular disks, where the data is written in 0s and 1s. A set of platters are
sealed in a Head Disk Assembly (HDA). A single platter is a round disk coated with magnetic
material on both sides, on which data can be written to/read from.
Spindle connects the set of platters to a motor that rotates at a constant speed. Drives are available at different spindle speeds, upon which read/write performance depends. The greater the speed, the less time is taken for the I/O activity.
Read/Write Head reads or writes data from or to the platters. There are two RW heads per
platter, one for each surface.
Actuator Arm Assembly is where the R/W head is mounted. It positions the R/W head at the
location on the platter where the data has to be written or read.
Drive controller Board controls the R/W operation by moving the actuator arm across different
R/W heads, and performs data access optimization.
1.1.1 Physical Disk Structure
‘Tracks’ are the concentric rings on the platter around the spindle, where data is placed. These
are numbered from the outer edge. Each track is divided into many smaller units called
‘sectors’. A ‘cylinder’ is a set of identical tracks on both surfaces of each drive platter.
Fig 1.1 Physical Disk Structure
A logical volume is a virtual storage space carved from one or more physical disk drives. It has a unique logical address.
Disk Drive Performance is calculated by various factors such as seek time, rotational latency,
and disk transfer rate.
Seek time is the amount of time taken by the disk arm to move and position the read/write head over the right track. For example, if the head is on track 100 and the next I/O needs track 200, the time taken for the disk arm to move from track 100 to track 200 is the seek time.
Latency time is the amount of time taken for the platter to rotate until the required data passes under the head. It depends upon the disk’s rotational speed. For example, if the data to be read next is about 190 degrees of rotation away, the time taken for that rotation is the latency time.
Transfer time is the amount of time taken to transfer the data from the disk for a read operation, and to the disk for a write operation. It depends upon the data transfer rate, disk bandwidth, and data structure.
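Taken together, seek time, rotational latency, and transfer time give the disk's average service time for one I/O. A minimal sketch of the arithmetic (the RPM, seek-time, and transfer-rate figures below are illustrative assumptions, not values from this article):

```python
def disk_service_time_ms(avg_seek_ms, rpm, io_size_kb, transfer_rate_mb_s):
    """Estimate average service time for one I/O: seek + rotational latency + transfer."""
    # Average rotational latency: half of one full revolution (60,000 ms per minute).
    latency_ms = 0.5 * (60_000 / rpm)
    # Transfer time for the requested I/O size.
    transfer_ms = io_size_kb / (transfer_rate_mb_s * 1024) * 1000
    return avg_seek_ms + latency_ms + transfer_ms

# Illustrative: a 15k RPM drive, 4 ms average seek, an 8 KB I/O at 100 MB/s.
t = disk_service_time_ms(avg_seek_ms=4.0, rpm=15_000, io_size_kb=8,
                         transfer_rate_mb_s=100)   # roughly 6.08 ms
```

Seek and rotational latency dominate for small random I/O, which is why relocating volumes to reduce head movement (as Optimizer does) pays off.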
1.2 Need for Optimizer
Data is randomly distributed across the disk. If a particular logical volume receives more I/O hits, that volume is used excessively, which can cause wear and tear of the disk and also impacts throughput. Such volumes are called active volumes or hot volumes. There are also logical volumes with minimal or no I/O activity at all; these are called idle volumes or cold volumes. Optimizer balances the I/O workload between the active and less active logical volumes across the drives so that seek time is reduced and throughput is improved. Thus, the response time is optimized.
1.3 Working of EMC Symmetrix Optimizer
Symmetrix Optimizer works within Symmetrix physical disk groups. These disk groups are
specified in the IMPL bin file. The goal of the Optimizer is to minimize the average service time
of the disks within a single physical disk group. It analyses the statistics of logical volume activity and usage, and from this analysis determines the list of busy volumes and idle volumes and their physical locations. During analysis, it calculates the service time for each disk and sorts disk activity by service time. It then narrows down the potential swap candidates and decides which pairs of volumes can be swapped to enhance performance. This list is called a ‘swap list’. Optimizer uses special
volumes called Dynamic Reallocation Volumes (DRV) to swap the logical devices and their data
internally. Thus, it protects the data and ensures constant data availability.
This back-end migration of logical volumes along with their data is completely transparent to the end-user and acts within the time windows specified by the user. Optimizer continuously monitors
the access patterns of the I/O operation, maintains history, analyses the pattern, and generates
the migration plan accordingly.
Fig 1.2 How Optimizer Works
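The monitor–analyse–plan loop described above can be sketched roughly as follows. This is a toy model of swap-list generation, not Optimizer's actual internal algorithm; the pairing rule (busiest with idlest) is an assumption for illustration:

```python
def build_swap_list(service_times):
    """Pair the busiest volumes with the idlest ones as candidate swaps.

    service_times: dict mapping volume name -> measured service time (ms).
    Returns a list of (hot_volume, cold_volume) candidate pairs.
    """
    # Rank volumes from busiest to idlest by service time.
    ranked = sorted(service_times, key=service_times.get, reverse=True)
    half = len(ranked) // 2
    hot, cold = ranked[:half], ranked[::-1][:half]
    # Walk inward from both ends: busiest with idlest, and so on.
    return list(zip(hot, cold))

# Illustrative volume names and service times.
pairs = build_swap_list({"0A1": 12.0, "0A2": 1.5, "0B1": 9.0, "0B2": 0.8})
```

Here the busiest volume is paired with the idlest, matching the load-balancing strategy in section 1.4.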
1.4 Optimizer Functionality
A database is built from the information on the access pattern acquired by the optimizer. From
this database, Optimizer can determine the busy and idle devices and their physical location.
There are a few strategies based on which a valid swap pair is zeroed in on. They are:
1. Load balancing – Swap highly active volumes with idle volumes.
2. Minimizing seek time – Relocate the scattered active volumes and put them together. This
reduces I/O service time, as there is a decrease in the seek time.
3. Using faster media – Relocate the busy volumes to the faster zones, that is, the outer zones of the disk, which can be accessed relatively faster.
1.4.1 Parameters required by Optimizer
a. List of devices on which optimizer can act
b. Their priority levels
c. Time window to monitor the access pattern
d. Time window to perform back-end migration
e. Pace of the copy operation
1.4.2 Other Requirements
1. Two equally-sized DRV volumes are required for the move/swap activity.
2. Open mirror position for attaching the DRV volume, if required.
3. Background migration of data is analogous to an online configuration change. Hence, Optimizer holds the configuration lock.
1.4.3 Optimizer Swap Procedure
The swap procedure involves four steps:
Step 1: Identify the volumes to be swapped (after analyzing the statistics).
Step 2: Copy Volume A’s data to DRV1 and Volume B’s data to DRV2, after which the two
physical volumes A and B are marked as Not_Ready and their attributes are swapped. Host
activities to the two physical volumes are redirected to the DRVs and to their original mirrors.
Step 3: Copy DRVs to the new location after swapping the attributes of volumes A and B. Then,
make the volumes ready.
Step 4: Split the DRVs.
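The four steps can be replayed as a small simulation. The dictionaries, field names, and the use of plain variables as stand-ins for DRVs are all simplifying assumptions; the point is only the ordering of events and that data stays available throughout:

```python
def optimizer_swap(vol_a, vol_b):
    """Walk the four swap steps, returning a log line per step.

    vol_a / vol_b are dicts with 'name' and 'data' keys; DRV1/DRV2 buffer
    the data so it remains available while the volumes are Not_Ready.
    """
    log = [f"Step 1: swap {vol_a['name']} <-> {vol_b['name']} selected"]
    # Step 2: copy data to DRVs, mark volumes Not_Ready, redirect host I/O.
    drv1, drv2 = vol_a["data"], vol_b["data"]
    log.append("Step 2: data copied to DRV1/DRV2; volumes Not_Ready; I/O redirected")
    # Step 3: attributes swapped, DRVs copied to the new locations, volumes Ready.
    vol_a["data"], vol_b["data"] = drv2, drv1
    log.append("Step 3: attributes swapped; DRVs copied to new locations; volumes Ready")
    # Step 4: split the DRVs away.
    log.append("Step 4: DRVs split")
    return log

a = {"name": "00A0", "data": "hot-data"}
b = {"name": "00B0", "data": "cold-data"}
steps = optimizer_swap(a, b)   # afterwards a holds b's data and vice versa
```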
Optimizer never violates the configuration rules defined by the microcode. Also, Optimizer activity never degrades the protection level; it always improves or maintains the existing protection level.
1.4.4 Time windows
Optimizer works on the schedule specified by the time window settings. There are two different
types of time windows that could be set for two different behaviors. One is for the performance
data collection and another for the device movement. These time windows can be inclusive or
exclusive. The inclusive time window specifies the duration in which a particular behavior should
happen. The exclusive time window specifies the duration from which the behavior should
abstain.
Performance Time Window (Inclusive): The performance time window is meant for
recognizing the business cycle for the storage array. This categorizes the distinct periods of
important system activity and the idle time of the system, capturing disk utilization during peak
hours.
Swap time window (Inclusive): This window specifies the time when the device swaps are
allowed. Swap time windows are specified in such a way that they do not affect the regular work
load.
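An inclusive/exclusive window check could be sketched as below. This assumes windows that do not wrap past midnight, which the article does not address:

```python
from datetime import time

def in_window(now, start, end, inclusive=True):
    """Return True if the behavior is allowed at time `now`.

    An inclusive window permits the behavior inside [start, end);
    an exclusive window forbids it there and permits it elsewhere.
    """
    inside = start <= now < end
    return inside if inclusive else not inside

# Illustrative swap time window 01:00-05:00 (inclusive): swaps only overnight.
allowed = in_window(time(2, 30), time(1, 0), time(5, 0), inclusive=True)
```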
1.4.5 Optimizer Controller Settings
There are many Optimizer settings which govern the functioning of Optimizer. They are:
Data movement mode – can be either Automatic or User Approved. Default is Automatic.
Maximum moves per day – restricts the number of moves per day. Range is between 2 and 200. Default is 200.
Maximum simultaneous moves – specifies the maximum number of simultaneous moves allowed. Range is between 2 and 8. Default is 8.
Workload analysis period – defines the amount of historical statistical information maintained, which is used to generate the swap list. It can be between 1 hour and 4 weeks. Default is 1 week.
Initial period – defines the minimum amount of data Optimizer must collect before generating its first swap list. It can be between 1 hour and 4 weeks, and cannot exceed the workload analysis period. Default is 1 week.
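The documented ranges lend themselves to a small validation routine. The setting names below are illustrative only, not actual SYMCLI parameter names:

```python
def validate_optimizer_settings(settings):
    """Check Optimizer controller settings against the documented ranges.

    Returns a list of violation messages (empty if all settings are valid).
    """
    rules = {
        "max_moves_per_day":      (2, 200),        # default 200
        "max_simultaneous_moves": (2, 8),          # default 8
        "analysis_period_hours":  (1, 4 * 7 * 24), # 1 hour .. 4 weeks
        "initial_period_hours":   (1, 4 * 7 * 24), # 1 hour .. 4 weeks
    }
    errors = [f"{key}={settings[key]} outside {lo}..{hi}"
              for key, (lo, hi) in rules.items()
              if not lo <= settings[key] <= hi]
    # The initial period may not exceed the workload analysis period.
    if settings["initial_period_hours"] > settings["analysis_period_hours"]:
        errors.append("initial period exceeds analysis period")
    return errors

# The documented defaults (1 week = 168 hours) pass validation.
defaults = {"max_moves_per_day": 200, "max_simultaneous_moves": 8,
            "analysis_period_hours": 168, "initial_period_hours": 168}
problems = validate_optimizer_settings(defaults)
```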
Optimizer also allows manual operations: manual swaps and manual rollbacks. For a manual swap, we specify the swap list, which Optimizer validates before scheduling the swap. A rollback reverses a device swap performed earlier; it must be picked from the swap history and specified according to the timeline.
Optimizer can be accessed from the desktop client, SMC, Unisphere®, or SYMCLI. An Optimizer server runs in the service processor of the Symmetrix.
1.5 Swap Configuration Considerations
RAID1, RAID5, and RAID6 devices are eligible swap candidates. Devices such as BCV, DRV, CKD Stripe, SFS, TimeFinder® Snap devices, TDEVs, TDATs, and unprotected volumes are excluded from the swap list. There are certain considerations Optimizer follows while picking device pairs. The pair cannot be present on the same disk, cannot belong to the same DA, cannot belong to the same back-end port, cannot reside on the same disk loop, and cannot reside on the same dual initiator. As well, the RAID members of a device should reside in the same affinity group.
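These considerations amount to a filter over candidate pairs. A rough sketch, using illustrative field names rather than real Symmetrix attributes:

```python
# Device types the text excludes from the swap list (labels are illustrative).
EXCLUDED_TYPES = {"BCV", "DRV", "CKD", "SFS", "SNAP", "TDEV", "TDAT", "UNPROTECTED"}

def eligible_pair(dev_a, dev_b):
    """Apply the pairing considerations from the text to two device records.

    Each device is a dict with 'type', 'disk', 'da', 'port', and 'loop' keys.
    """
    if dev_a["type"] in EXCLUDED_TYPES or dev_b["type"] in EXCLUDED_TYPES:
        return False
    # The pair may not share a disk, DA, back-end port, or disk loop.
    for field in ("disk", "da", "port", "loop"):
        if dev_a[field] == dev_b[field]:
            return False
    return True

dev1 = {"type": "RAID5", "disk": "d01", "da": "16B", "port": 0, "loop": "c0"}
dev2 = {"type": "RAID1", "disk": "d07", "da": "16A", "port": 1, "loop": "c1"}
ok = eligible_pair(dev1, dev2)   # passes every check
```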
Conclusion
Though Optimizer provides performance optimization benefits proactively, its functionalities are
limited within disk groups. We have successive software features made available by EMC from
Symm 8 (VMAX®), which has additional functionalities. These are not limited within disk groups,
but can do performance optimization across disk groups and disk types with added intelligence.
These are discussed in detail in the following sections.
2. Virtual LUN Migration
Storage tiering manages both ‘capacity-based’ and ‘performance-based’ storage. Performance-based storage such as Enterprise Flash Drives is expensive but very efficient. With storage tiering, the storage disks can be used optimally to enhance performance by assigning performance-based storage to the most frequently accessed data and capacity-based storage to less frequently accessed data.
Symmetrix Virtual LUN migration technology allows transparent non-disruptive data migration
across storage tiers in a storage array between devices of different RAID types. It supports two
different types of migration: to configured space, and to unconfigured space. If the migration is to configured space, an existing and unused Symmetrix device is specified as the target, and the source device is migrated to it. Once migration is done, the target acquires the source’s storage space, and the target’s original data becomes inaccessible. The sizes of the source and target devices must match. In the case of migration to unconfigured space, a target RAID group is created from the free space available in the array, and migration is done. After migration, the storage space initially occupied by the source is freed up and returned to the free pool. As with Optimizer, demotion of the protection level is not permitted in VLUN migration; that is, a device with a higher protection level will not be migrated to a lower-level or unprotected space.
A Symmetrix Optimizer license key and the ‘symmigrate’ command, provided by Solutions Enabler 7.0 and Enginuity 5874 or above, are required for this feature to work.
The source volumes can be specified as a storage group, a device group, or in a file. The target is specified in terms of a physical disk group along with a RAID protection type. If migration is to configured space, corresponding devices are selected using the specified disk group and RAID-type criteria.
Migrations are managed as sessions, each with a label. Symmetrix supports a maximum of 16 sessions, active and passive combined. An active session is one in which a migration has been established but not yet completed. A passive session is one that has completed but is not yet terminated. A single engine can support a session of a maximum of 128 devices.
2.2 States of Migration
Regardless of migration type, there are six different states of migration. They are:
1. CreateInProg – a migration request has been initiated and creation is in progress
2. SyncInProg – data is being moved from source to target after creation
3. Synchronized – data has been copied from source to target
4. MigrInProg – this is the state when the target RAID group is migrated to primary
5. Migrated – migration has completed and is ready for termination
6. Failed – denotes failure of the migration
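The six states can be modeled as a simple transition table. Which transitions are legal is inferred from the descriptions above (e.g. that Failed is reachable from the in-progress states), so treat this as a sketch rather than the documented state machine:

```python
# Allowed transitions between the six migration states (a simplified model).
TRANSITIONS = {
    "CreateInProg": {"SyncInProg", "Failed"},
    "SyncInProg":   {"Synchronized", "Failed"},
    "Synchronized": {"MigrInProg", "Failed"},
    "MigrInProg":   {"Migrated", "Failed"},
    "Migrated":     set(),   # terminal: the session is terminated from here
    "Failed":       set(),
}

def advance(state, nxt):
    """Move a session to its next state, rejecting illegal jumps."""
    if nxt not in TRANSITIONS[state]:
        raise ValueError(f"cannot go from {state} to {nxt}")
    return nxt
```

For example, a session cannot jump straight from Synchronized to Migrated without passing through MigrInProg.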
The following sections detail migrations to configured and unconfigured space.
2.2.1 Migration to Unconfigured Space
Suppose that data is being migrated from a RAID-protected source device to the unconfigured
space in the disk drive. The protection type required for the target is specified. From the
available free space, a volume of specific RAID protection type is created, attached to the
source volume as a mirror, and migration starts. Once migration is done, whatever space is
occupied by the source is freed up and returned to the free pool.
For instance, in the following scenario, migration from a RAID-1 device to unconfigured space is considered. The target protection is specified as RAID-5.
This migration can be viewed in five stages:
STAGE 1 – Prior to Migration
Here, volume 00B0, of protection type RAID-1, is considered for migration.
Fig 2.1.1 Prior to Migration
As seen in the figure, mirror position 1 is occupied, and the rest of the mirror positions are free.
STAGE 2 – Secondary Mirror position occupied by the newly created Target Device
Fig 2.1.2 Secondary Mirror Attached
Migration state would be ’CreateInProg’. In this stage, an online configuration lock is placed on the array; after the target device is attached to the mirror position, this lock is released.
STAGE 3 – Synchronization between M1 and M2
Fig 2.1.3 Synchronization between Primary and Secondary
Migration state would be ‘SyncInProg’. Once synchronization is done, it will change to
‘Synchronized’.
STAGE 4 – Swapping M1 and M2
Fig 2.1.4 Swapping M1 and M2
After synchronization, mirror positions M1 and M2 are swapped. Thus, M1 (RAID-1) becomes
secondary and M2 (RAID-5) becomes primary. Configuration lock is again acquired on the
device for this swapping.
STAGE 5 – Detaching the Secondary
After swapping primary and secondary, the secondary is detached from the volume, i.e. mirror position 1, and the RAID group is deleted. The freed-up space is returned to the free pool.
Fig 2.1.5 Secondary Mirror is detached
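The five stages can be replayed on a toy mirror table for one volume. This only illustrates the ordering of events, not actual Enginuity behavior:

```python
def migrate_unconfigured(source_raid="RAID-1", target_raid="RAID-5"):
    """Walk the five stages above, yielding the mirror layout after each."""
    mirrors = {"M1": source_raid, "M2": None}   # Stage 1: prior to migration
    yield dict(mirrors)
    mirrors["M2"] = target_raid                 # Stage 2: target attached as M2
    yield dict(mirrors)
    yield dict(mirrors)                         # Stage 3: M1 -> M2 sync (layout unchanged)
    mirrors["M1"], mirrors["M2"] = mirrors["M2"], mirrors["M1"]  # Stage 4: swap
    yield dict(mirrors)
    mirrors["M2"] = None                        # Stage 5: secondary detached, RAID group freed
    yield dict(mirrors)

stages = list(migrate_unconfigured())
# After the final stage, the volume's primary mirror is the RAID-5 group.
```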
2.2.2 Migration to configured space
Another type of VLUN migration is where the migration is done to a configured space. This
configured space is a Symmetrix volume which is not mapped to any host. It is a logical device
which is not currently in use, but is configured. Hence, after migration, the data present (if any) in the target will be overwritten with the source device’s data, resulting in the loss of the target’s data.
The following scenario depicts migration of a RAID-1 [00B0] device to a RAID5 [00E0] device.
This migration can be viewed in five stages:
STAGE 1 – Prior to Migration
Here, volume 00B0, of protection type RAID-1, is considered for migration.
Fig 2.2.1 Prior to Migration
As seen in the figure, Mirror position 1 is occupied and the remainder are free.
STAGE 2 – Secondary Mirror position occupied by the newly created Target Device
Fig 2.2.2 Secondary Mirror Attached
In the above figure, it can be seen that a configuration lock is placed on the devices and the target’s primary mirror (RAID-5) is made Not Ready. It is then attached to volume 00B0 as secondary, and the configuration lock is released.
STAGE 3 – Synchronization between M1 and M2
Fig 2.2.3 Synchronization between Primary and Secondary
After attaching the primary mirror, synchronization of 00B0’s primary mirror and 00E0’s primary
mirror (which is the secondary mirror of 00B0 now) occurs. Migration state would be
‘SyncInProg’. Once synchronization is done, it will change to ‘Synchronized’.
STAGE 4 – Swapping M1 and M2
Fig 2.2.4 Swapping M1 and M2
After synchronization, mirror positions M1 and M2 are swapped. Thus, M1 (RAID-1) becomes
secondary and M2 (RAID-5) becomes primary. Configuration lock is again acquired on the
device for this swapping.
STAGE 5 – Detaching the Secondary
After swapping primary and secondary, the secondary is detached from the volume, i.e. mirror
position 1, and the RAID group is deleted. The freed up space is returned to the free pool.
Fig 2.2.5 Secondary Mirror is detached
After the secondary mirror is detached from 00B0, the target volume 00E0 is instantly VTOCed
to make the data inaccessible.
Conclusion
The command ‘symmigrate’ is used to perform this VLUN migration manually. This acts as a
base for the FAST features of VMAX which will be covered in the following sections.
3. FAST Disk Provisioning [FAST v1]
The previous sections described how Optimizer provides performance enhancements to DMX
arrays and above, within disk groups. We have also shown how VLUN Migration provides the flexibility to migrate across configured and unconfigured space.
EMC’s Fully Automated Storage Tiering (FAST) product works in an enhanced manner with
Virtual LUN migration as its core technology. The architecture and working model of FAST is
discussed in the following sections. The tagline of FAST is ‘Right data at the right place, at the right time’.
In traditional environments, customers may prefer an array full of Fibre Channel disks due to its
affordability (when compared to EFD) and better performance (when compared to SATA).
However, FAST technology promises to provide enhanced performance in a cost-effective way.
It uses mixed disk drive types in the array, combining the advantages of each disk type, while
mitigating their drawbacks. For example, if we replace an array full of Fibre Channel disk drives with a mixed array of SATA drives, FC drives, and EFDs, we reduce the footprint in the first place, as EFDs are space- and performance-efficient. Later sections in this article discuss how
FAST promises increased performance efficiency.
When customers plan for storage, they first consider the performance requirement, followed by
the capacity requirement. If they plan to use an all-FC hard disk drive configuration, they must
consider the throughput provided by the FC hard disk drive, which depends upon its RPM value.
Increasing IOPS throughput therefore means adding more disks. Only then can they address the capacity requirement. EFD drives have no moving parts and provide better throughput with shorter response times. Thus, for the sake of performance, the customer need not increase the disk count, as is the case with FC disk drives. EFD drives can thereby reduce the capacity requirement by at least 30%.
Fig 3.1 FAST Configuration
Figure 3.1 shows how using mixed drive types with FAST implementation can drastically reduce
the number of disks from 1000 to 150. At the same time, performance is greatly increased by
35-40%, footprint is reduced by 60-70%, and cost incurred is reduced by 40%.
The logic behind FAST is to dynamically tier data across storage tiers. That is, the most
frequently accessed data is kept in the EFD tier for better access. The least frequently accessed
data is kept in the SATA tier, which gives relatively lesser performance. Moderately accessed
data is placed in the FC tier. This logic is implemented automatically and dynamically with
FAST.
FAST v1 is implementation of FAST at thick device level, and FAST v2 is at thin device level.
Hence, v1 is called disk provisioning, and v2 is called virtual provisioning.
In the FAST environment, disks are categorized as high performance and high capacity. EFD is
called a high performance disk due to its efficient throughput whereas SATA is called a high
capacity disk as it is very inexpensive when compared to EFD, but provides relatively less
throughput. FAST continuously monitors I/O activity and moves data across the high-capacity and high-performance disks according to demand. This promotion/demotion of LUNs is driven by policies. Storage groups are created from a variety of disk types and
are assigned to these policies. Tiers are created that are characterized by distinct disk types
tagged with protection types.
3.1 FAST Terminologies
Storage Tier – this groups physical disks of the same technology type and protection type.
Storage Group – This is a component where we specify the list of LUNs on which the FAST
activity should be applied. Typically, it can be a set of devices used by an application or by a set
of applications.
FAST policy – this specifies the rule which is applied on the linked storage tiers.
Policy association – This is where the Storage group is associated with FAST policy. One
storage group can be associated with only one FAST policy. A priority value can be specified for
the storage groups on the assigned FAST policy.
SG percentage – SG percentage is specified for each tier. This SG percentage specifies the
upper limit of the percentage of storage each tier can provide. The percentage is calculated out
of the associated storage group’s capacity. The sum of the storage capacity provided by the
storage tiers should be equal to or greater than 100 percent. This ensures that the associated
storage group gets dispersed across the tiers.
For instance, tiers such as RAID-1 SATA [Tier-1], RAID-5 FC [Tier-2], and RAID-5 EFD [Tier-3]
can be created. These can be included in a FAST policy - Test_Policy with max_sg_percent
specified.
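The "sum to at least 100 percent" rule is easy to check programmatically. A minimal sketch, with illustrative tier names (the text's Tier-1/2/3 example, not real configuration objects):

```python
def policy_covers_group(max_sg_percent):
    """Check that a FAST policy's per-tier upper limits sum to at least 100%.

    max_sg_percent: dict mapping tier name -> maximum SG percentage for that
    tier. If the sum falls short of 100%, part of the storage group could not
    be placed anywhere.
    """
    return sum(max_sg_percent.values()) >= 100

# The Test_Policy example: three tiers whose limits together cover the SG.
ok = policy_covers_group({"R1_SATA": 50, "R5_FC": 40, "R5_EFD": 20})  # 110%
```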
The major objective of implementing FAST is to utilize the EFD drives in an efficient way, as
they provide the maximum performance. EFDs take less than a millisecond for read/write
access. FC takes around 8 milliseconds. Meanwhile, SATA takes around 15 milliseconds.
However, SATA is relatively cheaper and consumes less power. The rate at which the data is
accessed is continuously monitored by FAST. With these statistics, it calculates the value of the
data, and places it in the corresponding tier accordingly. The latest Virtual LUN technology can
even move devices to the same technology type but different protection type.
Fig 3.2 Typical FAST Setup
As seen in Figure 3.2:
A tier is nothing but a collection of disks of similar technology, size, and speed.
A Symmetrix tier can be shared across multiple policies. However, care should be taken
in providing sufficient space for tiering.
A FAST policy can have more than one SG associated with it, provided a priority value is tagged to every associated SG.
One SG cannot be shared with another FAST policy.
Each FAST policy can hold up to three tiers.
Users can choose the Symmwin disk groups on which a FAST policy has to be applied and put them in Symmetrix tiers. There can be disk groups in a Symmetrix which are not involved in any FAST configuration at all.
A storage group’s priority decides which storage group has more privilege in utilizing a particular tier. For example, if one SG has priority ‘low’ and another has priority ‘medium’, the one with ‘medium’ will enjoy more privilege on the EFD tier than the one with ‘low’.
A FAST policy should have a minimum of one tier and a maximum of three tiers. Each tier should have a maximum SG percentage value assigned to it.
3.2 FAST Parameters
Mode of Operation – FAST can be either in automatic mode or user approved mode. In the
former mode, device movement is decided by the FAST algorithm and executed automatically,
whereas in the latter, it waits for user approval after deriving a movement plan.
Workload Analysis Settings – There are two values associated with this setting – initial period
and analysis period. Initial period specifies the amount of time FAST should wait before starting
its analysis. Analysis period specifies how much statistical history should be considered for a movement decision.
Time Windows – There are two types of time windows: performance and move. The performance time window specifies the time period during which I/O activity statistics are monitored. The move time window specifies the time period during which movement activity can happen. Always ensure that movement activity does not hamper the routine workload. Also, the performance time window should include the period of maximum workload, so that the statistics derived are genuinely helpful in enhancing performance.
3.3 FAST Algorithms
There are two different algorithms on which FAST’s decisions are based. FAST continuously and periodically generates a configuration change plan by applying the appropriate algorithm to a set of devices, depending upon their workload.
3.3.1 Performance-Based Algorithms
The performance-based algorithm helps achieve better performance by utilizing the faster tiers. It comprises two algorithms.
EFD Promotion/Demotion Algorithm: This models the performance of the different disk types and suggests a configuration change plan accordingly. Busy devices are promoted to flash drives to achieve higher performance, while idle devices are demoted to other tiers. An EFD performance score is calculated for each device to estimate the performance improvement if the volume were moved to EFD; the score depends upon the workload on that particular device.
Inter-tier FAST FC/SATA: This algorithm works based on the disk service time. With this value,
source/target candidates are chosen. This is a continuous process.
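EMC does not publish the scoring formula, so the sketch below uses an assumed score (estimated service time saved per second if a device's workload moved from an FC tier to EFD) purely to illustrate how promotion candidates could be ranked. The device names, IOPS figures, and latency values are illustrative:

```python
def efd_score(reads_per_sec, writes_per_sec, hdd_ms=8.0, efd_ms=0.5):
    """Assumed 'EFD performance score': estimated service time (ms) saved per
    second if this workload moved from an FC drive to EFD. The weighting is an
    assumption; EMC's real scoring model is not described in the text."""
    iops = reads_per_sec + writes_per_sec
    return iops * (hdd_ms - efd_ms)

def promotion_plan(devices, efd_slots):
    """Promote the highest-scoring devices into the available EFD slots."""
    ranked = sorted(devices, key=lambda d: efd_score(d["r"], d["w"]), reverse=True)
    return [d["name"] for d in ranked[:efd_slots]]

plan = promotion_plan(
    [{"name": "0100", "r": 900, "w": 300},   # busiest device
     {"name": "0101", "r": 50,  "w": 10},    # nearly idle
     {"name": "0102", "r": 400, "w": 600}],
    efd_slots=1)
```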
3.3.2 Capacity-Based Algorithms
The capacity-based algorithm ensures that the storage group has its devices dispersed across
the Symmetrix tiers as specified by the user policy. If this compliance condition is not met, the
capacity-based algorithm takes action to bring the group into the compliant state, employing
swaps or moves to achieve compliance.
FAST prefers moves over swaps. However, if a swap is enforced, or if a move is not possible,
FAST performs swaps with the help of DRV devices.
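The compliance check and the move-versus-swap preference can be sketched as follows. The policy percentages, tier names, and helper functions are hypothetical illustrations, not the actual FAST implementation.

```python
# Minimal sketch of capacity-based compliance. Tier names, percentages,
# and function names are assumptions for illustration.

def is_compliant(usage_gb: dict, policy_pct: dict, total_gb: float) -> bool:
    """Compliant when no tier's usage exceeds its policy percentage limit."""
    return all(usage_gb.get(tier, 0) <= pct / 100 * total_gb
               for tier, pct in policy_pct.items())

def next_action(usage_gb, policy_pct, total_gb, target_has_space: bool) -> str:
    """FAST prefers a move; a swap (via DRV devices) only when no space."""
    if is_compliant(usage_gb, policy_pct, total_gb):
        return "none"
    return "move" if target_has_space else "swap"

usage = {"EFD": 120, "FC": 300, "SATA": 80}    # GB consumed per tier
policy = {"EFD": 20, "FC": 60, "SATA": 100}    # max % of group per tier

# EFD limit is 20% of 500 GB = 100 GB, but 120 GB is in use, so the
# group is out of compliance and a move is planned.
print(next_action(usage, policy, total_gb=500, target_has_space=True))  # move
```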
Conclusion
FAST swap/move activities follow the same pattern as Virtual LUN Migration. FAST DP can be
accessed from the desktop client, SMC, Unisphere, or SYMCLI. Since FAST works in nearly the
same way as Optimizer, only the concept and logic behind FAST have been discussed in this
section.
4. Federated Tiered Storage
4.1 Introduction
Federated Tiered Storage (FTS) is a new feature of Enginuity 5876 that allows SAN-provisioned
external storage arrays to be attached to the Symmetrix VMAX array. With this connectivity, the
externally provisioned storage can be monitored, replicated, and administered as if it were local
storage. The added advantage is that even if a feature is not supported by the external array's
own code, it can easily be applied through this VMAX-facilitated connectivity.
4.2 FTS Setup
FTS does not require special hardware; support is provided by the Enginuity code. All that is
required is connectivity between the VMAX array and the external array. Cabling is the same,
but the emulation required is different. The section below discusses the various terms
associated with an FTS setup.
DX Directors: This is a new type of emulation required for FTS. It is a Disk Adapter emulation
that facilitates connecting external disks. Apart from this additional capability, DX directors have
the same capabilities as the traditional Disk Adapters in the Symmetrix family of arrays.
eDisks: LUNs provisioned from the external array are seen by the VMAX as eDisks.
External Disk Group: This is a disk group created in the VMAX exclusively for externally
provisioned disks. Disk group numbering for external disks starts at 512.
Virtual RAID Group: eDisks are protected virtually. Their protection type is determined not by the
VMAX but by the external array. Hence, within the VMAX, they are placed in this virtual RAID
group, which is unprotected.
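A toy model ties the terms above together: external disk groups are numbered from 512 upward, and the eDisks they hold carry no local protection. The class, field names, and the WWN value are hypothetical illustrations, not VMAX object definitions.

```python
# Toy model of the FTS objects described above (names are illustrative).

class ExternalDiskGroup:
    """External disk groups are numbered from 512 upward."""
    FIRST_EXTERNAL_GROUP = 512

    def __init__(self, number: int):
        if number < self.FIRST_EXTERNAL_GROUP:
            raise ValueError("groups below 512 are reserved for local disks")
        self.number = number
        self.edisks = []

    def add_edisk(self, wwn: str):
        # The external array provides the real protection, so the VMAX
        # places each eDisk in an unprotected virtual RAID group.
        self.edisks.append({"wwn": wwn, "raid": "unprotected"})

group = ExternalDiskGroup(512)
group.add_edisk("600009700001957000005330")  # hypothetical eDisk WWN
```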
Fig 4.1 FTS Setup
4.3 Benefits of FTS
- Both local and external arrays are managed from the VMAX using its management tools.
- Data replication and migration features that are not available at the external array's own
code/architecture level can now be supported under VMAX provisioning.
- Performance features such as FAST VP can be applied to the external array as well.
- Either existing data on the external array or free space on the external array can be used
locally from the VMAX array.
- With external array provisioning, the VMAX gains additional storage capacity. In addition,
the life and value of the external storage array are extended.
4.4 Various Modes of FTS
External Provisioning Mode: In external provisioning mode, the eDisks are added to a specific
external disk group, and LUNs can be created from this disk group. The disks are VTOCed
before use; hence, any data that existed on the devices is lost.
Encapsulation Mode: There are two encapsulation modes – thick mode and thin mode. In both
modes, when eDisks are created, the external disk group and the virtual RAID groups are
created accordingly. In thick mode, the devices are then ready for use, with their data preserved.
In thin mode, data devices (TDATs) must be created in the VMAX: thin pools are formed from
the TDATs carved out of the eDisks, and thin devices (TDEVs) are bound to the pools. The data
can then be accessed through the thin devices.
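The per-mode workflows above can be summarized side by side. The step labels are paraphrased from the text, and the function is purely illustrative; it is not a real SYMCLI or Unisphere operation.

```python
# Sketch of the FTS mode workflows described above (labels paraphrased
# from the text; the function itself is a hypothetical illustration).

def fts_steps(mode: str) -> list:
    """Return the ordered steps for a given FTS mode."""
    steps = ["add eDisks", "create external disk group + virtual RAID group"]
    if mode == "external":
        steps += ["VTOC disks (existing data lost)", "create LUNs"]
    elif mode == "thick":
        steps += ["devices ready, data preserved"]
    elif mode == "thin":
        steps += ["create data devices (TDATs)", "form thin pools",
                  "bind thin devices (TDEVs)", "access data via thin devices"]
    else:
        raise ValueError("unknown FTS mode")
    return steps
```

Laid out this way, the key distinction is visible at a glance: only external provisioning mode destroys existing data, while both encapsulation modes preserve it.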
Conclusion
This section gave an overview of Federated Tiered Storage. With this knowledge, readers can
understand how an external tier is formed from an FTS configuration in FAST VP, which is
discussed in the next section.
5. FAST Virtual Provisioning [FAST v2]
FAST v1 and FAST v2 are functionally the same. The major difference between the two is that
v1 supports thick volumes whereas v2 supports thin volumes. FAST v2 takes advantage of the
Symmetrix Virtual Provisioning features. Symmetrix Virtual Provisioning provides efficient
utilization of resources and scalability as and when required, and eases the allocation and
de-allocation of storage. Another major benefit is support for Federated Tiered Storage: FAST
Disk Provisioning [v1] allows a maximum of three tiers associated with a FAST policy, whereas
FAST Virtual Provisioning can have one additional tier exclusively for the externally provisioned
array. Thus, the performance benefits of the external array can also be incorporated into the
local VMAX.
The mode of operation, algorithms, tiering modes, and underlying logic are the same in both
FAST v1 and FAST v2. In FAST v1, thick devices are selected, placed in storage groups, and
associated with the FAST policy for tiering. In FAST v2, thin and data devices are created; a
thin pool is created from the data devices, and the thin devices are bound to it. The FAST setup
is then created just as in FAST v1: thin devices are placed in the storage group and associated
with the FAST policy.
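The two setup flows can be placed side by side; the step labels below are paraphrased from the text (they are not SYMCLI commands), which makes it clear that only the device-provisioning steps differ.

```python
# Side-by-side sketch of the FAST v1 vs. v2 setup flows described above.
# Step labels are paraphrased from the text, not actual SYMCLI commands.

FAST_V1 = ["select thick devices",
           "place devices in storage group",
           "associate storage group with FAST policy"]

FAST_V2 = ["create data devices (TDATs)",
           "create thin pool from data devices",
           "bind thin devices (TDEVs) to pool",
           "place thin devices in storage group",
           "associate storage group with FAST policy"]

# The final policy-association step is identical in both flows; only the
# earlier provisioning steps differ between thick and thin devices.
```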
6. Conclusion
This article discussed the various performance features provided by the Symmetrix software
bundles in both the DMX and VMAX architectures. As seen, while these features do not require
any special hardware, they greatly enhance the performance of the storage array. This article
provides strong evidence that FAST is a real performance boon for VMAX storage arrays.
EMC believes the information in this publication is accurate as of its publication date. The
information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION
MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO
THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED
WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an
applicable software license.