ioFABRIC Vicinity Architecture Overview

This document provides a high-level technical overview of the ioFABRIC Vicinity product and assumes readers are familiar with system and storage concepts and management.

ioFABRIC Inc. | www.ioFABRIC.com | Document Revision 1.8.123016


Table of Contents

About ioFABRIC Vicinity
Types of Deployment and Use Cases
    Unify Existing Infrastructure
    Simplify Complexity, Deployment, and Management
    Scale Performance
        More Servers with Storage
        Easy Flash Integration
        More Paths to Data
    Scale Capacity
        Direct and Network Storage
        Cloud
        Storage Migration
Technology Overview
    Components and Terminology
    How Quality of Service Works
    How Volumes Work
    Access Method
    Network and Device Layout
    Supported Storage Devices


About ioFABRIC Vicinity

Vicinity is an open, programmable data fabric that uses the storage you have to provide the data storage and management services you want, addressing the IT challenges of storage complexity, growth, point solutions, and performance. It uses industry-unique, policy-driven automation to deliver efficiency and business agility in the data center.

Vicinity aggregates the performance and capacity of any storage and delivers data storage and management services based on volumes or files. The services are delivered across multiple access points at consistent and continuous service levels. All this happens dynamically as workloads or conditions change, allowing you to move or add storage resources to address problems with no application downtime. In a Vicinity environment, you can seamlessly add the newest storage technology with no manual migration and with the assurance that only your most important data is using your most expensive storage media.

This is automatically driven by our industry-unique Quality of Service (QoS) management software. Rather than individually configuring performance media and capacity on each storage silo, define what your applications need from storage in terms of performance, capacity, and data protection on a single dashboard, and let Vicinity provision and manage the service for you. This creates an adaptive storage environment that maintains service levels as compute and data grow.


Types of Deployment and Use Cases

Vicinity will actively manage data to maintain service policies across any storage systems or devices. This unique ability substantially lowers costs and is applicable in a large number of use cases, including the following:

Unify, share, and manage all storage from a single interface

Vicinity provides a single integrated management interface for storage. It actively manages data placement and data flows across diverse resources to maintain service levels as experienced by applications. This design is in contrast to how IT storage problems are typically solved on a case-by-case basis, with the solutions creating silos that operate inefficiently and are managed independently, resulting in increased cost and complexity. The Vicinity design unlocks these silos by monitoring and reacting to the specific service level provided by each storage resource. Vicinity intelligently selects which set of resources is appropriate to each application in real time, including where data should be placed or migrated to among the resources, and where storage service access points are provided. Vicinity enforces service levels delivered to applications by automatically responding to changing workloads or resource behaviour. These tasks become increasingly complex as the infrastructure grows; Vicinity's active management frees up your time for higher-level tasks.

Simplify complexity, deployment, and management

From a single interface, Vicinity provides application service levels using available storage resources in a globally efficient and optimized manner. This works across any hyperconverged, converged, containerized, or traditional storage systems, bringing the advantages of the most efficient storage solutions to all environments. Vicinity presents storage to each application the way it expects, and behind that manages all data the same way.

Vicinity does not lock you in to a single architecture or vendor for your compute or storage environment. You can improve efficiency and agility across your infrastructure while Vicinity hides the complexities.

Hypervisors are an excellent way to simplify the management of compute and network environments. Vicinity is a hypervisor for storage and can be combined with compute hypervisors and network virtualization to create a full hyperconverged environment, or to unify storage across those and other environments.



Scale Performance

More servers with storage

Vicinity places data across storage resources to achieve the required service levels, so as you add servers with storage resources, you increase the total capacity and performance of your infrastructure. They can be scaled independently: if you want more performance, add performance storage near the compute; if you want more capacity, add it anywhere. Scale up servers cost-effectively with SSDs before scaling them out or using an all-flash array.

Easy flash integration

Vicinity provides an easy and transparent way to integrate SSDs or flash arrays into the environment. Add flash while applications are running, and performance-starved workloads will immediately use it for active data to maintain service levels. Designated flash is used as write-preference storage to improve performance, endurance, and overall efficiency.

More paths to data

There are two factors that affect the total IOPS and bandwidth available from a storage service: the number of access points the application is using, and the distribution of data near each access point. Since Vicinity presents a shared-disk architecture, you can dynamically create access points to provide a consistent view of the data. Increasing the number of access points linearly increases available performance. Each access point ensures service levels are provided and actively corrects data placement to resolve issues. This may result in data being migrated near newly created access points to remain in their vicinity and thereby maintain performance policies.
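The linear-scaling claim can be put in rough arithmetic terms. The following is our own idealized sketch, ignoring contention and data placement effects; it is not a measured result:

```python
# Back-of-envelope model of the linear-scaling claim above: aggregate
# performance grows with the number of access points serving a volume.
# This is an idealized illustration, not Vicinity's actual behaviour.
def aggregate_iops(iops_per_access_point, access_points):
    """Total IOPS available when every access point serves nearby data."""
    return iops_per_access_point * access_points

# Adding a third access point raises the performance ceiling accordingly.
print(aggregate_iops(50_000, 2))  # 100000
print(aggregate_iops(50_000, 3))  # 150000
```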

Scale Capacity

Direct and network storage

Vicinity amalgamates capacity in the entire environment across any SAN, NAS, and directly-attached storage. Vicinity uses capacity storage for stale data and snapshots, which are less sensitive to performance constraints.

Cloud

Vicinity supports cloud storage, both as a general resource and as overflow for a volume. The overflow capability allows stale data to migrate to the cloud when other resources are full, and then to migrate back when general capacity is increased, reducing operating expenses. The overflow option allows near-peak efficiency without the risk of running out of capacity.
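The overflow behaviour can be sketched as a simple placement decision. The function and its return values are our assumptions for illustration, not Vicinity's actual logic:

```python
# Illustrative sketch of cloud overflow as described above: stale data
# prefers local capacity storage and spills to the cloud only when local
# resources are full. Names and behaviour are our assumptions.
def place_stale_data(local_free_gb, needed_gb, cloud_overflow_enabled):
    """Choose where a stale block lands under the overflow policy."""
    if local_free_gb >= needed_gb:
        return "local"
    if cloud_overflow_enabled:
        return "cloud"
    return "out-of-capacity"

print(place_stale_data(100, 10, True))   # local
print(place_stale_data(5, 10, True))     # cloud
```

When general capacity is later increased, the same preference order naturally pulls data back out of the cloud.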

Programmable data fabric

Vicinity presents simple interfaces to manage your data. The management GUI is built using only our RESTful API, and a CLI and Python library are also available to help you integrate data and storage management functionality into your DevOps code and workflows.
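As a sketch of what driving the control plane from DevOps code could look like, the snippet below builds a request a script might send to the control node. The endpoint path, host, and field names are hypothetical; consult the documented REST API for the real schema:

```python
import json

# Hypothetical example of preparing a REST call to the control node.
# The URL and JSON fields below are illustrative assumptions, not the
# documented ioFABRIC API.
CONTROL_NODE = "https://control-node.example:8443"

def volume_request(name, policy, size_gb):
    """Build the URL and JSON body a script might POST to create a volume."""
    url = f"{CONTROL_NODE}/api/v1/volumes"
    body = json.dumps({"name": name, "policy": policy, "size_gb": size_gb})
    return url, body

url, body = volume_request("build-cache", "gold", 500)
print(url)
print(body)
```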


Technology Overview

Vicinity establishes a distributed data fabric with storage resources attached to physical or virtual appliances, and uses them to deliver data storage and data management services according to specified requirements for each application. Vicinity creates and manages service levels by intelligently distributing and moving data among different storage locations to place it in the vicinity of each application, which is the origin of the product name. Such data continues to be available using traditional storage services (e.g. iSCSI, SMB) and can be presented at one or more locations. Vicinity provides data availability and data services behind the traditional storage services, making it easy to use and share features provided by any storage devices (performance, capacity, compression, protection, etc.). Driven by service policies and objectives, Vicinity's automation creates an environment where storage complexities and bottlenecks are removed.

Components and terminology

At its core, Vicinity is an automation system with Quality of Service-level enforcement for any storage device or system. The foundation is a highly dynamic distributed block engine that uses any direct-attached storage, SAN, NAS, and cloud service to present dynamic volumes. This section introduces our vocabulary and gives an overview of the components of Vicinity and their relationships.

A Vicinity installation consists of one Vicinity control server and one or more Vicinity data servers, where servers are commonly referred to as nodes. Each node runs Vicinity software on a real server, a virtual server, or a pre-packaged virtual appliance.

The control node is responsible for orchestrating and monitoring the data nodes, including arranging for data services and enforcing service levels. It keeps information about Vicinity volumes, services, and the state of the system, but it does not participate in the data flow or data path, and does not have information or metadata about data; the metadata is distributed across the data nodes. Vicinity comes with a graphical management interface we call the Dashboard, also served by the control node for convenience. The Dashboard and CLI control interfaces are built using a Python library interface and a documented control plane interface, which is a REST API served by the control node.

All data nodes interoperate regardless of platform or configuration, which means any Vicinity-enabled environment can be easily extended to unify more resources and simplify overall management. Each data node acts as a virtual controller for storage resources, manages data service levels, and supports access to data. The control node configures a dedicated virtual array for each volume that spans one or more virtual controllers, containing only appropriate storage resources for the requested service levels. The data nodes supporting a virtual array communicate with each other to ensure the health and consistency of the volume, and operate independently from the control node until they need more performance or capacity resources than are configured. At that point, the control node uses its global view of activity and resources to provide an improved configuration.

Data nodes are also responsible for exposing access points to Vicinity volumes, which are addresses where the data in a volume can be accessed; for example, a device path or an iSCSI target specification. Each volume can have access points on one or more data nodes at the same time, and the configuration can be changed while the volume is in use. Volumes can be specified with SCSI semantics, so Vicinity can be used to convert diverse commodity storage into a unified SCSI volume across multiple data nodes.

The control node is responsible for orchestrating storage services that vastly simplify the typical provisioning and configuration tasks in traditional storage infrastructure. Orchestration may extend into servers that are not data nodes, allowing you to request things like: "I want a file system to be mounted on server X". This starts as a volume on one or more Vicinity data nodes, and, through orchestration, it is turned into a specific file system exported with a specific protocol to server X from a data node (or several data nodes in high availability configurations) and then mounted on the server. The relationship between server X and the Vicinity data node may be indirect; for example, a Windows Server exporting an SMB 3.0 scalable file share built on shared volumes provided by Vicinity.

An application is the only producer and consumer of persisted data from a storage service; for example, a database, a web server, a VM, or a multi-user file share. When the delivered storage service is a volume, the typical application (unless it is a database that wants to use raw volumes) is often a storage application, such as a file system, which presents itself as a storage service to its own applications.

How Quality of Service works

Vicinity uses policy-driven automation to maintain service levels, including traditional Quality of Service (QoS). This means Vicinity takes corrective action to maintain a particular service level for every storage service that Vicinity provides.

The service supervision happens at three levels:

1. A real-time analysis of the service level delivered given the resources available, with immediate corrective action if necessary.
2. A very frequent (seconds to minutes) analysis and, if necessary, correction of the data flows and set of storage resources available to each service.
3. A scheduled (minutes to hours) check for other conditions that trigger actions.
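The three tiers can be sketched as separate checks running at different cadences. The function names, intervals, and corrective actions below are our illustration, not the actual implementation:

```python
# Toy sketch of the three supervision tiers described above. All names
# and actions are illustrative assumptions, not Vicinity's code.
def tier1_realtime(latency_ms, target_ms):
    """Tier 1: immediate correction when a service level is violated."""
    return "migrate-data" if latency_ms > target_ms else "ok"

def tier2_frequent(resource_pool, needed):
    """Tier 2 (seconds to minutes): adjust the resources backing a service."""
    return "add-resource" if len(resource_pool) < needed else "ok"

def tier3_scheduled(conditions):
    """Tier 3 (minutes to hours): collect other conditions that trigger actions."""
    return [name for name, triggered in conditions.items() if triggered]

print(tier1_realtime(12.0, 5.0))                    # migrate-data
print(tier2_frequent(["ssd-1"], needed=2))          # add-resource
print(tier3_scheduled({"scrub-due": True, "rebalance": False}))  # ['scrub-due']
```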

The real-time corrective action is typically to migrate data that is not in compliance with the required service level to some storage location where it will be in compliance. For example, if the service level problem has to do with data protection, then data is typically copied to maintain compliance, allowing it to heal around a storage device or node failure. Vicinity's actions to maintain service levels, including data migration or healing, are completely transparent to applications.

A policy is a set of objectives and requirements that describe the service level associated with a storage service when it is created. The minimal policy contains performance, capacity, and data protection parameters. Objectives are metrics Vicinity will strive to maintain but that may not be provided at all times, e.g. a performance level, whereas requirements are provisioning characteristics that must be met by the service, e.g. using encryption.

The three performance parameters Vicinity supports are latency, IOPS, and bandwidth, of which only one at a time can be the primary optimization criterion. Each criterion has a target value, which is the average performance value Vicinity will strive to provide, and two other parameters, minimum and maximum values, that affect storage resource eligibility and performance throttling for the service. Changing these performance parameters in a policy is the primary mechanism for adjusting the behaviour of a storage service to, for example, make it faster or slower.

The capacity parameters include whether the capacity is fixed or not, and whether the service is thin or thick provisioned. Thin provisioned storage services allocate storage for data as it is first written, and thick provisioned storage services pre-allocate storage upon creation.

The data protection parameters include the resilience factor, which is the number of copies of data, with the idea that the storage service will continue even if all but one copy of the data is lost for any reason.

In addition, other storage features, such as specific fault domain type selection or encryption, are also available by policy.
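Putting the pieces together, a policy could be modeled as a record like the one below. The field names are ours for illustration, not ioFABRIC's actual schema:

```python
# Hypothetical policy record illustrating the parameters described above.
# Every field name here is our assumption, not ioFABRIC's schema.
policy = {
    "performance": {
        "primary": "latency",                  # latency | iops | bandwidth
        "latency_ms": {"target": 5, "minimum": 1, "maximum": 20},
    },
    "capacity": {"fixed": False, "provisioning": "thin"},
    "protection": {"resilience_factor": 2},    # number of data copies
    "requirements": ["encryption"],            # must be met at all times
    "objectives": ["performance"],             # best effort, may lapse briefly
}

def survives_copy_loss(policy, copies_lost):
    """Service continues as long as at least one data copy remains."""
    return copies_lost < policy["protection"]["resilience_factor"]

print(survives_copy_loss(policy, 1))  # True
print(survives_copy_loss(policy, 2))  # False
```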

Vicinity comes with predefined policies that are suitable for many environments. For quick use, new storage services can start out relying on the predefined policies, and you can either assign a new policy to the storage service or change the definition of the policy that is already assigned.


How volumes work

Volumes are the fundamental storage service provided by Vicinity, and all other storage services are orchestrated on top of volumes. A volume is a virtual disk created by Vicinity using a dedicated virtual array. A volume can be exposed on one or more data nodes with access points for the volume, and new access points can be dynamically added to provide increased data availability or performance. Each data node in a virtual array may provide an access point for the volume's data.

Each volume is managed independently and uses a selected subset of storage resources matching the policy for its data. Data is placed and migrated to adhere to all policies, including the fault domain policy that protects against node failure (two copies of data cannot exist on the same node). The subset of eligible storage resources may be changed at any time, allowing for adequate recovery time to ensure the health of the volume.

Data copies within a volume move independently of each other. For example, high performance reads require only one copy to be on high performance media; other copies can be anywhere, but are typically on more economical (slower) media. In contrast, high performance writes require all copies to be on high performance media, but on different nodes to ensure access and availability.
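These placement rules can be captured as a small validity check. This is our simplification for illustration, not Vicinity's placement engine:

```python
# Toy model of the copy-placement rules described above: the fault-domain
# rule forbids two copies on one node; a read-optimized volume needs one
# copy on fast media, a write-optimized one needs all copies on fast media.
# A simplification of ours, not Vicinity's actual placement logic.
def placement_ok(copies, write_optimized):
    nodes = [c["node"] for c in copies]
    if len(nodes) != len(set(nodes)):      # fault-domain violation
        return False
    fast = sum(1 for c in copies if c["media"] == "fast")
    return fast == len(copies) if write_optimized else fast >= 1

# One fast copy satisfies reads; writes would need both copies on fast media.
copies = [{"node": "a", "media": "fast"}, {"node": "b", "media": "slow"}]
print(placement_ok(copies, write_optimized=False))  # True
print(placement_ok(copies, write_optimized=True))   # False
```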

Volumes can be snapshotted, which creates a read-only image of a volume at a particular point in time. Snapshots or volumes can also be cloned, which creates a copy-on-write image.
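Copy-on-write cloning can be illustrated with a minimal toy model: a clone shares all unwritten blocks with its parent and stores only the blocks written to it. This is our sketch, not Vicinity's implementation:

```python
# Minimal copy-on-write sketch of clones as described above (a toy model,
# not Vicinity's implementation). The parent is treated as read-only once
# cloned, like a snapshot.
class Image:
    def __init__(self, base=None):
        self.blocks = {}      # only blocks written to this image
        self.base = base      # parent snapshot/volume, if any

    def read(self, i):
        if i in self.blocks:
            return self.blocks[i]
        return self.base.read(i) if self.base else None

    def write(self, i, data):
        self.blocks[i] = data  # parent image is never modified

    def clone(self):
        return Image(base=self)  # shares unwritten blocks with the parent

base = Image()
base.write(0, "v1")
clone = base.clone()
clone.write(0, "v2")     # copy-on-write: only the clone sees the change
print(base.read(0))      # v1
print(clone.read(0))     # v2
```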

Access method

Applications see Vicinity volumes and other storage services presented as they expect: iSCSI volumes, native block devices, or file systems on top of block devices. Vicinity volumes are always ready for use as cluster shared disks with multiple access points, as needed by clustered hypervisors and databases.

Client access tools for storage

ioFABRIC makes CLI tools available for IT admins to use directly or to provide to end users, making it easy to self-provision data storage and data management functions. This increases agility and empowerment, and reduces the workload on IT staff while IT retains control of central provisioning policies. These are client tools only from the perspective of Vicinity, as they would usually be used on application servers, home servers, or project-specific systems to make storage conveniently available.


Network and device layout

TheminimalVicinity setup thatprovidesmediafaultprotection looks like Figure 1,

where Vicinity is deployed on a single virtual appliance. It runs as a combined control

and data node, which controls at least two storage resources to ensure fault protection:

Keep in mind that to use this configuration, the single node setup option must be

selected for every storage service you create.

The minimal Vicinity setup that provides node-level fault protection requires two data nodes and a separate control node for quorum in split-brain situations. The control node can reside anywhere in the network, even as a virtual appliance.
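The reason a separate control node is needed for quorum can be shown with a simple majority-vote check (a toy model of ours, not Vicinity's clustering code):

```python
# Toy majority-vote check motivating the separate control node above: a
# network split between two data nodes leaves 1 vs 1 with no majority;
# adding the control node as a third voter breaks the tie.
def has_quorum(reachable_voters, total_voters):
    """A partition wins only if it can reach a strict majority of voters."""
    return reachable_voters > total_voters // 2

print(has_quorum(1, 2))  # False: two-node split-brain, no winner
print(has_quorum(2, 3))  # True: data node plus control node form a majority
```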


A unified Vicinity setup looks like Figure 3, where Vicinity is deployed on many servers. Each server can be different, including the variety of storage resources being managed. In a hyperconverged configuration, each of the servers is a hypervisor.

Supported storage devices

All storage devices are supported. This includes internal drives, external drives, external disk arrays (SAN), JBODs, RAID groups, SSDs and other NVM (flash), RAM disks, virtual devices, local and remote (on NAS) files, and any other storage that appears as a file, object, or disk device.