NetApp Solution Brief · Rethinking Data Protection for In-Memory Databases: Cloud-Enabled SAP HANA Backup · 2019-03-16

Solution Brief

As the newest waves of database technologies move into the mainstream, they are already positioned to become the standard for business applications. In-memory data-bases are rapidly gaining market share from the existing disk-bound relational database market. In-memory databases have proved to be the answer to the question of how today’s CPU-based and memory-based architectures can deliver both transactional and analytic capabilities.

SAP HANA is the leading in-memory platform. It consolidates both transactional and analytical workloads into a single database platform. Beyond simply consolidating data in the database, however, SAP HANA’s compressed columnar storage model organizes the data very differently from the “classic databases” of old. That difference increases the internal data change rates and their associated backup volumes.

One effect is that the task of managing incremental changes and backup volumes becomes both difficult and critical at once. This transformation pushes traditional backup concepts up to or even beyond their limits and requires rethinking approaches to data protection. Organizations must find new ways to perform these business-critical data protection tasks instead of returning to old backup methodologies that are inefficient and expensive.

Can You Apply Traditional Backup Concepts to SAP HANA?
Yes, you can still apply traditional backup concepts, but that decision comes at a noticeable cost.

With the traditional approach, a backup agent next to the SAP HANA system collects and packages all the data, and the backup software transports the collected data to long-term storage. This concept works, and it is widely used. But it can work only as long as the data change rates are manageable, as they were with classic disk-bound databases.

With columnar in-memory stores, however, this backup concept quickly reaches its limits. The limitation occurs even with the complex and cumbersome incremental or differential concepts that are currently available in SAP HANA. Backups that use the classic backup approach for SAP HANA have a negative performance impact on live production systems. In addition to the extra resources that are needed, these backups also require much more time to complete, which puts the RTO at risk.

IT departments have a few alternatives, none of them ideal:

• Negotiate a longer RTO with business owners who have SAP HANA systems.
• Explain to business owners that their most valuable systems must run with reduced data protection to mitigate performance side effects.
• Invest significantly in IT infrastructure, such as new networks and long-term datastores, to handle the traffic and data volume.

Advantages at a Glance
• Eliminates the SAP HANA backup problem:
  – Reduces a multiple-hour process to just seconds
  – Accomplishes the task without creating side effects
  – Gives you control of your SAP HANA recovery time objective (RTO)
• Delivers recovery within minutes
• Offers end-to-end integration of data protection:
  – Provides visibility in SAP HANA Studio and Cockpit
  – Works with NetApp® SnapCenter® backup management software
• Handles SAP HANA backup catalog, including log backup
• Provides a path to offload backup to the cloud

How You Benefit
• Implement the data protection goals defined by your business.
• Unleash the power of your servers for SAP HANA database operations instead of running backup software and congesting the network with backup traffic.
• Shift investments from backup infrastructure to productive components of the business.
• Gain flexibility and choice from leveraging cloud technologies for long-term archival storage.

Rethinking Data Protection for In-Memory Databases: Cloud-Enabled SAP HANA Backup


The NetApp Alternative
Is there a data protection solution that is less complex and has a simpler infrastructure? And is there a storage and backup architecture that reduces data movements during backups? Scalable storage snapshots are the answer. These copies are an efficient and elegant way to implement lightning-fast data protection for all SAP HANA workloads.

NetApp FAS and All Flash FAS platforms provide a persistent data storage area for SAP HANA in-memory databases. These storage systems are aware of fine granular data changes. They can create and save multiple versions of data in a way that makes efficient use of time and space and that avoids performance impact regardless of scale. They enable backups with a fraction of the traditional space requirements and in seconds rather than hours.

The result is that the database data is regularly preserved with-out burdening the SAP HANA servers with the tasks of handling and transporting data. A storage snapshot–based backup does not affect the SAP HANA server, and the backup is completed almost instantly. Because NetApp FAS and All Flash FAS systems are aware of these internal data changes, the storage system is able to transport only the differences between the primary location and the long-term backup stores. It distinguishes these differences very efficiently and at a granular level. Figure 1 shows this simple but powerful architecture.
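The idea of transporting only the differences between the primary location and the long-term store can be sketched in a few lines. This is an illustrative model only; the block maps and helper functions below are hypothetical, and NetApp's actual replication operates inside the storage layer, not in application code.

```python
# Illustrative sketch of changed-block transfer between two snapshots.
# Block maps and names are hypothetical; real NetApp replication works
# at the storage layer and tracks changes natively.

def changed_blocks(prev_snapshot: dict, curr_snapshot: dict) -> dict:
    """Return only the blocks whose content changed since prev_snapshot."""
    return {
        block_id: data
        for block_id, data in curr_snapshot.items()
        if prev_snapshot.get(block_id) != data
    }

def replicate(prev_snapshot: dict, curr_snapshot: dict, backup_store: dict) -> int:
    """Apply only the deltas to the backup store; return blocks transferred."""
    delta = changed_blocks(prev_snapshot, curr_snapshot)
    backup_store.update(delta)
    return len(delta)

# Example: a 4-block volume where only one block changed since the
# last replication, so only one block crosses the wire.
prev = {0: b"aaaa", 1: b"bbbb", 2: b"cccc", 3: b"dddd"}
curr = {0: b"aaaa", 1: b"BBBB", 2: b"cccc", 3: b"dddd"}
store = dict(prev)
transferred = replicate(prev, curr, store)  # transfers 1 of 4 blocks
```

The point of the sketch is the asymmetry: the cost of each replication cycle scales with the changed data, not with the total database size.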

SAP HANA has supported the use of storage-level backups that are based on NetApp Snapshot technology since SPS 07. These NetApp backups are fully integrated into SAP HANA Studio. As Figure 2 shows, they appear in the backup history just as any other backup would. The backup in this example was performed in just 11 seconds. In a classic backup infrastructure, the effective bandwidth to realize backups at this speed would require more than fifty 10GbE connections.
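To put that bandwidth figure in perspective, a back-of-the-envelope calculation shows what a stream-based backup would need to finish in the same 11-second window. The database size used here is an assumed example value, not a figure from the brief.

```python
# Back-of-the-envelope check: how many 10GbE links would a classic
# stream-based backup need to move the data in 11 seconds?
# The payload size is an assumption chosen for illustration.
import math

db_size_bytes = 700e9     # assumed: ~700 GB of backup data
backup_window_s = 11      # the snapshot backup completed in 11 seconds
link_bps = 10e9           # one 10GbE link, in bits per second

required_bps = db_size_bytes * 8 / backup_window_s   # ~509 Gbps
links_needed = math.ceil(required_bps / link_bps)    # → 51 links
```

Under this assumption the classic approach would indeed need more than fifty 10GbE connections running flat out, whereas the snapshot-based backup moves no data at all at backup time.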

Recovery occurs at a similar speed, and it is readily available to the SAP HANA database administrator. To recover a backup, the administrator simply has to shut down SAP HANA, select a backup, restore the corresponding storage Snapshot copy, and recover SAP HANA to complete the procedure. Having these backups stored directly on the storage controller makes the restore process fast and efficient because no data is physically moved.
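The restore procedure above can be written down as a short orchestration sketch. Every helper here is a hypothetical placeholder for the corresponding SAP HANA or storage-administration operation (for example, as driven by SnapCenter); it is not a real API.

```python
# Hypothetical orchestration of a snapshot-based SAP HANA restore.
# Each helper method stands in for a real HANA/storage operation.

def restore_from_snapshot(hana, storage, backup_id):
    steps = []
    hana.shutdown()                          # 1. stop the SAP HANA database
    steps.append("shutdown")
    snap = storage.find_snapshot(backup_id)  # 2. select the backup to restore
    steps.append("select")
    storage.restore(snap)                    # 3. revert the data volume in place
    steps.append("restore")                  #    (no data is physically moved)
    hana.recover(until="latest")             # 4. roll forward using log backups
    steps.append("recover")
    return steps

# Minimal stand-in objects so the sketch can run end to end.
class StubHana:
    def shutdown(self): pass
    def recover(self, until): pass

class StubStorage:
    def find_snapshot(self, backup_id): return backup_id
    def restore(self, snap): pass

order = restore_from_snapshot(StubHana(), StubStorage(), "bk-001")
```

Because step 3 reverts the volume to an existing Snapshot copy rather than copying data back, the wall-clock restore time is dominated by steps 1 and 4, not by data movement.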

Evaluation of customer data has shown that the average backup time for SAP HANA is 19 seconds on NetApp FAS and All Flash FAS systems. The vast majority of systems actually complete backups in less time. In this analysis, as Figure 3 shows, all systems had finished in less than a minute. The largest contributor to the overall backup time is the time that HANA requires for writing the synchronized backup save point. The amount of time required for writing the save point is a function of the memory and activity of the SAP HANA system.

You no longer have to compromise on your data protection SLAs for SAP HANA. The moment that storage-based backups become part of your backup strategy, you can increase backup frequency to meet the needs of your business. The process is so fast that you can easily run multiple backups per day. You can also control the number of logs to be applied. Ultimately, you can regain control of the RTO and the recovery point objective.

Figure 1) The NetApp core backup architecture. (Diagram: SAP HANA database nodes with data access and storage backup based on NetApp Snapshot™; a 19-second core backup architecture that is superfast, runs at high frequency, keeps copies from the last week, and provides software integration.)

Figure 2) SAP HANA Studio backup window with NetApp storage–based backup (completed in just 11 seconds).

Figure 3) SAP HANA backup time evaluation on NetApp FAS and AFF storage. (Plot: percentage of completion versus end-to-end backup time in seconds, showing the backup time distribution and the average.)

The SAP HANA Administration Guide recommends the use of storage-based backup for SAP HANA.

What About Disaster Recovery and Cloud Backup?
The value of this core backup architecture can be extended to address multiple protection goals and usage scenarios. The main purposes are to preserve long-term backups and to make good use of that data. The backup functionalities are independent building blocks. However, as Figure 4 shows, they can be implemented together according to customer requirements.

Disaster recovery building block
You can implement disaster recovery (DR) and long-term archival storage either offsite or onsite. The DR building block is achieved by a system that is optimized for capacity. The system receives new updates from the core backup architecture at regular, definable intervals. These updates are typically scheduled to be asynchronous, and they don’t affect run times for the backups.

There is a great advantage in the way that data is represented on the DR systems. Representing each individual backup as a usable copy of the real data makes it an excellent vehicle for DR. These copies also provide a good source for test systems or quality assurance systems that require recent or up-to-date data.

Cloud backup building block
Implement this building block to secure your data with cloud standards and technologies. NetApp AltaVault can receive backup data directly from your databases, from existing backup software, or from your DR building blocks. It persists data by using the widely adopted cloud object storage protocol of Amazon Simple Storage Service (S3).

A variety of Amazon S3 bucket providers exist for off-premises archival storage. For cloud-ready on-premises archival storage, NetApp recommends that you use StorageGRID Webscale object storage. This software provides Amazon S3 buckets inside the trust domain of your data center. AltaVault highly compresses data and encrypts it according to industry standards.
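The archival path can be pictured as compress, verify, then upload under an object key. The sketch below is a simplified stand-in: zlib substitutes for AltaVault's deduplication and compression, the key layout is an invented example, encryption is only noted in a comment, and the actual upload to an S3 bucket is indicated but not performed.

```python
# Simplified sketch of preparing a file-based backup for S3-style
# object storage. zlib stands in for appliance-side compression; a
# real deployment would also encrypt before upload (AltaVault does
# this according to industry standards). Key layout is hypothetical.
import hashlib
import zlib
from datetime import datetime, timezone

def prepare_backup_object(payload: bytes, sid: str):
    """Compress a backup payload and derive an S3-style object key."""
    compressed = zlib.compress(payload, level=9)
    digest = hashlib.sha256(compressed).hexdigest()  # integrity check value
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    key = f"hana-backups/{sid}/{stamp}-{digest[:12]}.zz"
    return key, compressed, digest

# Example: repetitive backup data compresses well before upload.
payload = b"HANA backup block" * 1024
key, blob, digest = prepare_backup_object(payload, "PRD")
# An actual upload would then use an S3 client, for example:
# s3.put_object(Bucket="archive-bucket", Key=key, Body=blob)
```

The same key scheme works unchanged whether the bucket lives at a hyperscaler or in an on-premises StorageGRID Webscale endpoint, which is what makes the building block portable between off-premises and on-premises archival targets.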

SAP HANA can store file-based backups directly in NetApp AltaVault. The NetApp SnapCenter HANA plug-in can automate this process. Backup data from the DR building block can also be moved to long-term archival storage.

Conclusion
The in-memory data management capability of SAP HANA exemplifies a breakthrough technology that is becoming increasingly pervasive. In-memory technology’s accompanying backup challenges can be handled by taking a storage-centric approach. NetApp provides the industry-leading backup solution for SAP HANA through its simple and easy-to-use architecture. The trend of leveraging more and more cloud and hybrid cloud technologies aligns well with the NetApp roadmap to offer our customers a seamless path toward future deployment decisions.

About NetApp
NetApp is the data authority for hybrid cloud. We provide a full range of hybrid cloud data services that simplify management of applications and data across cloud and on-premises environments to accelerate digital transformation. Together with our partners, we empower global organizations to unleash the full potential of their data to expand customer touchpoints, foster greater innovation, and optimize their operations. For more information, visit www.netapp.com. #DataDriven

Figure 4) SAP HANA core backup architecture with DR and cloud archival options. (Diagram: SAP HANA database nodes with data access and storage backup based on NetApp Snapshot™; the 19-second core backup architecture with software integration; NetApp SnapVault® backup software feeding DR and long-term storage, off-site/on-site, based on NetApp Snapshot technology; the NetApp AltaVault™ cloud-integrated storage appliance with NetApp StorageGRID® Webscale on premises; long-term, off-site archival storage (file-based backup: full/incremental/differential) off premises with hyperscalers and service providers (Amazon S3 service).)

© 2017 NetApp, Inc. All Rights Reserved. NETAPP, the NETAPP logo, and the marks listed at http://www.netapp.com/TM are trademarks of NetApp, Inc. Other company and product names may be trademarks of their respective owners. SB-3750-0917