Break Free with IBM DB2. Presenter: Paul Rivot, Director – Competitive Sales, Information Management


TRANSCRIPT

Page 1: Paul Rivot - IBM · Paul Rivot. Director ... When IBM adds features to DB2, ... Active/Active Replication Q-Replication with DB2 $11,100 Golden Gate $21,350

Name: Paul Rivot
Title: Director – Competitive Sales, Information Management

Break Free with IBM DB2

Page 2:

When you hear…

DB2

what do you think?


Page 3:

Performance

Security

Reliability

Manageability

Governance

Ease of Use


Presenter
Presentation Notes
- The first thing you probably think of is IBM, and everything that goes along with being an IBM product. DB2 was one of the early IBM software products and, as such, paved the way for the strong reputation that IBM software enjoys in the industry today.
- You may think of performance, and DB2’s leadership position in industry benchmarks. As you will see later in this presentation, IBM enjoys leading performance across a number of different kinds of workloads, including OLTP / transactional workloads, warehouse / analytical workloads, and various types of SAP application workloads.
- You may think about security, and DB2’s unparalleled record when it comes to security breaches.
- You may think about reliability. After all, that is one of the hallmarks of a DB2 database.
- You may think about the fact that DB2 is easy to use and can support data governance with the family of integrated data management tools.
Page 4:

Do you also think…

lowest cost of operation?


Presenter
Presentation Notes
In the remainder of this talk, we will see why “lowest cost of operation” should be the first thing you think of when you hear “DB2”. When IBM adds features to DB2, the first question we ask ourselves is “how does this feature lower the cost of operations”? And “how can we improve this feature to even further lower the cost of operations for our clients?”
Page 5:

DB2 Business Value

US Retailers
Worldwide Banks
Global Life / Health Insurance Providers

Industry Leader – Top OLTP and warehouse performance for XML and relational data

Lowest Cost – Unparalleled performance, automation, and compression

Highest Performance – Top speed, reliability, and automated workload management

#1 in TPC-C Performance

#1 in 10TB TPC-H Performance

#1 in SAP SD 3-tier Standard Application Performance

“Before we made a final decision we benchmarked some of the key database management systems. That includes Oracle, SQL Server and DB2. We ended up choosing DB2 for several reasons. One was reliability, second was performance and perhaps the most important factor was ease of use” – Bashir Khan, Director of Data Management and Business Intelligence

Presenter
Presentation Notes
DB2 has been extremely successful for IBM. 2009 will be a tough year for selling, so we want to focus on how we can save our customers money and improve their overall performance. Today we have the lowest cost through the use of our industry-leading row compression and autonomic software. We can boast very high performance and reliability to handle mission-critical applications, as evidenced by our industry-leading TPC-C, TPC-H and SAP SD benchmarks. Workload management is the ability to assign priorities based on business needs – e.g. the CEO gets the highest priority, whereas a clerk running reports would have a much lower priority. Workload management allows you to control the workloads to reflect the needs of the business. And we need fewer people to run our systems than our competitors do. Add all this to the industry's only hybrid database that can handle both relational and XML data. We are at the forefront in XML in both performance and ease of development. By any metric you want to use, we can win. When it comes to benchmarks, we can set one up in days whereas Oracle and Microsoft take weeks, and then we still beat them 30 or 40 to 1 (Arvind's words). And with SAP, we are simply unbeatable. Look at this great quote from Dow Jones on their experience when making a final decision on the database they would use.
Page 6:

Our Clients Weigh In: Why DB2?

DB2 is the highest value database in the market:

1. DB2 has one of the lowest people cost requirements. DB2 requires 55% less DBA time vs. Oracle (Solitaire Interglobal analysis; average cost per DBA saved).

2. DB2 is the smartest database in the market. DB2 autonomics – self-optimizing, self-healing, self-configuring and self-protecting – give DB2 the ability to monitor its own health status independent of the database administrator (DBA).

3. DB2 reduces storage costs better than any other vendor. “With DB2 9, we’re seeing compression rates up to 83 percent on the data warehouse tables. The projected cost savings are more than US$2 million initially with ongoing savings of US$500,000 a year.” – Michael Henson, DB2 Unix Team Lead, Database Delivery Services, SunTrust Bank, Inc.

4. DB2 is the fastest database. At the largest SAP/Oracle customer in Germany, DB2 was up to 9 times faster and on average over 40% faster.

5. DB2 has the highest availability. "DB2 was heavily favored in this regard... with fewer and shorter outages for normal operational activities, the overall availability and reliability of the DBMS shows some clear differentiation." — Solitaire Interglobal

6. DB2 is the most scalable database. “IBM DB2 provides excellent reliability, security and scalability, and ensures that Britannia is fully able to increase its business operations at low total costs of operation.” – Britannia

Page 7:

DB2 has a lower acquisition cost and requires less DBA effort

Page 8:

DB2 Advanced Edition Acquisition Cost vs. Oracle

Functionality             | DB2                                    | Price           | Oracle                      | Price
Core Server               | DB2 Enterprise                         | $40,500         | Oracle Enterprise           | $57,950
Compression               | Storage Optimization feature           | $15,300         | Advanced Compression        | $14,030
Workload Management       | Performance Optimization Feature       | $15,300         | Workload Management         | Free
Disaster Recovery         | HADR                                   | Free on primary | Active Data Guard           | $14,030
Advanced Security         | Label Based Access Control             | $11,100         | Label Security              | $14,030
Data Partitioning         | Range Partitioning                     | Free            | Partitioning                | $14,030
Administration            | Optim Database Administrator           | $5,775          | Oracle Enterprise Mgr       | Free
                          |                                        |                 | Change Mgmt Pack            | $4,270
Development               | Optim Development Studio (10 users)    | $8,660          | Internet Dev Suite          | $7,076
Performance Tuning        | Optim Performance Manager (included in Perf Opt Feature) |  | Diagnostics Pack            | $6,100
Federation                | Heterogeneous Federation Feature       | $7,353          | Oracle-to-Oracle federation | Free
Active/Active Replication | Q-Replication with DB2                 | $11,100         | Golden Gate                 | $21,350
Total                     |                                        | $115,088        |                             | $152,866
Advanced Enterprise       |                                        | $45,000         |                             | $152,866
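The totals in the table can be sanity-checked by summing the priced line items. A quick sketch, assuming the per-item prices listed above ("Free" and "Included" entries contribute $0 and are omitted):

```python
# Chargeable line items from the acquisition-cost slide.
db2_items = {
    "DB2 Enterprise": 40_500,
    "Storage Optimization feature": 15_300,
    "Performance Optimization Feature": 15_300,
    "Label Based Access Control": 11_100,
    "Optim Database Administrator": 5_775,
    "Optim Development Studio (10 users)": 8_660,
    "Heterogeneous Federation Feature": 7_353,
    "Q-Replication with DB2": 11_100,
}
oracle_items = {
    "Oracle Enterprise": 57_950,
    "Advanced Compression": 14_030,
    "Active Data Guard": 14_030,
    "Label Security": 14_030,
    "Partitioning": 14_030,
    "Change Mgmt Pack": 4_270,
    "Internet Dev Suite": 7_076,
    "Diagnostics Pack": 6_100,
    "Golden Gate": 21_350,
}
print(sum(db2_items.values()))     # 115088 -- matches the DB2 total
print(sum(oracle_items.values()))  # 152866 -- matches the Oracle total
```

Both sums reproduce the slide's bottom line, which confirms the $37,778 gap comes entirely from the itemized prices shown.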

Presenter
Presentation Notes
Drew/Jamie/Doug notes: Better, Simpler, Cheaper – that's DB2 Advanced Edition! 2X speed is from the most recent SAP results. Competing against Oracle – winning NEW customers with DB2 AE. All-in-one, simple to sell, significant value, one low price. DB2 AESE is only 10% more than ESE (more value, less discounting). Adding in OPM EE Trade-up and OQT adds $90 x 100 = $9,000 to the $45,000 cost of DB2 Advanced Edition. Oracle does not have the functionality included in EE. The Oracle Tuning Pack adds $6,100 to the $152,866.
Page 9:

Solitaire Interglobal Analysis

• Collected over 3,467,000 data points covering over 36,600 systems – 650 AIX and 3,450 Intel-based production systems observed

• Reveals fewer DBAs required for DB2 vs. Oracle

• Reveals less downtime for DB2 vs. Oracle

• Reveals faster time to market

• Note that even though the number of staff to support this small section of machines has a small variation, given FTE rates that run in excess of $97,500 in most operations, the 15 FTE difference accumulates quickly.

Read about it here

DB2 requires 55% less DBA time
Solitaire Interglobal Study – 2011

Presenter
Presentation Notes
“The advantages of running DB2 on IBM Power and System x equipment are strongly supported by more than 3,467,000 data points covering over 9,200 closely watched production comparisons. These advantages translate into hard cost savings for each customer that can substantially affect the bottom line cost of ownership. The real world affect on business is considerable. This benefit is most achievable when a good fit between the DBMS and hardware platform is employed. Allowing the strengths of the System x and Power System platforms to mingle with the strengths of the DB2 product is one of the best ways to support the customer in their continuing quest for lower costs and improved user satisfaction.” -- Solitaire Interglobal Ltd.
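The staffing bullet on this slide implies a concrete annual figure. Using the FTE rate and headcount difference quoted above:

```python
fte_rate = 97_500        # annual fully loaded cost per FTE quoted in the study
fte_difference = 15      # DBA headcount difference observed
annual_saving = fte_rate * fte_difference
print(annual_saving)     # 1462500 -> roughly $1.46M per year
```

This is why the slide says the difference "accumulates quickly": even a modest staffing delta compounds into seven figures annually.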
Page 10:

Complexity analysis summary

[Chart: task-complexity scores (scale 0–800) for DB2 9.7 vs. Oracle 11gR2 across six DBA activities: Authorization, Automatic Memory Management, Backup and Recovery, Index Compression, Data Compression, and Installation]

The IBM Advantage – IT Staff Productivity
Ease of use: DB2 vs. Oracle database complexity analysis

• DB2 holds a significant advantage in all six common DBA activities

• DB2’s complexity advantage translates into higher DBA productivity


Source: An IBM-sponsored report by Triton Consulting: Comparing DBA Productivity: An Oracle/DB2 Task Complexity Analysis, November 2010

Presenter
Presentation Notes
So now, I've finished talking about factors that are important around the performance of your systems and cost efficiency in getting the most from your investment in those systems. I'd like to go on and spend a few minutes talking about improving the productivity of your IT staff. On this chart, you can see the results of an independent study by a company called Triton Consulting, based in the UK. They got an Oracle DBA with 10 years' experience, so quite an experienced DBA, and the equivalent with DB2, again an experienced DBA, and they leveraged a methodology for determining the complexity of performing certain tasks. The methodology intends to abstract out how complex relative tasks truly are to execute, and to do so in an objective way. You can see a summary of their analysis for a number of database administration tasks. Some of the tasks are performed only once, such as installation, but others, like automatic memory management, are part of a DBA's daily routine. As an example, the study found the automatic memory management complexity metric for DB2 is nearly 90% lower than the complexity metric for Oracle Database. The report shows that the automatic memory management task in Oracle Database for the specified environment could take over 100 minutes of DBA interaction time to complete; in contrast, the same task for DB2 would take a little over 10 minutes. So you can see here the results of the significant investment that IBM has been making in trying to make its products easier to use. And I think this reflects a number of things, including the fact that IBM DB2 has done such a good job of recently adding features that make things easier for DBAs.
Page 11:

“DB2’s automated administration features are going to save me 30% to 35% of my support costs.” —Bob Maddocks, Maddocks Systems


Presenter
Presentation Notes
Maddocks Systems provides transportation carriers with comprehensive information management solutions. They spent over two decades providing North America's top trucking companies with a complete package of software for enterprise-wide fleet management, before recently being acquired by TMW Systems. Maddocks had a multi-database environment. However, they decided to standardize on IBM DB2 because it offered market-leading autonomic features, better scalability and more robust performance for its enterprise-class customers than competitive products. According to Robert Maddocks, DB2 software was also a smart business decision: "IBM DB2 has all of the features we need with the least amount of overhead and maintenance costs, which makes its price/performance very attractive." The autonomic features in the DB2 UDB software – including business policy-driven backup and database maintenance, self-tuning backup and restore, inclusion of log files in online backup images, and integrated and automated log file management – make it easier for customers to maintain their TruckMate applications while keeping IT costs low. Robert Maddocks is also excited about the leading-edge auto-statistics and auto table reorganization features, as well as the throttling and policy-based capabilities. These all enable Maddocks to gain market share in the small and medium business arena as well as in the mid-market sector. The decision to standardize on the DB2 UDB software is already paying dividends for Maddocks. The company recently signed an agreement with Canada's largest food manufacturer to provide them with the new release of TruckMate. Based on customer feedback, the principal reason this industry leader selected TruckMate was that it supported DB2 UDB software. Robert Maddocks concludes, "DB2 Universal Database provides Maddocks with a real competitive advantage over any other database on the market. Now my customers can focus on running their businesses and we can focus on our customers."

Notes: You can read more at http://www.maddocks.ca/news/media/Maddocks_IBM.pdf
Page 12:

As the FASTEST Database, DB2 Saves you Money

Page 13:

TPC-C DB2 versus Oracle on the Same Server

• This result offers a direct comparison between DB2 and Oracle on identical servers

• DB2 delivered 16–20% better performance running on the same hardware

– Benchmarks run within 30 days of each other (Oracle first, then DB2)

– This result also delivered the highest per-CPU performance ever recorded

• You need 10 CPUs of Oracle to match the 8 CPUs of DB2 at this ratio

• A new Info/Baan benchmark just being published also shows a 16–20% performance advantage on identical servers

Page 14:

Latest TPC Benchmarks 14th Feb 2011

http://www.tpc.org/tpcc/results/tpcc_result_detail.asp?id=110081702 http://www.tpc.org/tpcc/results/tpcc_result_detail.asp?id=110120201

10,336,254 tpmC – 24 processors / 192 cores

30,249,688 tpmC – 108 processors / 1,728 cores

Presenter
Presentation Notes
When you click through to the website, it shows that IBM uses 192 cores to deliver 10M tpmC and Oracle with Sun uses 1,728 cores to generate 30M tpmC. Consider the license cost difference between 192 cores and 1,728 cores: the difference is more than 1,500 processors. If we needed Oracle/Sun to deliver 10M tpmC, that would require about 500 processors, which is close to 3 times what DB2 on a Power Systems server needs.
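The per-core arithmetic behind this comparison can be checked directly from the two published figures on this slide:

```python
ibm_tpmc, ibm_cores = 10_336_254, 192          # IBM result quoted on this slide
oracle_tpmc, oracle_cores = 30_249_688, 1_728  # Oracle/Sun result quoted on this slide

ibm_per_core = ibm_tpmc / ibm_cores
oracle_per_core = oracle_tpmc / oracle_cores

print(round(ibm_per_core))                       # ~53835 tpmC per core
print(round(oracle_per_core))                    # ~17506 tpmC per core
print(round(ibm_per_core / oracle_per_core, 1))  # ~3.1x per-core throughput
```

Note that the absolute Oracle/Sun number is larger only because it uses nine times as many cores; per core, the DB2 result is roughly three times higher, which is what drives the license-count argument in the notes above.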
Page 15:

IBM POWER7 and IBM Software Optimization Advantages

73% faster JVM – using a single JVM of WebSphere on POWER7 vs. a competitive application server on Nehalem¹

86% lower cost – for DB2 on IBM Power 780 than Oracle on Sun²

40% better utilization – up to 40% better system utilization with the latest compilers, exploiting the POWER7 architecture³

2.7x faster per core – on POWER7 than the best Oracle/Sun TPC-C result⁴

55% less staff – for DB2 on Power 780 than Oracle⁵

41% lower transaction cost – on POWER7 than the best Oracle/Sun TPC-C result⁴

1 IBM CPO internal study. 2 As much as 40% improved throughput vs. POWER6 for the identify-duplicates process; one example of performance improvement, TSM 6.2. 3 CPO study – DB2 on POWER7 delivers the most efficient TPC-C result ever. 4 IBM POWER7 TPC-C result: IBM Power 780, 10,366,254 tpmC at $1.38 USD/tpmC, available 2010/10/13 (24 processors / 192 cores / 768 threads). Oracle Sun TPC-C result: Sun SPARC Enterprise T5440, 7,646,486 tpmC at $2.36 USD/tpmC, available 2010/03/19 (48 processors / 384 cores / 3,072 threads). TPC-C results available at www.tpc.org. 5 Solitaire study.

Presenter
Presentation Notes
Many of our customers have saved cost, increased utilization, and boosted per-core performance just by implementing DB2 on POWER7.
Page 16:

Better database performance means…

lower server costs and

lower power costs


Presenter
Presentation Notes
The key take-away here is that DB2 has industry-leading performance. And superior performance translates into getting more performance from your servers. Getting more performance from your servers translates into lower server costs. It has also been shown that getting more performance from your servers translates into lower power costs. This is because CPU utilization rates directly correspond to the number of watts of power that are consumed. So, if database software needs lower CPU utilization, then it will draw less power.
Page 17:

DB2 Reduces Storage Costs Better Than Any Other Vendor

Page 18:

Burdened Database Rate: I Have a 4 TB Database

[Slide graphic: the storage stack for a 4,000 GB production database – Production and Disaster Recovery copies (4,000 GB each), Backup, and Development / Test / User Acceptance copies (500 GB each, with a 1,000 GB backup figure also shown); without compression the stack totals 29,000 GB, and with 65% compression it totals 10,000 GB]

Total storage for a 4 TB DB without compression = 29 TB
With 65% compression ≈ 10 TB
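The headline figure on this slide follows from simple arithmetic: 65% compression means every copy of the data keeps only 35% of its size, so the whole 29 TB footprint shrinks proportionally:

```python
total_gb = 29_000    # full storage footprint of the 4 TB database and all copies, in GB
saved_pct = 65       # compression rate cited on the slide

# Integer arithmetic to keep the result exact.
compressed_gb = total_gb * (100 - saved_pct) // 100
print(compressed_gb)  # 10150 GB -> roughly the 10 TB shown on the slide
```

The point of the "burdened rate" framing is that compression applies not just to the 4 TB production copy but to every downstream copy (DR, backup, dev, test, UAT), which is why the saving is measured against 29 TB rather than 4 TB.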

Page 19:

The DB2 Compression Advantage

• Superior compression rates due to the DB2 algorithm
– DB2 compresses data by looking at all values in the table
– Other vendors only remove duplicates at the page/block level
– Disadvantages of the page-level approach:
• Consistent repeating values throughout the entire table will be stored multiple times in each page header
• There may be repeating patterns in the table but not on each page

“Row-level compression is a revolutionary development that will leave Oracle and Microsoft green with envy”.

Compression Ratio
Table    | Oracle | DB2
LINEITEM | 38%    | 58% (1.5x better)
ORDERS   | 18%    | 60% (3x better)
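The page-level disadvantage described above is easy to demonstrate: compressing a whole table at once lets the compressor reuse patterns that repeat across pages, while compressing each page independently re-encodes those patterns in every page. A minimal sketch, using Python's zlib as a stand-in for a real dictionary compressor (the toy table data and the 4 KB page size are illustrative assumptions, not DB2 internals):

```python
import zlib

# A toy "table": many rows sharing a long repeating value, spread out so
# the value contributes redundancy across pages, not just within one page.
rows = [f"row{i:06d}|STATUS=PENDING_INTERNATIONAL_SHIPMENT|region={i % 7}\n"
        for i in range(2_000)]
table = "".join(rows).encode()

PAGE_SIZE = 4_096  # hypothetical page/block size
pages = [table[i:i + PAGE_SIZE] for i in range(0, len(table), PAGE_SIZE)]

# Table-level compression: one pass over all values in the table.
table_level = len(zlib.compress(table, 9))

# Page-level compression: each page compressed independently, so the
# repeating pattern is stored again in every page's output.
page_level = sum(len(zlib.compress(p, 9)) for p in pages)

print(table_level < page_level)  # True: the table-wide view compresses better
```

The effect is the same one the slide describes: a pattern that repeats across the whole table is paid for once at table level but once per page at page level.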

Page 20:

Coca-Cola Bottling Caps Rising Costs of Oracle
$1.4B manufacturer improves performance while reducing costs

The need:
• Optimize performance and manageability of business-critical SAP applications with an SAP upgrade and conversion to Unicode
• Avoid increased Oracle licensing costs

The solution:
• Replace the Oracle database with DB2, reducing licensing costs and TCO while improving performance

Solution components:
• IBM DB2 software
• IBM Power Systems hardware
• IBM-SAP Alliance services

Source: ftp://public.dhe.ibm.com/common/ssi/ecm/en/spc03074wwen/SPC03074WWEN.PDF

Presenter
Presentation Notes
Contact information ATHMAKURI, RAVI S.�Sr. SAP OS/DB Migrations Consultant�1-267-633-4808�Ravi Athmakuri/Philadelphia/IBM�[email protected] Additional contact information: Anette Lörcher�IBM SAP Global Alliance��Ravi Athmakuri/Philadelphia/IBM�[email protected] Highlights Overview: Coca-Cola Bottling Co. Consolidated (CCBCC) makes, sells and delivers sparkling and still beverages, primarily products of The Coca-Cola Company. CCBCC is the second-largest bottler of Coca-Cola products in the United States, operating in eleven states, primarily in the southeast. Founded in 1902, CCBCC enjoys net sales of more than $1.4 billion and is headquartered in Charlotte, North Carolina. Business need: To achieve its business objectives, CCBCC faced a technical upgrade of its SAP R/3 Enterprise system to SAP ERP 6.0. Completing this upgrade would require the company either to upgrade its existing Oracle database and purchase extra Oracle licenses, or to move to a different database platform. Solution: The CCBCC team decided it was time to derive more performance from the business-critical SAP applications, while driving down hardware and software costs. Instead of upgrading Oracle, the team decided to introduce IBM DB2. As part of the SAP upgrade project, CCBCC’s SAP R/3 system would require a conversion to Unicode. Benefits: Combining the database migration with the SAP Unicode conversion saved time and money. Initial results show that DB2 delivers a reduction in storage needs of approximately 40 per cent. The duration of manufacturing runs was reduced by more than 65 per cent. The migration was completed under budget and ahead of schedule. The company has reduced licensing and maintenance costs by avoiding the purchase of additional Oracle licenses, and predicts savings in the next five years of about US$750,000. Back to top Client information Client name:Location:Coca-Cola Bottling Co. 
ConsolidatedNorth America IOT,  United StatesIndustry: Wholesale Distribution & ServicesFocus area: Integrated Data Management, Enterprise Resource PlanningGeneral Business: Enterprise AccountGeography: North America IOTDate Published: 01/09/2009Last Updated: 07/05/2010Link to: External Case StudyBack to top Solution implementation [To read a German translation of this technical paper, please click here.] Background, starting point and objectives�Coca-Cola Bottling Co. Consolidated (CCBCC) makes, sells and delivers sparkling and still beverages, primarily products of The Coca-Cola Company. CCBCC is the second-largest bottler of Coca-Cola products in the United States, operating in eleven states, primarily in the southeast. Founded in 1902, CCBCC enjoys net sales of more than $1.4 billion and is headquartered in Charlotte, North Carolina.��Leveraging synergies: SAP Unicode conversion and DB2 migration�Prior to the technical upgrade of the SAP landscape, CCBCC decided to perform both a Unicode conversion and a migration from the existing Oracle database platform to IBM DB2 with Deep Compression. These changes would eliminate the need to buy new Oracle licenses, and would thus reduce the total cost of ownership (TCO). ��By switching on the DB2 Deep Compression feature during the migration, the company was able to reduce the size of the database by more than 40 per cent – which will result in faster backups and shorter runtimes for the upcoming SAP software upgrade.��In the meantime, before the SAP upgrade, CCBCC can benefit from the highly automated DB2 database administration, offering reduced cost of operation. DB2 version 9 includes features such as self-managing storage, self-tuning memory management (STMM), automatic reorganization, automatic runstats, real-time statistics and backup via the integrated FlashCopy® feature. 
��All database administration and monitoring tasks can be completed from within the SAP Database Administrator (DBA) Cockpit for DB2 – an easy-to-use management environment integrated into the SAP application environment. ��Deploying Unicode as a future-proof solution�CCBCC decided to deploy Unicode because all new SAP product releases (from SAP NetWeaver 7.0 onwards) will be based on the Unicode standard. CCBCC wanted to be prepared for new SAP applications such as SAP NetWeaver Process Integration (SAP NW PI), which are already part of future implementation plans.��In technical terms, the requirements for a Unicode conversion are very similar to those of a database migration. In both scenarios the customer must perform an export and import of the database using the SAP program R3load. ��The Unicode conversion itself is executed during the export phase of the migration. It is therefore very easy to direct the database towards a new target system without additional effort and downtime. Migrating to IBM DB2 in conjunction with an SAP software upgrade and/or Unicode conversion leverages an opportunity to avoid duplicating project tasks such as backup and testing, and keeps the cost of the migration as low as possible.��Migration process – Heterogeneous System Copy�CCBCC used a standard SAP methodology for the migration process, known as the Heterogeneous System Copy (or OS/DB Migration) method. CCBCC was able to perform the migration and conversion during a scheduled maintenance window, so there was no need to make use of enhanced migration tools/services from SAP such as Zero Downtime. ��The migration project for the entire SAP R/3 Enterprise landscape took eight weeks in total, including two test iterations for the 1TB production database. The migration of the production SAP system itself was completed over one weekend, starting on the Saturday night and finishing in the early hours of Monday morning. 
The total downtime for the production migration was just 26 hours. ��To achieve this reduced downtime, a set of SAP specific migration tools were used: Unsorted Export for the transparent tables Package Splitter for the largest tables (“big tables” group) Table Splitter for three large cluster tables Multiple instances of Migration Monitor to allow distributed parallel import and export processes R3load with Deep Compression option to activate compression during the migration phase. �The next part of this document depicts the way CCBCC utilized these tools, explains the reasons for the choices, and highlights the benefits.��Architectural overview – migration project at CCBCC �For the migration, CCBCC used four logical partitions (LPARs) on an IBM Power Systems server (model p5-560). Three LPARs were used to handle database export processes from the source system, and one LPAR was running the target system for the import processes. The export partitions consisted of a Central Instance / Database partition, which had 16 CPUs of 1.5GHz and 64GB of memory (CI/DB), and two other partitions that had four CPUs of 1.5GHz and 12GB of memory each. The import partition (or new CI/DB partition) had 16 CPUs of 1.5GHz and 64GB of memory.��During the testing phase, this system setup emerged as the optimal migration environment to handle the migration workload.��In order to meet the downtime objectives, the workload of the export packages were distributed between the CI/DB server and the other two servers (Hosts A and B) running in the first three LPARs. The CI/DB server handled the 3 largest cluster tables via Table Splitter. Host A handled the smaller tables. Host B was used to handle the export of the “big tables” group (which contained >10 million, >2 million, and >200,000 records); these were divided into smaller packages using Package Splitter. All three hosts used local storage to dump the export data to disk. 
Each export process was controlled by a Migration Monitor (MigMon) instance with its own configurations.��On the import side there was only one server – Host C (new CI/DB server). The export disks of CI/DB, Host A and Host B were mounted via NFS (for reading) on Host C. The import was controlled by multiple MigMon instances. ��From the “big tables” group on Host B, a subset was exported using the sorted unload option, which required additional CPU power and was one of the reasons for assigning an additional server for the export phase. During the import, the tables from the “big tables” group were compressed during the load process. ��Database export – migration tools used�Unsorted vs. sorted export�CCBCC used both sorted and unsorted exports to unload the data from the Oracle database. In general, the unsorted export is faster than the sorted. But as CCBCC was also running a Unicode conversion, the migration team was forced to export the SAP cluster tables (for example CDCLS, RFGLG, EDI40) and SAP repository data classes via a sorted export. Sorting the data required additional CPU power, which was one of the reasons CCBCC handled the export phase with three servers. Sorted Export – Pool Tables, Cluster Tables, Reports, Dynpro’s and Nametabs. Unsorted Export – Most of the transparent tables �With a sorted export, the pages of a table are read in the sequence of the primary key. If the cluster ratio is not optimal, data pages will not be read continuously. In addition, database sort operations may occur which will also extend the export runtime. By using the unsorted option, data is read sequentially and written directly to a file, instead of using an index that attempts to sort the data before writing to the file.��Unicode considerations for cluster tables �As a result of the Unicode conversion, the contents and the length of the records may change. Even the number of the physical records belonging to a logical record may change. 
Because the physical records are built together to form a logical record, the data must be read in a sorted manner to find all physical records that belong to a logical record. For these reasons, an unsorted unload is not possible.��Database limitations �DB2 supports unsorted exports, but some other databases only allow sorted exports. This represents a major roadblock in migrating away from these databases, and can also be a limitation in daily operations – for example, it is more difficult to set up test and QA systems using sorted exports. Especially for very large databases, being forced to run a sorted export will heavily extend the downtime window and make it almost impossible to change the database or even complete a Unicode conversion in a reasonable time. ��Package and table splitting�The database size of nearly 1TB and the very large tables had been the determining factors for the downtime. CCBCC decided to parallelize the database export to improve the speed of the whole migration process, by using Package and Table Splitters.��Package Splitter splits tables of the source database into packages and exports them. In each case a dedicated R3load process handles each package. These processes can run in parallel and consequently make better usage of the CPU power. Table Splitter R3ta generates multiple WHERE conditions for a table, which are used to export the table data with multiple R3load processes running in parallel. Each R3load process requires a WHERE condition so that it can select a subset of the data in the table.� 262 large tables (“big tables” group) were put in their own package using Package Splitter, to increase parallelism and ensure better granularity of the packages, resulting in better resource usage during the migration. 
12 very large tables were divided into multiple packages using the Table Splitter, enabling multiple R3load processes for parallel export and import of each table. The remainder of the tables were combined in joint packages using the Package Splitter. By splitting the content across multiple R3load processes (20 parallel processes), it was possible to export and import the data in parallel, saving considerable time.

Migration Monitor (MigMon)
In a Unicode conversion, the system copy causes a very high CPU load during the export. Most of the CPU power is spent on data conversion, especially when processing cluster tables. To avoid CPU bottlenecks, CCBCC distributed the exports and imports across 4 LPARs to parallelize these processes more effectively. This allowed CCBCC to take advantage of additional processor resources for the database export/import. The Migration Monitor helped to perform and control the unload and load processes during the system copy procedure and enabled 20 export and import processes to be run in parallel.

Database import – DB2 Deep Compression enabled
DB2 9 – Storage Optimization feature
The DB2 9 Storage Optimization feature – also called Deep Compression – uses a dictionary-based approach to replace repeating patterns with short symbols. The dictionary stores the patterns that occur most frequently, and indexes them with the corresponding symbols that are used to replace them. Because all patterns within a table (not only within a single page) are replaced, impressive compression rates can be achieved (up to 90 per cent for single tables).

R3load with DB2 Deep Compression
CCBCC wanted to benefit from the DB2 Storage Optimization feature right away, and decided to switch on Deep Compression during the migration process.
Even knowing that the compression rate with R3load version 6.40 might not be optimal, CCBCC decided to go ahead, and was rewarded with a compression rate of 40 per cent and an impressive performance improvement. This was achieved despite the fact that only 169 of the larger tables had been compressed.

Enabling DB2 Deep Compression during a database migration and/or Unicode conversion is a very smooth way to compress the data at the time it is loaded into the database. The R3load tool provides several ways of deploying DB2 Deep Compression when the data is loaded into the tables. Depending on the version of R3load (version 6.40, or version 7.00 or higher), different options for compression are available, such as the new R3load 7.00 "SAMPLED" option, which offers optimal data compression while avoiding time-consuming table reorganizations. In this paper we will focus on the compression feature of R3load version 6.40, as this was the tool used by CCBCC.

R3load 6.40 with compress option
To generate the compression dictionary, R3load first loads a defined number of rows into the table without compressing them. R3load then creates the compression dictionary based on these rows by running an offline reorganization.

CCBCC increased the value of the environment variable DB6LOAD_COMPRESSION_THRESHOLD to define the number of rows that would be initially loaded and used to create the dictionary. The default value for this threshold is 10,000 records, which was too low to provide optimal compression sampling for the larger tables.

By sampling between 10 and 80 per cent of the records (depending on the number of rows in the tables), CCBCC was able to set optimal threshold values and achieve very good compression results.
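The two-phase flow described above (load a sample of rows uncompressed, build the dictionary from that sample, then compress everything that follows) can be illustrated with a toy sketch. This is not R3load or DB2 code; the word-frequency dictionary below is a deliberately simplified stand-in for DB2's actual pattern detection, and only shows why the size of the initial sample matters for compression quality.

```python
# Toy illustration of two-phase dictionary compression, loosely modeled on the
# R3load 6.40 flow: sample rows -> build dictionary -> compress remaining rows.
# NOT the DB2 algorithm; a simplified sketch for intuition only.
from collections import Counter

def build_dictionary(sample_rows, max_entries=4):
    """Pick the most frequent words in the sample as dictionary patterns."""
    counts = Counter(word for row in sample_rows for word in row.split())
    # Map each frequent pattern to a one-character symbol (chr(0), chr(1), ...).
    return {word: chr(i) for i, (word, _) in
            enumerate(counts.most_common(max_entries))}

def compress(row, dictionary):
    """Replace dictionary patterns with their short symbols."""
    return ' '.join(dictionary.get(word, word) for word in row.split())

# Invented sample data with repeating patterns, standing in for table rows.
rows = ["ORDER 1000 ATLANTA PENDING",
        "ORDER 1001 ATLANTA SHIPPED",
        "ORDER 1002 CHARLOTTE PENDING"] * 100

threshold = 10  # rows loaded uncompressed before the dictionary is built
dictionary = build_dictionary(rows[:threshold])
compressed = [compress(r, dictionary) for r in rows[threshold:]]

before = sum(len(r) for r in rows[threshold:])
after = sum(len(r) for r in compressed)
print(f"compressed to {100 * after / before:.0f}% of original size")
```

If the sample (the threshold) is too small to be representative of the table, the dictionary misses common patterns and the ratio suffers, which mirrors why the default of 10,000 rows was too low for CCBCC's largest tables.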
The two largest tables (COEP, BSIS) contained more than 130 million records each, followed by several tables with between 10 and 70 million records. CCBCC grouped the compressible transparent tables using the following row-count thresholds:
Group of 20 tables with more than 3 million records; threshold = 3 million
Group of 47 tables with more than 200,000 records; threshold = 200,000
Group of 102 tables with more than 60,000 records; threshold = 60,000
Note that not all tables matching the thresholds were flagged for compression and added to those groups. Only the ones which showed good compression results in the test phase were selected.

After the initial import and the creation of the dictionary, R3load imports the remaining rows into the table and DB2 compresses the data based on the dictionary.

Tables that are intended for compression during the load phase must have the compression attribute switched on. Since CCBCC had some tables that should be compressed and others that should not, different template files for the Migration Monitor were used. CCBCC ran the import with several instances of the Migration Monitor, and used different values for DB6LOAD_COMPRESSION_THRESHOLD for each instance.

Summary
Combining the Unicode conversion with a database migration paid off for CCBCC – enabling the company to leverage synergies throughout the whole migration process, and to eliminate the duplication of processes such as backup and testing. The whole ERP migration project took about eight weeks from start to finish, including the Unicode conversion.

Another essential aspect was the easy transfer of database management skills from Oracle to DB2, and the user-friendliness of DB2.
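The row-count grouping described above maps naturally to a small helper. The sketch below is hypothetical (it is not an SAP tool, and the non-COEP/BSIS row counts are invented), but it shows how tables might be bucketed so that each Migration Monitor instance can run with a matching DB6LOAD_COMPRESSION_THRESHOLD value.

```python
# Hypothetical sketch: bucket tables into the compression-threshold groups the
# text describes (3M / 200k / 60k rows). Each group would then be imported by
# a MigMon instance with the corresponding DB6LOAD_COMPRESSION_THRESHOLD.
THRESHOLDS = [3_000_000, 200_000, 60_000]  # largest first

def threshold_for(row_count):
    """Return the threshold group a table falls into, or None (no compression)."""
    for t in THRESHOLDS:
        if row_count >= t:
            return t
    return None

# Row counts below (other than the >130M figure for COEP/BSIS) are invented.
tables = {"COEP": 130_000_000, "BSIS": 131_000_000,
          "VBAK": 2_500_000, "T001": 40_000}

groups = {name: threshold_for(rows) for name, rows in tables.items()}
print(groups)
```

In the real project this grouping was only a starting point: tables that did not show good compression results in testing were left out of the groups entirely.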
CCBCC had strong in-house Oracle skills, and yet in a matter of weeks the database administrators became fully competent on DB2 – a tribute to the ease of transition to DB2 for experienced DBAs, regardless of their technical legacy.

CCBCC was able to benefit right away from the value DB2 offers:
Lower TCO
40 per cent reduction in database size
Better performance – manufacturing runs are over 65 per cent faster
Better integration of the database with SAP tools (SAP DBA Cockpit for DB2)
Reduced DBA workload to manage and administer DB2

With DB2 in place, CCBCC is well prepared for the upcoming upgrade to SAP ERP 6.0, which can now be performed much more smoothly and rapidly. The 40 per cent reduction in database size will result in faster backups and shorter runtimes for the SAP software upgrade.

Solutions/Offerings
Hardware: System p: System p5 560Q
Software: DB2 9 for Linux, UNIX and Windows
Services: IBM-SAP Alliance
Page 21: Paul Rivot - IBM · Paul Rivot. Director ... When IBM adds features to DB2, ... Active/Active Replication Q-Replication with DB2 $11,100 Golden Gate $21,350

Coca-Cola Bottling Caps Rising Costs of Oracle
$1.4B manufacturer improves performance while reducing costs

The IBM advantage:
• DB2 deep compression reduced data storage from just under 1TB to 575GB
• Migration in <26 hours, ahead of schedule, under budget
• Higher Oracle licensing costs avoided

The benefits:
• 40% reduction in storage, saving cost, reducing backup times
• 65% faster manufacturing runs: from 90 minutes to 30 minutes
• 5-year cost savings of $750,000
• An easier-to-administer system

Source: ftp://public.dhe.ibm.com/common/ssi/ecm/en/spc03074wwen/SPC03074WWEN.PDF

“By choosing to implement DB2 compression right away, we have reduced the database size by around 40 per cent. This gives us faster backup and reduced storage costs, and makes the SAP technical upgrades easier and quicker.”

— Andrew Juarez, SAP Lead Basis, Coca-Cola Bottling Company Consolidated

Presenter
Presentation Notes
Contact information
ATHMAKURI, RAVI S.
Sr. SAP OS/DB Migrations Consultant
1-267-633-4808
Ravi Athmakuri/Philadelphia/IBM
[email protected]
Additional contact information:
Anette Lörcher
IBM SAP Global Alliance
Ravi Athmakuri/Philadelphia/IBM
[email protected]

Highlights
Overview: Coca-Cola Bottling Co. Consolidated (CCBCC) makes, sells and delivers sparkling and still beverages, primarily products of The Coca-Cola Company. CCBCC is the second-largest bottler of Coca-Cola products in the United States, operating in eleven states, primarily in the southeast. Founded in 1902, CCBCC enjoys net sales of more than $1.4 billion and is headquartered in Charlotte, North Carolina.

Business need: To achieve its business objectives, CCBCC faced a technical upgrade of its SAP R/3 Enterprise system to SAP ERP 6.0. Completing this upgrade would require the company either to upgrade its existing Oracle database and purchase extra Oracle licenses, or to move to a different database platform.

Solution: The CCBCC team decided it was time to derive more performance from the business-critical SAP applications, while driving down hardware and software costs. Instead of upgrading Oracle, the team decided to introduce IBM DB2. As part of the SAP upgrade project, CCBCC's SAP R/3 system would require a conversion to Unicode.

Benefits: Combining the database migration with the SAP Unicode conversion saved time and money. Initial results show that DB2 delivers a reduction in storage needs of approximately 40 per cent. The duration of manufacturing runs was reduced by more than 65 per cent. The migration was completed under budget and ahead of schedule. The company has reduced licensing and maintenance costs by avoiding the purchase of additional Oracle licenses, and predicts savings of about US$750,000 over the next five years.
Back to top

Client information
Client name: Coca-Cola Bottling Co. Consolidated
Location: North America IOT, United States
Industry: Wholesale Distribution & Services
Focus area: Integrated Data Management, Enterprise Resource Planning
General Business: Enterprise Account
Geography: North America IOT
Date Published: 01/09/2009
Last Updated: 07/05/2010
Link to: External Case Study
Back to top

Solution implementation
[To read a German translation of this technical paper, please click here.]

Background, starting point and objectives
Coca-Cola Bottling Co. Consolidated (CCBCC) makes, sells and delivers sparkling and still beverages, primarily products of The Coca-Cola Company. CCBCC is the second-largest bottler of Coca-Cola products in the United States, operating in eleven states, primarily in the southeast. Founded in 1902, CCBCC enjoys net sales of more than $1.4 billion and is headquartered in Charlotte, North Carolina.

Leveraging synergies: SAP Unicode conversion and DB2 migration
Prior to the technical upgrade of the SAP landscape, CCBCC decided to perform both a Unicode conversion and a migration from the existing Oracle database platform to IBM DB2 with Deep Compression. These changes would eliminate the need to buy new Oracle licenses, and would thus reduce the total cost of ownership (TCO).

By switching on the DB2 Deep Compression feature during the migration, the company was able to reduce the size of the database by more than 40 per cent – which will result in faster backups and shorter runtimes for the upcoming SAP software upgrade.

In the meantime, before the SAP upgrade, CCBCC can benefit from the highly automated DB2 database administration, offering a reduced cost of operation. DB2 version 9 includes features such as self-managing storage, self-tuning memory management (STMM), automatic reorganization, automatic runstats, real-time statistics and backup via the integrated FlashCopy® feature.
All database administration and monitoring tasks can be completed from within the SAP Database Administrator (DBA) Cockpit for DB2 – an easy-to-use management environment integrated into the SAP application environment.

Deploying Unicode as a future-proof solution
CCBCC decided to deploy Unicode because all new SAP product releases (from SAP NetWeaver 7.0 onwards) will be based on the Unicode standard. CCBCC wanted to be prepared for new SAP applications such as SAP NetWeaver Process Integration (SAP NW PI), which are already part of future implementation plans.

In technical terms, the requirements for a Unicode conversion are very similar to those of a database migration. In both scenarios the customer must perform an export and import of the database using the SAP program R3load. The Unicode conversion itself is executed during the export phase of the migration. It is therefore very easy to direct the database towards a new target system without additional effort and downtime. Migrating to IBM DB2 in conjunction with an SAP software upgrade and/or Unicode conversion is an opportunity to avoid duplicating project tasks such as backup and testing, and keeps the cost of the migration as low as possible.

Migration process – heterogeneous system copy
CCBCC used a standard SAP methodology for the migration process, known as the Heterogeneous System Copy (or OS/DB Migration) method. CCBCC was able to perform the migration and conversion during a scheduled maintenance window, so there was no need to make use of enhanced migration tools/services from SAP such as Zero Downtime.

The migration project for the entire SAP R/3 Enterprise landscape took eight weeks in total, including two test iterations for the 1TB production database. The migration of the production SAP system itself was completed over one weekend, starting on the Saturday night and finishing in the early hours of Monday morning.
The total downtime for the production migration was just 26 hours.

To achieve this reduced downtime, a set of SAP-specific migration tools was used:
Unsorted export for the transparent tables
Package Splitter for the largest tables ("big tables" group)
Table Splitter for three large cluster tables
Multiple instances of the Migration Monitor to allow distributed parallel import and export processes
R3load with the Deep Compression option to activate compression during the migration phase
The next part of this document describes the way CCBCC utilized these tools, explains the reasons for the choices, and highlights the benefits.

Architectural overview – migration project at CCBCC
For the migration, CCBCC used four logical partitions (LPARs) on an IBM Power Systems server (model p5-560). Three LPARs were used to handle database export processes from the source system, and one LPAR was running the target system for the import processes. The export partitions consisted of a Central Instance / Database (CI/DB) partition, which had 16 CPUs of 1.5GHz and 64GB of memory, and two other partitions that had four CPUs of 1.5GHz and 12GB of memory each. The import partition (the new CI/DB partition) had 16 CPUs of 1.5GHz and 64GB of memory.

During the testing phase, this system setup emerged as the optimal migration environment to handle the migration workload.

In order to meet the downtime objectives, the workload of the export packages was distributed between the CI/DB server and the other two servers (Hosts A and B) running in the first three LPARs. The CI/DB server handled the 3 largest cluster tables via the Table Splitter. Host A handled the smaller tables. Host B was used to handle the export of the "big tables" group (which contained tables of >10 million, >2 million, and >200,000 records); these were divided into smaller packages using the Package Splitter. All three hosts used local storage to dump the export data to disk.
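The workload distribution described above (largest cluster tables to the strongest LPAR, the rest spread across the other hosts) is essentially a load-balancing problem. A greedy largest-first sketch, with entirely invented package sizes and a generic heuristic rather than the actual assignment CCBCC used, might look like:

```python
import heapq

# Hypothetical sketch: spread export packages across hosts so no single LPAR
# becomes the bottleneck. Greedy heuristic: assign each package, largest
# first, to the currently least-loaded host. Sizes in GB are invented.
def distribute(packages, hosts):
    load = [(0, h, []) for h in hosts]  # (total GB, host name, packages)
    heapq.heapify(load)
    for name, size in sorted(packages.items(), key=lambda kv: -kv[1]):
        total, host, assigned = heapq.heappop(load)  # least-loaded host
        assigned.append(name)
        heapq.heappush(load, (total + size, host, assigned))
    return {host: (total, assigned) for total, host, assigned in load}

packages = {"CDCLS": 120, "EDI40": 90, "COEP": 80, "BSIS": 70,
            "misc_1": 40, "misc_2": 35, "misc_3": 20}
plan = distribute(packages, ["CI/DB", "HostA", "HostB"])
for host, (total, assigned) in sorted(plan.items()):
    print(f"{host}: {total} GB -> {assigned}")
```

In practice the split was tuned by hand over two test iterations, which is how the project team validated that three export LPARs plus one import LPAR met the 26-hour downtime window.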
Page 22:

Better data compression means…

lower storage costs

and lower power costs

and less administration

and better performance

26

Presenter
Presentation Notes
- Naturally, compressing data lowers storage costs because you need fewer storage devices.
- Compressing data can also help reduce storage-related energy costs (fewer storage devices means less power consumed), reducing the environmental footprint of an organization.
- There is also less administration overhead, because database backup and restore operations are significantly quicker.
- Many users of DB2's deep compression technology also see performance improvements, thanks to improved I/O performance and more efficient memory utilization. Performance improvements like these can help you save even more by delaying hardware upgrades.
Page 23:

DB2 and SAP

Page 24:

“We expected an improvement of around 20% in terms of system response time, but we found that the new system was actually 40% faster. The DB2 database is even more efficient than we anticipated.”

—Peter Boegler, SAP IT

17

Presenter
Presentation Notes
As part of our close relationship, IBM and SAP technical teams work with one another to measure the performance of DB2 in SAP environments and to optimize DB2 for SAP environments. Did you know that SAP uses only DB2 to power its own SAP systems? SAP has provided IBM with several quotes that we can use publicly. This is one of them. It refers to their original goals for working with DB2 9 and then the actual results. "Our planned system response improvement was around 20 percent, whereas in reality we have observed a 40 percent cut in response times with DB2." - Peter Boegler, Solution Architect, SAP IT
Page 25:

IBM provides the leading systems for SAP environments

• Smarter systems are optimized for SAP environments
– Joint SAP and IBM teams optimize systems
– Lower costs and risk by using WebSphere together with SAP NetWeaver

• DB2 is easy to administer in SAP environments
– IT staff perform all DB2 admin tasks from the SAP tools
– DB2 automates reorgs, tuning, statistics collection, backups, etc.

• IBM systems offer more efficient storage
– DB2 offers strong advantages when it comes to minimizing storage

30

18

“We feel that the close integration between SAP applications and DB2 offers easier management, which reduces our administration workload and cost.”

Sunita Bahadur, Head of IT, SKH Metals

Presenter
Presentation Notes
IBM and SAP work together to optimize for SAP
SAP and IBM work together to optimize DB2 for SAP. The joint SAP and IBM teams co-located in Toronto, Canada and Walldorf, Germany make sure that DB2 and WebSphere Application Server (WAS) are optimized and certified for each new release of SAP in a timely manner. WAS and DB2 are the only products with such a close working arrangement; other vendors typically take several months to become officially supported by SAP. WebSphere requires significantly less hardware than other solutions and provides greater reliability, offering reduced risk of system outages or lost transactions. It is therefore not a surprise that IBM's SAP Initiatives Team has worked with 400+ leading SAP customers, and that there are over 130 joint WebSphere and SAP customer references globally.

SAP integrates DB2 into SAP applications
By integrating DB2 into SAP applications, SAP offers you a single point of contact for support. IBM and SAP cooperate in every phase of the software lifecycle to provide you with the best possible integrated product offering.

DB2 has more efficient storage
SAP environments often have large amounts of data. Some estimates claim that data storage costs represent almost half of enterprise IT infrastructure costs. DB2 typically outperforms other database software when it comes to minimizing data storage.

DB2 is easier to administer
DB2 automates many DBA activities for SAP environments, including database reorganizations, memory tuning, collection of database statistics, database backups, log file management, and storage allocation.

DB2 makes SAP faster
DB2 has consistent leadership of SAP SD 2-tier and 3-tier performance benchmarks. See for yourself at SAP Standard Application Benchmarks.

DB2 offers High Availability and Disaster Recovery as standard
DB2 for SAP includes High Availability and Disaster Recovery capabilities at no extra charge. DB2 provides ultra-fast switchovers from the primary to standby systems; for example, in an SAP test environment with 600 users, services were resumed within 11 seconds.

SAP uses DB2
There can be no greater endorsement than SAP choosing to use DB2 for all of its own SAP applications.
Page 26:

DB2 optimizations reduce infrastructure requirements

31

SAP and DB2 on Power 780

1/4th the number of cores required for IBM DB2 on Power 780 compared with Oracle Database on Sun M9000

SAP Sales and Distribution, ERP 6.0 EHP4 2-tier performance

Results as of 4/02/2010

IBM Power System 780, 8p / 64-c / 256-t, POWER7, 3.8 GHz, 1024 GB memory, 37,000 SD users, dialog resp.: 0.98s, line items/hour: 4,043,670, dialog steps/hour: 12,131,000, SAPS: 202,180, DB time (dialog/update): 0.013s / 0.031s, CPU utilization: 99%, OS: AIX 6.1, DB2 9.7, cert# 2010013; Sun M9000, 64p / 256-c / 512-t, 1156 GB memory, 32,000 SD users, SPARC64 VII, 2.88 GHz, Solaris 10, Oracle 10g, cert# 2009046. All results are 2-tier, SAP EHP 4 for SAP ERP 6.0 (Unicode), and valid as of 4/2/2010.
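The per-core comparison behind the "1/4th the cores" claim is simple arithmetic on the certified figures quoted above:

```python
# Quick arithmetic on the published SAP SD benchmark figures: users per core.
db2_users, db2_cores = 37_000, 64   # IBM DB2 on Power 780 (cert# 2010013)
ora_users, ora_cores = 32_000, 256  # Oracle on Sun M9000 (cert# 2009046)

db2_per_core = db2_users / db2_cores  # about 578 users per core
ora_per_core = ora_users / ora_cores  # 125 users per core
print(f"DB2 delivers {db2_per_core / ora_per_core:.1f}x the SD users per core")
```

So the DB2/Power system supports roughly 16% more users on one quarter of the cores, which works out to about 4.6 times the per-core throughput on this benchmark.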

[Chart: SD users supported – IBM DB2 on Power 780 (64 cores, POWER7): 37,000 SD users; Oracle Database on Sun M9000 (256 cores, SPARC64 VII): 32,000 SD users]

19

Presenter
Presentation Notes
Key points: IBM reduces workload complexity. DB2 ships 50+ features in every release that optimize DB2 for the SAP workload, with a single DB2 tunable for SAP. POWER 780, when running in TurboCore mode, gives more cache per thread – great for databases. The new IBM results on the SAP Sales and Distribution (SD) Benchmark show the spectacular performance and price-performance leadership of the IBM DB2 and Power combination. Compared to Oracle/Sun, IBM can save clients real and significant money in SAP environments by supporting 20% more users while delivering better performance, using only 1/8 the number of CPUs of a comparable Sun system. These results are delivered not only through IBM's technology leadership and integrated, optimized hardware and software, but also through our close partnership with SAP. The SAP Sales and Distribution (SD) Benchmark is the current SAP ERP 6 EHP4 benchmark, designed to simulate a more complete client environment in a sales and distribution ERP setting. This benchmark simulates a complex customer order, sell-from-stock scenario and is a rigorous test for hardware and database solutions. All systems above use the current benchmark version.
Page 27:

DB2 is the most FLEXIBLE database on the market…

Page 28:

Oracle Database Features Supported by DB2 9.7

Page 29:

DB2 SQL/PL Compatibility Results

• Different-size applications tested
– Biggest: 185,000 PL/SQL statements
– Smallest: 2,000 PL/SQL statements

• Variety of EAP participants:
– Different industries
– Different types of solutions
– Different countries

• PL/SQL support results:
– More than 750,000 lines tested
– Average compatibility: 98.43%

“The IBM DB2 9.7 compatibility is amazing, and there are no queries or DB2-specific code in our applications! Everything is compatible with Oracle and DB2 9.7.” – Gene Ostrovsky, VP R&D, ExactCost
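For context, DB2 9.7's Oracle compatibility features (PL/SQL, Oracle data types such as NUMBER and VARCHAR2, implicit casting) are switched on through the DB2_COMPATIBILITY_VECTOR registry variable. A sketch of the typical setup, assuming a DB2 9.7 instance; the database name is illustrative:

```shell
# Enable Oracle compatibility mode on the instance. The setting only
# affects databases created after it is set, and requires a restart.
db2set DB2_COMPATIBILITY_VECTOR=ORA
db2stop
db2start

# Create a database that will accept PL/SQL and Oracle-style SQL.
# A 32K page size is commonly recommended for migrated Oracle schemas.
db2 "CREATE DATABASE TESTDB PAGESIZE 32 K"
```

Existing databases are not retrofitted; the compatibility vector is read at database creation time, which is why migrations start with a fresh database.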

Page 30:

Competitive Platform Support

Page 31:

Itanium-based systems are in the news again, and the headlines and rhetoric continue to increase.

“On Oracle dumping Itanium: … In a report that I wrote last April I said: ‘If I’m running an Oracle database, Oracle infrastructure, or Oracle business applications, I’d be a little concerned about the kind of support and tuning I’d be getting from Oracle on Itanium in the long run.’ It is time to dump Itanium …”

-- Joe Clabby, President Clabby Analytics, April 2011

December 2009: Red Hat to Drop Itanium Support

April 2010: Microsoft Dropping Support for Intel's Itanium Chip

March 2011: Oracle Stops All Software Development for Intel Itanium Microprocessor

March 2011: HP Supports Customers Despite Oracle's Anti-customer Actions

April 2011: Oracle's Itanium Move Shakes Up IT Agendas

Page 32:

Summary

• TCO: DB2 has lower TCO compared to Oracle.

• There are a number of factors that make DB2 a compelling alternative to Oracle:
– Preserves investment in people skills when moving to DB2
– Lowers administrative costs by up to 55%
– Significantly better compression results vs. Oracle 11gR2 Advanced Compression
– Built-in Oracle compatibility features, e.g. the PL/SQL compiler
– Maintenance included in the first-year license cost
– Better performance with less hardware
– 1/3 the cost of Oracle Database and its options
– Virtually unlimited scalability with pureScale
– Choice: the ability to choose the best technology for your business
– pureXML, better virtualization, and more
– Competitive platform support

• Moving to DB2 is the right choice, and over 1,000 Oracle customers made that choice in the past two years.

Page 33:

Recommended actions

No-charge offers to help you get started:

• Proof of Concept

• Business Value Assessment Tool

• Database Migration Analysis
– Tool analyzes PL/SQL, indicating out-of-the-box compatibility
– Compatibility levels average 98%

• IBM Workshop for Oracle Professionals
– DB2 DBA certification

• For additional information, including white papers and demos, please visit www.ibm.com/breakfree
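To illustrate the idea behind such a migration analysis (this is a hypothetical sketch, not IBM's actual tool; the flagged-construct list and function names are invented for illustration), a scanner can tally the fraction of PL/SQL statements that contain no constructs needing manual attention:

```python
# Hypothetical sketch of an "out-of-the-box compatibility" estimate:
# count PL/SQL statements, flag ones using constructs a checker would
# mark for review, and report the compatible percentage.
import re

# Illustrative patterns only; a real analysis is far more thorough.
FLAGGED = [r"\bDBMS_\w+\b", r"\bAUTONOMOUS_TRANSACTION\b"]

def compatibility_estimate(plsql_source: str) -> float:
    """Percentage of statements with no flagged constructs."""
    stmts = [s for s in plsql_source.split(";") if s.strip()]
    flagged = sum(
        1 for s in stmts
        if any(re.search(p, s, re.IGNORECASE) for p in FLAGGED)
    )
    return 100.0 * (len(stmts) - flagged) / len(stmts)

src = """
CREATE OR REPLACE PROCEDURE log_it(msg VARCHAR2) IS
BEGIN
  INSERT INTO app_log VALUES (SYSDATE, msg);
  COMMIT;
END;
"""
print(round(compatibility_estimate(src), 1))  # 100.0 for this snippet
```

A real tool would parse statements properly rather than splitting on semicolons, but the per-statement tally is how a single "xx% compatible out of the box" figure can fall out of a source scan.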

45

Presenter
Presentation Notes
IBM offers just such a detailed analysis as part of its Business Value Assessment process. To have IBM perform a complimentary Business Value Assessment for your environment, please contact your IBM representative. We've got a number of other things that we can do at no charge to you, including a consolidation evaluation: we can come to you, evaluate your environment, let you know what would be involved, and we have some great tools that allow us to do this with great specificity. At the same time we can even execute a migration workshop and give you a toolkit associated with it. We can also give you a database migration analysis: we have a tool that you can run against your code that tells you what your out-of-the-box compatibility is. It will say: "Hey, xx% of your code is compatible out of the box, you are only going to need to tweak x%, and here are the particular statements you're going to need to tweak." We've got a lot more great information on ibm.com/breakfree, including all of the white papers from which I've drawn charts here, so make sure to go visit us.
Page 34: