Database Performance: DBMS, Hardware, Application

Upload: daniela-stafford

Post on 20-Jan-2016


Page 1:

Database performance

DBMS

Hardware

Application

Page 2:

Conclusions WISC & AS3AP

Observation:

Database system performance differs widely

A benchmark suite is a collection of database tasks which

• should have a precisely articulated goal

• should be minimal

• should be scalable

• should have an associated metric

Page 3:

Performance Benchmarks

• Commonly used performance measures:

– Response time (delay from submission of transaction to return of result)

– Throughput (transactions per second, or tps)

– Availability or mean time to failure

– Speedup (linear speedup: twice the resources halve the running time)

– Scaleup (response time remains constant with increasing load and resources)

– Sizeup (doubling the size does not double required resources)

Page 4:

Performance Benchmarks (Cont.)

• Beware when computing the average throughput of different transaction types

– E.g., suppose a system runs transaction type A at 99 tps and transaction type B at 1 tps.

– Given an equal mixture of types A and B, the throughput is not (99+1)/2 = 50 tps.

– Running one transaction of each type takes 1 + 0.01 seconds, giving a throughput of 1.98 tps (= 2/1.01).

– Interference (e.g., lock contention) makes even this estimate incorrect if different transaction types run concurrently
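The slide's arithmetic can be checked directly; the actual mixed-workload throughput is the harmonic mean of the per-type rates, not the arithmetic mean:

```python
# Sketch of the slide's arithmetic: naive average vs. actual mixed-workload throughput.
rate_a = 99.0  # tps for transaction type A
rate_b = 1.0   # tps for transaction type B
naive = (rate_a + rate_b) / 2             # 50 tps -- misleading
time_for_pair = 1 / rate_a + 1 / rate_b   # seconds to run one transaction of each
actual = 2 / time_for_pair                # harmonic-mean throughput
print(naive, round(actual, 2))            # 50.0 1.98
```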

Page 5:

Metric Selections

• Arithmetic mean

• Geometric mean

• Harmonic mean
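A quick sketch of the three candidate metrics on a hypothetical set of per-query times (the numbers are invented for illustration; the choice among the means is what the next slides discuss):

```python
import math

times = [0.5, 2.0, 8.0]  # hypothetical per-query response times (seconds)
arith = sum(times) / len(times)                # pulled up by the outlier
geom = math.prod(times) ** (1 / len(times))    # balances ratios between queries
harm = len(times) / sum(1 / t for t in times)  # appropriate mean for rates
print(arith, geom, round(harm, 3))
```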

Page 6:

Page 7:

Metric Selections

• Geometric mean

• Shift values away from zero to “level” their impact

Page 8:

Metric Selections

• Criteria

– Easy to explain in mathematical terms to users

– Non-hypersensitivity

– Scalability translates to easy change of metric

– Balance to capture delta changes in outlier positions

– Easy translation to operational decisions

• How do we relate a performance metric to an application field?

Page 9:

Database Application Classes

• Online transaction processing (OLTP)

– requires high concurrency and clever techniques to speed up commit processing, to support a high rate of update transactions.

• Decision support applications

– including online analytical processing, or OLAP applications, require good query evaluation algorithms and query optimization.

• Embedded applications

– Require a small footprint and small database storage

Page 10:

Benchmark Suites

• The Transaction Processing Performance Council (www.tpc.org) benchmark suites are widely used.

– TPC-A and TPC-B: simple OLTP application modeling a bank teller application with and without communication

• Not used anymore

– TPC-C: complex OLTP application modeling an inventory system

• Current standard for OLTP benchmarking

Page 11:

Benchmark Suites (Cont.)

• TPC benchmarks (cont.)

– TPC-D: complex decision support application

• Superseded by TPC-H and TPC-R

– TPC-H: (H for ad hoc)

• Models ad hoc queries which are not known beforehand

– Total of 22 queries with emphasis on aggregation

• prohibits materialized views

• permits indices only on primary and foreign keys

– TPC-R: (R for reporting) same as TPC-H, but without any restrictions on materialized views and indices

– TPC-W: (W for Web) end-to-end Web service benchmark modeling a Web bookstore, with a combination of static and dynamically generated pages

Page 12:

TPC Performance Measures

• TPC performance measures

– transactions-per-second with specified constraints on response time

– transactions-per-second-per-dollar accounts for cost of owning system

• TPC benchmark requires database sizes to be scaled up with increasing transactions-per-second

– reflects real world applications where more customers means more database size and more transactions-per-second

• External audit of TPC performance numbers mandatory

– TPC performance claims can be trusted

Page 13:

TPC Performance Measures

• Two types of tests for TPC-H and TPC-R

– Power test: runs queries and updates sequentially, then takes the mean to find queries per hour

– Throughput test: runs queries and updates concurrently

• multiple query streams run in parallel, plus one parallel update stream

– Composite queries-per-hour metric: square root of the product of the power and throughput metrics

– Composite price/performance metric
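The composite metric is straightforward to compute; the power and throughput numbers below are hypothetical placeholders, not published TPC results:

```python
import math

power = 1200.0       # hypothetical power-test metric (queries/hour)
throughput = 800.0   # hypothetical throughput-test metric (queries/hour)
composite = math.sqrt(power * throughput)  # composite queries-per-hour metric
print(round(composite, 1))
```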

Pages 14-17: [figure-only slides; no text recovered]

Page 18:

Other Benchmarks

• MySQL and SQLite contain (test) benchmarks

• Benchmarks for XML, e.g., XMark

• http://www.xml-benchmark.org

• Invent your own benchmark to understand a portion of the system behavior

Page 19:

Benchmark Blackholes

• Areas for (practical) research

– System components, e.g., view maintenance and optimizer stability

– Conformance tests for language standards

– Scalable catalog management

– Confined resource performance, e.g. running on an embedded device.

– Using MonetDB’s storage model on traditional systems.

Page 20:

Performance Tuning

Page 21:

Canary monitoring

• GOAL: give a warning when the system is reaching its limits in terms of user responsiveness, either due to unexpected load or to reaching resource limits.

We need Tweety.sql!

Page 22:

Canary monitoring

• GOAL: give a warning when the system is reaching its limits in terms of user responsiveness, either due to unexpected load or to reaching resource limits.

• Rules:

– Don’t use functions outside the realm of SQL

– It should run within a few seconds on a lightly loaded system

– As little intrusion as possible to others
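A minimal sketch of such a canary probe (hypothetical, using Python and SQLite rather than the Tweety.sql script the slides refer to): it times one query that touches join, aggregation, and sorting, and raises a warning flag past a threshold:

```python
import sqlite3
import time

# Hypothetical canary: a tiny table and one query exercising join,
# aggregation and sorting; warn if it takes longer than a few seconds.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE canary(a INTEGER, b INTEGER)")
con.executemany("INSERT INTO canary VALUES (?, ?)",
                [(i, i % 10) for i in range(1000)])
t0 = time.perf_counter()
rows = con.execute(
    "SELECT x.b, COUNT(*) FROM canary x JOIN canary y ON x.b = y.b "
    "GROUP BY x.b ORDER BY 2 DESC").fetchall()
elapsed = time.perf_counter() - t0
warning = elapsed > 2.0  # "within a few seconds on a lightly loaded system"
```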

Page 23:

Canary monitoring

• Solution: touch all the components of the system while still not being resource-intensive.

– Database construction and population

• DBtapestry 100k x 2 = 1Mb

– Transactional update/abort

– 4-way joins, arithmetic and output

– (Clustered-)Index management

– Bulk update

Page 24:

[Chart: canary run timings, 2/5/2005 8:00:44 AM onward, P4 3.2 GHz, SQL Server Beta 2; series: Total, Bulk-Upd, J4MO, Clu-Idx, Population, Report, Index]

Its first steps in a cruel world: too high variance

Page 25:

[Chart: canary run timings, 2/5/2005 8:00:44 AM onward, P4 3.2 GHz, SQL Server Beta 2; series: Total, Updates, Clu-Idx, Join, Math, Population, Report, Index]

Running a single 1M x 32 DBtapestry experiment

Page 26:

This is what you expect on a quiet Sunday

[Chart: canary run timings, 2/5/2005 8:00:44 AM until 2/6/2005 4:49:58 AM (crash), P4 3.2 GHz, SQL Server Beta 2; series: Total, Bulk-Upd, Clu-Idx, Join, Math, Population, Report, Index]

Page 27:

[Chart: canary run timings, February 4-5, 2005, after the crash, until February 7, 7:00 AM, P4 3.2 GHz, SQL Server Beta 2; series: Total, Clu-Idx, Bulk-Upd, Join, Math, Population, Report, Index]

Tweety.sql died after 70 hrs, probably caused by SQLWb limitations:

“The query has exceeded the maximum number of result sets that can be displayed in the results grid. Only the first 100 result sets are displayed in the grid.”

Switch to SQLcmd

Page 28:

[Chart: canary run timings, Tuesday, February 8, as of 6:00 AM, P4 3.2 GHz, SQL Server Beta 2; series: Total, J4MO, Bulk-Upd, Upd-Idx, Population, Report, Index]

Annotations: Web browsing and email the whole morning; on the phone for a long time; to the Microsoft shop; back from the Microsoft shop

Page 29:

What was running?

• 2x Excel

• 2x PowerPoint

• 3x IE

• 1x Outlook

• 1x Performance tool

• 1x SQLcmd

• 1x Thunderbird mailer

• 1x SQLWb

• 1x PuTTY

• 1x FileZilla

• 1x Folder

Page 30:

Performance Tuning

• Database systems and database applications tend to perform sub-optimally. Learning how and where to look for improvements is essential; it can easily improve performance ten-fold.

• Performance tuning is adjusting various parameters and design choices to improve system performance for a specific application.

• Tuning is best done by

1. identifying bottlenecks, and

2. eliminating them.

Page 31:

Performance Tuning

• A database system can be tuned at three levels:

– Hardware -- e.g., add disks to speed up I/O, add memory to increase buffer hits, move to a faster processor.

– Database system parameters -- e.g., set buffer size to avoid paging of buffer, set check pointing intervals to limit log size.

– Higher level database design, such as the schema, indices and transactions (more later)

Page 32:

Bottlenecks

• The performance of most systems (at least before they are tuned) is usually limited by the performance of one or a few components: these are called bottlenecks

– E.g., 80% of the code may take up 20% of the time, while 20% of the code takes up 80% of the time

• Worth spending most effort on the 20% of the code that takes 80% of the time

• Removing one bottleneck often exposes another

• De-bottlenecking consists of repeatedly finding and removing bottlenecks

– This is a heuristic

Page 33:

Identifying Bottlenecks

• Transactions request a sequence of services, e.g., CPU, disk I/O, locks

• With concurrent transactions, a transaction may have to wait for a requested service while other transactions are being served

• Research issue: can we model database behavior as a queueing system with a queue for each service?

– request a service, wait in queue for the service, and get serviced

• Queueing simulation does not precisely predict overall behavior unless you have long-running transactions, because these reduce the error rate and the disturbance caused by the experimental setup.

Page 34:

Queues In A Database System

Page 35:

Identifying Bottlenecks

• Bottlenecks in a database system typically show up as very high utilizations (and correspondingly, very long queues) for a particular service

– E.g. disk vs CPU utilization

– 100% utilization leads to very long waiting time:

• Rule of thumb: design system for about 70% utilization at peak load

• utilization over 90% should be avoided

• Ill-balanced resource allocation in the underlying operating system
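The 70% rule of thumb can be motivated with a tiny M/M/1 queueing sketch (an assumption of this example, not something the slides derive): the expected time in the system is 1/(mu - lambda), which explodes as utilization approaches 100%:

```python
# M/M/1 sketch: expected time in system T = 1 / (mu - lam).
def time_in_system(lam, mu):
    # lam: arrival rate, mu: service rate (both per second)
    assert lam < mu, "queue is unstable at or above 100% utilization"
    return 1.0 / (mu - lam)

service = time_in_system(0, 100)         # bare service time: 0.01 s
t70 = time_in_system(70, 100) / service  # ~3.3x service time at 70% utilization
t90 = time_in_system(90, 100) / service  # 10x at 90%
t99 = time_in_system(99, 100) / service  # 100x at 99%
print(round(t70, 2), t90, t99)
```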

Page 36:

Tunable Parameters

• Tuning of hardware

• Tuning of algorithms

• Tuning of schema

• Tuning of indices

• Tuning of materialized views

• Tuning of transactions

Page 37:

Tuning of Hardware

• Even well-tuned transactions typically require a few I/O operations

– Typical disk supports about 100 random I/O operations per second

– Suppose each transaction requires just 2 random I/O operations.

– Then to support n transactions per second, we need to stripe data across n/50 disks (ignoring skew)

• Number of I/O operations per transaction can be reduced by keeping more data in memory

– If all data is in memory, I/O needed only for writes

– Keeping frequently used data in memory reduces disk accesses, reducing number of disks required, but has a memory cost
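The n/50 figure above follows from dividing the total random I/Os per second by the per-disk capacity; a small helper (with the slide's numbers as defaults) makes the arithmetic explicit:

```python
import math

def disks_needed(tps, ios_per_txn=2, random_ios_per_disk=100):
    # Total random I/Os per second divided by per-disk capacity, rounded up;
    # defaults are the slide's figures (2 I/Os per txn, 100 I/Os/s per disk).
    return math.ceil(tps * ios_per_txn / random_ios_per_disk)

print(disks_needed(1000))  # 20 disks for 1000 tps, i.e. n/50
```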

Page 38:

RAID Levels

• RAID Level 1: mirrored disks with block striping; offers the best write performance.

• Popular for applications such as storing log files in a database system.

Page 39:

RAID Levels (Cont.)

• RAID Level 5: block-interleaved distributed parity; partitions data and parity among all N + 1 disks, rather than storing data on N disks and parity on 1 disk.

– E.g., with 5 disks, the parity block for the nth set of blocks is stored on disk (n mod 5) + 1, with the data blocks stored on the other 4 disks.

– Before writing a block, parity data must be computed

• Can be done by using the old parity block, the old value of the current block, and the new value of the current block (2 block reads + 2 block writes)

Page 40:

Hardware Tuning: Choice of RAID Level

• Should one use RAID 1 or RAID 5?

– Depends on the ratio of reads to writes

• RAID 5 requires 2 block reads and 2 block writes to write out one data block

• If an application requires K reads and L writes per second

– RAID 1 requires K + 2L I/O operations per second

– RAID 5 requires: K + 4L I/O operations per second
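These I/O counts are easy to encode and compare; for a write-heavy workload, RAID 5 quickly issues more I/Os than RAID 1:

```python
def raid1_ios(reads, writes):
    return reads + 2 * writes   # every write goes to both mirror disks

def raid5_ios(reads, writes):
    return reads + 4 * writes   # every write costs 2 block reads + 2 block writes

# With K = 100 reads/s and L = 50 writes/s:
print(raid1_ios(100, 50), raid5_ios(100, 50))  # 200 300
```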

Page 41:

Hardware Tuning: Choice of RAID Level

• For reasonably large K and L, this requires many disks to handle the workload

– RAID 5 may require more disks than RAID 1 to handle load!

– Apparent saving of number of disks by RAID 5 (by using parity, as opposed to the mirroring done by RAID 1) may be illusory!

• Rule of thumb: RAID 5 is fine when writes are rare and data is very large, but RAID 1 is preferable otherwise

– If you need more disks to handle I/O load, just mirror them since disk capacities these days are enormous!

Page 42:

Hardware Tuning: Five-Minute Rule

• Question: which data should be kept in memory?

– If a page is accessed n times per second, keeping it in memory saves

n × (price-per-disk-drive / accesses-per-second-per-disk)

– The cost of keeping a page in memory is

price-per-MB-of-memory / pages-per-MB-of-memory

– Break-even point: the value of n for which the two costs are equal

• If accesses are more frequent than this, the saving is greater than the cost

– Solving the above equation with current disk and memory prices leads to the

5-minute rule: if a randomly accessed page is used more frequently than once in 5 minutes, it should be kept in memory
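The break-even computation can be written out directly; all prices below are hypothetical placeholders (the actual 5-minute figure depends on the disk and memory prices of the day):

```python
# Break-even access rate n at which the memory cost of a page equals the
# disk-I/O saving. All prices are hypothetical placeholders.
price_per_disk = 200.0             # dollars per disk drive
accesses_per_sec_per_disk = 100.0  # random I/Os per second per disk
price_per_mb = 0.10                # dollars per MB of memory
pages_per_mb = 128.0               # 8 KB pages

saving_per_access_rate = price_per_disk / accesses_per_sec_per_disk
memory_cost_per_page = price_per_mb / pages_per_mb
break_even_n = memory_cost_per_page / saving_per_access_rate  # accesses/sec
interval = 1.0 / break_even_n  # seconds between accesses at break-even
print(interval)
```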

Page 43:

Hardware Tuning: One-Minute Rule

• For sequentially accessed data, more pages can be read per second. Assuming sequential reads of 1 MB of data at a time:

1-minute rule: sequentially accessed data that is accessed once or more in a minute should be kept in memory

• Prices of disk and memory have changed greatly over the years, but the ratios have not changed much

• Sequential bandwidth can grow to roughly 1 Gb per second

Page 44:

Tuning of algorithms

• Data structures often expose hotspots

– The root of a B-tree

– The front of a message queue

– The log file

– Value encoding

• Hotspot detection is difficult for end users, and hotspots are impossible to resolve directly

– Open source does not necessarily help

Page 45:

Tuning the Database Design

• Schema tuning

– Vertically partition relations to isolate the data that is accessed most often -- only fetch needed information.

– Improve performance by storing a denormalized relation

• E.g., store join of account and depositor; branch-name and balance information is repeated for each holder of an account, but join need not be computed repeatedly.

• better to use materialized views (more on this later..)

– Cluster together on the same disk page records that would match in a frequently required join,

• compute join very efficiently when required.

Page 46:

Tuning the Database Design (Cont.)

• Index tuning

– Create appropriate indices to speed up slow queries/updates

– Speed up slow updates by removing excess indices (tradeoff between queries and updates)

– Choose type of index (B-tree/hash) appropriate for most frequent types of queries.

– Choose which index to make clustered

• Index tuning wizards look at past history of queries and updates (the workload) and recommend which indices would be best for the workload

Page 47:

Tuning the Database Design (Cont.): Materialized Views

• Materialized views can help speed up certain queries

– Particularly aggregate queries

• Overheads

– Space

– Time for view maintenance

• Immediate view maintenance: done as part of the update transaction

– time overhead paid by the update transaction

• Deferred view maintenance: done only when required

– the update transaction is not affected, but system time is spent on view maintenance

» until updated, the view may be out of date

• Preferable to a denormalized schema, since view maintenance is the system's responsibility, not the programmer's

– Avoids inconsistencies caused by errors in update programs

Page 48:

Tuning the Database Design (Cont.)

• How to choose the set of materialized views

– Helping one transaction type by introducing a materialized view may hurt others

– Choice of materialized views depends on costs

• Users often have no idea of actual cost of operations

– Overall, manual selection of materialized views is tedious

• Some database systems provide tools to help DBA choose views to materialize

– “Materialized view selection wizards”

Page 49:

Tuning of Transactions

• Basic approaches to tuning of transactions

– Improve set orientation!

– Reduce lock contention

• Rewriting of queries to improve performance was important in the past, but smart optimizers have made this less important

Page 50:

Tuning of Transactions

• Communication and query-handling overheads are a significant part of the cost of each call

– Combine multiple embedded SQL/ODBC/JDBC queries into a single set-oriented query

• Set orientation -> fewer calls to the database

• E.g., tune a program that computes the total salary for each department using a separate SQL query by instead using a single query that computes the total salaries for all departments at once (using group by)

– Use stored procedures: avoids re-parsing and re-optimization of the query
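The group-by rewrite can be illustrated with SQLite from Python (table and column names are invented for the example): one set-oriented query replaces a per-department loop of queries.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp(dept TEXT, salary INTEGER)")
con.executemany("INSERT INTO emp VALUES (?, ?)",
                [("dev", 300), ("sales", 100), ("sales", 200)])
# One set-oriented query instead of one query per department:
totals = dict(con.execute(
    "SELECT dept, SUM(salary) FROM emp GROUP BY dept"))
print(totals)
```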

Page 51:

Tuning of Transactions (Cont.)

• Reducing lock contention

• Long transactions (typically read-only) that examine large parts of a relation result in lock contention with update transactions

– E.g. large query to compute bank statistics and regular bank transactions

• To reduce contention

– Use multi-version concurrency control

• E.g. Oracle “snapshots” which support multi-version 2PL

– Use degree-two consistency (cursor-stability) for long transactions

• Drawback: result may be approximate

Page 52:

Tuning of Transactions (Cont.)

• Long update transactions cause several problems

– Exhaust lock space

– Exhaust log space

• and also greatly increase recovery time after a crash, and may even exhaust log space during recovery if recovery algorithm is badly designed!

Page 53:

Tuning of Transactions (Cont.)

• Use mini-batch transactions to limit the number of updates that a single transaction can carry out. E.g., if a single large transaction updates every record of a very large relation, the log may grow too big.

• Split a large transaction into a batch of “mini-transactions,” each performing part of the updates

• Hold locks across transactions in a mini-batch to ensure serializability

• If lock table size is a problem, locks can be released, but at the cost of serializability

• In case of failure during a mini-batch, its remaining portion must be completed on recovery, to ensure atomicity.
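A mini-batch loop might look like the following sketch (hypothetical, SQLite from Python): committing per batch keeps the lock table and log small, but note that it gives up the serializability guarantee discussed above unless locks are held across the batches.

```python
import sqlite3

# Hypothetical mini-batch sketch: commit every `batch` rows so locks and
# log stay small. Each commit ends one "mini-transaction"; serializability
# is NOT preserved across batches here.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(id INTEGER PRIMARY KEY, v INTEGER)")
con.executemany("INSERT INTO t VALUES (?, 0)", [(i,) for i in range(10)])
con.commit()

batch = 3
ids = [r[0] for r in con.execute("SELECT id FROM t ORDER BY id")]
for start in range(0, len(ids), batch):
    chunk = ids[start:start + batch]
    con.executemany("UPDATE t SET v = 1 WHERE id = ?", [(i,) for i in chunk])
    con.commit()  # each mini-transaction commits on its own
```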

Page 54:

Performance Simulation

• Performance simulation using a queueing model is useful to predict bottlenecks as well as the effects of tuning changes, even without access to a real system

• Simulation model is quite detailed, but usually omits some low level details

– Model service time, but disregard details of service

– E.g. approximate disk read time by using an average disk read time

• Experiments can be run on model, and provide an estimate of measures such as average throughput/response time

• Parameters can be tuned in model and then replicated in real system

– E.g. number of disks, memory, algorithms, etc

Page 55:

Learning points

• Database structures and algorithms require constant upgrades with evolving hardware and application requirements

• Database systems have many knobs to tune

– The vendors are removing them!

– The holy grail is self-management

• Database applications

– Many modern database applications need set-level thinking instead of object/record iteration.