Schema Management: Eliminating the DBA Nightmare of UDCL


TRANSCRIPT


  • The reader will find statements and opinions throughout this presentation; these are
    purely my own, and CA Technologies cannot be held responsible for them.

    You might find differences between the IBM documentation and the statements/scenarios in
    this presentation, since most of the content is derived from using live DB2 subsystems
    whose maintenance level may differ from yours.

    Please note that NOT every schema change is covered. The focus is on tablespace, table
    and index alterations.

    3

  • Working with DB2, whether as a DBA, systems programmer or application developer, we
    always want more. DB2 has continuously improved, and a huge number of new features have
    been delivered over the past decade or two. However, when we see a new feature we often
    get very excited and can't wait to adopt and exploit it.

    That being said, there is often a catch, so this presentation will also look into WHERE
    the grass isn't necessarily greener (or as green as it seems) due to limitations, changes
    and things you need to consider . . .

    4

  • The focus of this presentation is to:

    1) Look at what has changed over the past few releases, since it is becoming harder and
       harder to memorize what you can do and what is not possible.

    2) The SYSADM auth is a dying breed, so we will look into some of the new security
       components which will limit what we as DBAs can do but make it easier to please the
       auditors, with the goal of still being able to get the job done.

    3) Considering all the improvements in terms of online schema changes, there really isn't
       a need for a schema management synchronization tool any more . . . or is there?

    4) What's on Steen's Christmas wish list for making the DBA life even easier and avoiding
       the need to DROP-CREATE.

    5

  • Looking at what we could do in terms of ALTERATIONS twenty-five years ago and now is
    like night and day.

    If you ask whether the DBA life has become easier, the answer is both yes and no.
    Compared to the past, where only a limited number of changes could be done (due to the
    limitations), today's ALTER capabilities look like CANDYLAND, BUT at the same time a load
    of limitations exists depending on the object and its dependencies, so it is a lot harder
    to respond to an ALTER request without spending a lot of time analyzing the environment.

    Let's look at the major evolution over the past 25 years, starting with DB2 V3.

    6

  • As you can see, DB2 V3 had very few schema changes you could do without dropping the
    objects.

    Interestingly, there are some parameters that have either disappeared (DSETPASS) or are
    close to being eliminated (the different PROCs). The reason I believe the PROCs are close
    to extinction is that many new features in DB2 9 and 10 can only be used when these PROCs
    are not in play.

    Another interesting piece is ADD COLUMN to a table without the drop, provided the column
    is Not Null With Default and is added as the last column of the table. More about this
    later, since some dramatic changes were introduced after DB2 V8 became GA.

    7

  • If you look at what could be ALTERed, and at the additions to the alteration
    possibilities between the first release and DB2 V7, almost nothing changed. It seems
    IBM's focus was elsewhere, but something started to change in the 90s.

    The number of tables in each subsystem exploded, and what used to be a large table with a
    few million rows started to be considered small. Billions of rows and thousands of tables
    became the norm, meaning that ALTERing a table/index became a nightmare due to U-D-C-L
    (Unload-Drop-Create-Load) and the outage associated with it.

    The ALTER changes that did arrive between the beginning and DB2 V7 were mostly related to
    other new features being introduced:

    a) DB2 became truly relational with PK/FK, so now you could add/remove foreign keys and
       primary keys.
    b) Identity columns were introduced, so it became possible to modify their values.
    c) DB2 started to move validation into the engine, hence constraints.
    d) The number of partitions grew from 16 to 64 to 254.
    e) Data Sharing was introduced in DB2 V4, and DB2 started to be considered a real
       transaction engine alongside IMS/DB.

    8

  • Quite a few improvements were introduced between DB2 V3R1 and DB2 V7.

    Some of these ALTERATIONS were intrusive, meaning the object ended up in a restrictive
    state such as CHECK-PENDING (like adding a foreign key or table and column constraints),
    while other alterations could be postponed for implementation (like COMPRESS and
    MAXROWS).

    Other alterations resulted in a different outage, such as REORG-PENDING (like LIMITKEY
    changes).

    9

  • So considering the dramatic increase in the number of objects, their size, and the
    importance of DB2 as THE DATABASE ENGINE, the pressure was on to do more schema changes
    online, and DB2 V8 made a giant leap in this direction, but at a cost (hence my question:
    is the grass really greener?).

    The major changes introduced in DB2 V8 were related to table-column and index
    alterations, so we'll look into these in detail and also at the implications, like
    restrictive states and performance considerations (still in play for DB2 9 and 10).

    More importantly, these new schema alterations have some limitations which need to be
    considered (related to another bullet point in this presentation: whether a schema tool
    is really needed any more).

    Let's have a look!

    10

  • One DBA nightmare was having to continuously monitor space, making sure a tablespace (or
    index) didn't run out of extents. We have all experienced SQLCODE -904 unavailable
    resource due to max extents being reached on a volume or for a VSAM dataset. The sliding
    scale allocation scheme was a huge relief and is now used by most DB2 shops to some
    extent.

    For tables it became easier to alter constraints.

    VOLATILE was added, so it was no longer necessary to keep running RUNSTATS just to avoid
    ending up with tablespace scans while a table continued to grow.

    MQTs were added, along with different options to get quicker access to the data needed
    for combined table results.

    Security started to get a HUGE focus, so new features were added so that users could only
    see certain data based on the LEVEL they were granted, but this introduced issues with
    UNLOAD, REORG etc.

    Informational RI was introduced for a few reasons, like QUIESCE TABLESPACESET and query
    rewrite related to MQTs.

    REGENERATE VIEW was made available to handle COLUMN alterations where VIEWS reference the
    altered columns (more on the next slide).


  • Here are the most important ALTER capabilities (a small sketch follows below):

    ADD and ROTATE partition
    UN-TIE the link between the PARTITIONING scheme and CLUSTERING
    Add a new column to an index
    Shorten the index size and get better optimizer usage of indexes containing VARCHAR
    columns (PADDED / NOT PADDED)
    Increase COLUMN length and change some attributes
    Relief from the IDENTITY COLUMN nightmare by removing the link to a table (introducing
    SEQUENCE objects)
    RESTART capabilities for identity columns (and, by then, sequence objects)

    This all looks VERY good, but . . .

    12
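
    To make the list above concrete, here is a minimal sketch of the V8-style alterations;
    the object names, the integer limit keys and the VARCHAR length are assumptions made up
    for the example:

        -- Add and rotate partitions (table-controlled partitioning, integer partitioning column)
        ALTER TABLE TB1 ADD PARTITION ENDING AT (2012);
        ALTER TABLE TB1 ROTATE PARTITION FIRST TO LAST ENDING AT (2013) RESET;

        -- Let the optimizer use the full key of an index containing VARCHAR columns
        ALTER INDEX IX1 NOT PADDED;

        -- Increase a column length without DROP-CREATE
        ALTER TABLE TB1 ALTER COLUMN COMMENTS SET DATA TYPE VARCHAR(500);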

  • All these capabilities do look very good, and they do to some degree eliminate the need
    for DROP-CREATE, so let's look at when the NATIVE SQL ALTER cannot be done:

    1) As mentioned earlier, a lot of the new capabilities require the old PROCs to be
       removed.
    2) When foreign key / primary key relationships are in place, the column cannot be
       altered.
    3) If the table column is referenced in an MQT.
    4) LOB columns and ROWID cannot be altered, nor can DATE and TIME columns.
    5) Unless you have the appropriate DB2 9 maintenance in place, or are still on DB2 V8,
       Data Capture Changes had to be removed prior to the column change, and a subsequent
       REORG was needed before DCC could be enabled again.

    The limitation with Data Capture Changes was lifted in February 2011 by a PTF to DB2 9.

    13

  • So once we can exploit the neat features in DB2 V8 (adding new columns, altering column
    length and to some degree changing attributes etc.), there are a couple of issues to
    consider (see the sketch below).

    If you depend on using DSN1COPY to clone / copy a pageset, remember to execute the REPAIR
    utility with VERSIONS. The reason is that the source dataset might have SYSTEM PAGES, and
    DB2 requires system pages to be reflected in the catalog (different VERSION columns exist
    in the catalog).

    Another issue to consider when ALTERing a COLUMN LENGTH is the performance degradation.
    Every time DB2 retrieves a row, the row has to be reformatted from its original format to
    match the altered column length/attribute. The way to avoid this overhead (which is
    indicated by the tablespace status AREO*) is to execute a REORG (or REBUILD for an index)
    so that all rows match the current definition.

    14
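
    As a sketch of the two remedies above (the tablespace name DB1.TS1 is made up): after the
    DSN1COPY into the target object, register the version information with

        REPAIR VERSIONS TABLESPACE DB1.TS1

    and remove the AREO* status, and with it the row-conversion overhead, by reorganizing:

        REORG TABLESPACE DB1.TS1 SHRLEVEL REFERENCE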

  • DB2 9 continued the pace of many changes, but mostly new capabilities rather than the
    same kind of schema changes as in DB2 V8.

    DB2 objects can be truly SMS managed.

    It is now possible to turn off logging; be very careful, and request my NOT LOGGED
    presentation.

    A new tablespace type was introduced, UNIVERSAL, combining the best of two worlds
    (segmented and partitioned).

    To facilitate quicker INSERTs, APPEND can be added to the TABLE to ignore the clustering
    insert sequence.

    CLONE tables were added to facilitate online LOAD REPLACE and switching between two
    pagesets.

    Columns can be renamed like tables, and a column default value can be dropped/modified
    using online schema change.

    A column can be specified as IMPLICITLY HIDDEN, meaning SELECT * will not show the
    column.

    Indexes can now be compressed too (page size changes) and REGENERATED using ALTER.

    15

  • Finally, security is getting tighter and tighter; ROLES and TRUSTED CONTEXT/CONNECTION
    were introduced to limit who can implement which schema changes and from where.

    15

  • And then my favorites from a DBA perspective:

    1) ADD a new column, or an existing table column, to an index without the drop (the index
       will have to be rebuilt).
    2) Being able to rename a table, column and index without the drop.

    This almost sounds too good to be true, and there certainly are limitations, so let's
    have a look.

    16

  • The DB2 native ALTER TABLE RENAME COLUMN cannot be done if any of these conditions exist
    (the statements themselves are sketched below):

    -) Views are referencing the table column
    -) The column has a FIELDPROC
    -) An index on expression exists on the column
    -) A check constraint exists on the column
    -) A trigger or MQT references the TABLE

    The TABLE cannot be renamed for almost the same reasons, except that a column FIELDPROC
    is tolerated.

    For both table and column rename, this isn't possible when the table is a CLONE or has a
    CLONE; this will be covered in more detail on the next slide.

    For a Universal Tablespace defined as partition BY GROWTH, the MAXPARTITIONS cannot be
    altered (increased) until DB2 10 NFM. On a side note to this topic, Willie Favero made an
    excellent point a few months ago: do NOT specify more than needed in one go, since DB2
    thread storage will act as if all (MAX) partitions were allocated.

    17
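
    Assuming none of the blocking conditions above exist, the rename statements themselves
    are simple one-liners (all names here are invented for the example):

        -- Rename a table and an index without DROP-CREATE
        RENAME TABLE OLD_ACCOUNTS TO ACCOUNTS;
        RENAME INDEX ACCTIX1 TO XACCT01;

        -- DB2 9: rename a column in place
        ALTER TABLE ACCOUNTS RENAME COLUMN ACCT_NO TO ACCOUNT_NUMBER;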

  • As mentioned on the previous slide, CLONE tables and BASE tables having a clone have
    quite a few more limitations in terms of schema changes. In fact NOTHING can be changed
    in terms of attributes for the base and clone (please contact me for another IDUG
    presentation I have done on CLONE tables).

    The only way to change anything on the BASE or CLONE (even PRIQTY and CLOSE) is to
    preserve the clone, its data, dependencies, auths etc., then DROP the CLONE and perform
    the schema changes on the base. The CLONE can then be added back and will adopt the base
    attributes.

    18

  • A few additions/changes have been made to DB2 9 after it became GA (if memory serves me
    well); see the sketch below:

    -) A COLUMN DEFAULT value can be ALTERed.
    -) ADDING a NEW COLUMN as the last column of a table has been legal since DB2 V1, but it
       is now considered an online schema change like altering a column, so you will get
       table versioning, AREO* etc.
    -) Another NICE change introduced was the ability to do online schema changes even if
       DATA CAPTURE CHANGES was active.
    -) A new column attribute was introduced to automatically register when a row was
       changed: ROW CHANGE TIMESTAMP.

    19
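
    A minimal sketch of the two column-level items above; the table and column names are
    invented, and it assumes the table does not already have a row change timestamp column:

        -- Alter a column's default value
        ALTER TABLE ORDERS ALTER COLUMN STATUS SET DEFAULT 'NEW';

        -- Let DB2 register when a row was last changed
        ALTER TABLE ORDERS
          ADD COLUMN LAST_CHANGED TIMESTAMP NOT NULL
              GENERATED ALWAYS FOR EACH ROW ON UPDATE AS ROW CHANGE TIMESTAMP;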

  • Next topic: the DB2 10 changes. In fact, this is exactly how I envisioned the online
    schema changes introduced in DB2 V8 would behave.

    The big theme in DB2 10 is called DEFERRED ALTER, meaning the attributes can be altered
    but are not immediately in effect. Instead the changes are stacked in a new catalog
    table, SYSPENDINGDDL, and implemented at the next REORG or LOAD REPLACE.

    Since most new features in DB2 9 and 10 require the use of UTS, it is now also possible
    to use the PENDING changes to convert the old tablespace types (sketched below):

    -) Segmented can be converted to UTS PBG, using SEGSIZE 32 if the current attribute is
       less than 32.
    -) A simple tablespace can also be converted to PBG, and since SEGSIZE isn't supported
       for simple tablespaces it will be set to 32.
    -) Table-controlled partitioned objects (introduced in DB2 V8) can be converted to UTS
       PBR (Partition By Range).

    20
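
    In the sketch below (tablespace names are invented), altering MAXPARTITIONS queues the
    conversion of a single-table simple or segmented tablespace to PBG, and altering SEGSIZE
    queues the conversion of a table-controlled partitioned tablespace to PBR; nothing
    happens until the pending DDL is materialized by a subsequent REORG.

        -- Queue a conversion to UTS Partition By Growth
        ALTER TABLESPACE DB1.TSSEG MAXPARTITIONS 10;

        -- Queue a conversion of a table-controlled partitioned tablespace to UTS PBR
        ALTER TABLESPACE DB1.TSPART SEGSIZE 32;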

  • Besides the DEFERRED ALTERs in DB2 10 for tablespace conversions, it's also possible to
    stack these changes (see the sketch below):

    -) PAGE-SIZE / BUFFERPOOL size changes for both tablespace and index
    -) DSSIZE
    -) SEGSIZE
    -) MEMBER CLUSTER (previously valid only for non-segmented tablespaces; even though UTS
       in DB2 9 is a combination of SEGMENTED and PARTITIONED, it wasn't possible there)

    Please remember that all the limitations listed earlier still exist.

    21
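
    As a sketch, several of these pending alterations can simply be issued back to back; the
    names, bufferpools and values are invented, and every statement just adds to
    SYSPENDINGDDL instead of taking effect immediately:

        ALTER TABLESPACE DB1.TS1 BUFFERPOOL BP8K0;      -- 4K to 8K page size
        ALTER TABLESPACE DB1.TS1 DSSIZE 8 G;
        ALTER TABLESPACE DB1.TS1 SEGSIZE 64;
        ALTER TABLESPACE DB1.TS1 MEMBER CLUSTER YES;
        ALTER INDEX DBA1.IX1 BUFFERPOOL BP8K0;          -- index page size change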

  • I mentioned that the new online PENDING schema changes in DB2 10 are stacked in
    SYSPENDINGDDL under DSNDB06. It is NOT possible to remove just one change for an object
    when multiple changes are stacked; it is necessary to DROP all PENDING changes and then
    add back those still needed.

    These PENDING schema changes also introduce a new tablespace/index status, AREO, which
    isn't disruptive, unlike the AREO* introduced in DB2 V8.

    A few other very neat features in DB2 10 (two of them sketched below):

    -) A part of a LOB can be included in the base table (hence the name INLINE LOB). This
       column length can be modified.
    -) You can now potentially eliminate indexes, since you can append a column to an index
       and still maintain the original number of columns making up the uniqueness.
    -) TEMPORAL tables seem to be one of the bigger items and are praised by many. DB2
       automatically tracks changes to a table, so the application can simply request the row
       image AS OF a specific date. There are some performance issues to consider when using
       the new SQL SELECT capabilities, but that belongs in another presentation.

    22
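
    Two of the items above sketched in SQL; all names are invented, and the temporal query
    assumes the table was already defined with system-period data versioning:

        -- Unique on ACCT_ID only, but C_NAME is carried along in the index (INCLUDE)
        CREATE UNIQUE INDEX IX_ACCT ON ACCOUNTS (ACCT_ID) INCLUDE (C_NAME);

        -- Ask DB2 for the row image as of a given point in time
        SELECT *
          FROM ACCOUNTS FOR SYSTEM_TIME AS OF TIMESTAMP('2011-12-31-23.59.59')
         WHERE ACCT_ID = 42;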

  • Besides the TABLESPACE TYPE changes mentioned earlier for DB2 10, a universal tablespace
    partitioned by growth can be altered to use HASH ACCESS, except when MEMBER CLUSTER is
    specified, APPEND exists on the underlying table, or the columns selected for the HASH
    ACCESS allow NULLs (see the sketch below).

    The column change enhancements introduced in DB2 V8 can NOT be executed for a table
    defined as TEMPORAL (meaning it has a history table associated).

    Some additional restrictions were introduced, so ROTATE (introduced in DB2 V8) can NOT be
    done for tables having XML columns, Partition By Growth tablespaces, MQTs or tables
    referenced by an MQT, and finally tables having history tables associated (TEMPORAL
    TABLES).

    23
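
    A sketch of requesting hash access on an existing table in a PBG tablespace; the table,
    column and hash space are invented, the syntax is as I recall it for DB2 10 NFM, and the
    hash organization is only materialized by a subsequent REORG:

        ALTER TABLE ACCOUNTS
          ADD ORGANIZE BY HASH UNIQUE (ACCT_ID) HASH SPACE 64 M;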

  • So let's have a look at the new ALTER capabilities in DB2 10 and how the changes are
    stacked in SYSPENDINGDDL.

    This screen shot illustrates that DB2 9 doesn't allow a tablespace's bufferpool to be
    changed to a different size; this is one of the changes DB2 10 makes possible.

    As you can see (and we already talked about many of the new features requiring UTS), the
    same is the case for changing the bufferpool size for an index (rebuild pending unless
    UTS is in place).

    24

  • One interesting issue discovered is that the tablespace being altered has to be in a
    complete state, and a table has to reside in the tablespace, otherwise the ALTER will
    fail. Once the tablespace is considered COMPLETE, the alter will succeed.

    25

  • The previous slide illustrated some of the new messages associated with the new alter
    capabilities. This is a snippet of these new messages, and as I mentioned in my previous
    CLONE presentation, the new messages are much more descriptive than in the past, and
    THANK GOD for that, given the number of possible actions.

    26

  • The new DB2 10 alterations (PENDING alterations) are just as easy to ALTER as the
    previous online schema changes. However, DB2 doesn't really check whether the change is
    already STACKED; the alterations are simply stacked in SYSPENDINGDDL and implemented upon
    the next REORG / LOAD REPLACE.

    The later screen shot (a browse of SYSPENDINGDDL, and see the query sketched below)
    illustrates how all the changes are stacked, so you can see what will happen when the
    object is reorganized.

    27
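
    If you want to check yourself what is queued for an object, a simple catalog query does
    the trick; the database and tablespace names are invented, and the columns worth looking
    at depend on your catalog level:

        SELECT *
          FROM SYSIBM.SYSPENDINGDDL
         WHERE DBNAME = 'DB1'
           AND TSNAME = 'TS1';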

  • A surprise to me was that a NORMAL SHRLEVEL REFERENCE REORG isn't valid to implement the
    STACKED DDL changes. The new DB2 10 changes can only be implemented by running a REORG
    SHRLEVEL CHANGE (sketched below).

    28
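
    The materializing REORG is then simply (object name invented, plus whatever SHRLEVEL
    CHANGE normally requires at your site, such as a mapping table):

        REORG TABLESPACE DB1.TS1 SHRLEVEL CHANGE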

  • Once the REORG SHRLEVEL CHANGE has been executed, the pending changes are really
    implemented, as this panel illustrates: MAXPARTITIONS is now set to TWO and the
    bufferpool has been changed to 8K.

    29

  • If you have studied my SCHEMA ALTERATION presentations over the past 5 years (table
    controlled partitioning, clone tables, not logged etc.), you will have noticed I keep
    talking about all the schema changes going into SYSCOPY. The DB2 10 schema changes are NO
    different; SYSCOPY keeps on being the SHERLOCK HOLMES book detailing what has happened
    over time.

    You can see EVERY change implemented, and a LOT of new attributes in STYPE and TTYPE in
    SYSCOPY . . .

    30

  • As mentioned on the previous slide, SYSCOPY holds a wealth of information about what has
    happened to an object over time. This is a copy of the DB2 10 SYSIBM.SYSCOPY details for
    ICTYPE = 'A' (meaning an ALTER has happened, whether ONLINE or PENDING); a query is
    sketched below.

    You can see that all the new DB2 10 PENDING changes are included as STYPE: B, M, S, I, F.

    31
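
    A sketch of pulling that history yourself (database and tablespace names are invented):

        -- All recorded ALTER events for one object, newest first
        SELECT DBNAME, TSNAME, ICTYPE, STYPE, TTYPE, TIMESTAMP
          FROM SYSIBM.SYSCOPY
         WHERE DBNAME = 'DB1'
           AND TSNAME = 'TS1'
           AND ICTYPE = 'A'
         ORDER BY TIMESTAMP DESC;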

  • The SWEET thing is that you can stack the multiple changes made available in DB2 10, and
    you can even MIX in the old schema changes, as long as you do the OLD immediate changes
    ahead of the new DB2 10 alterations.

    32

  • As mentioned on the previous slide, in case you need to do schema changes made available
    in DB2 V8 and DB2 9, make these PRIOR to the new DB2 10 changes, otherwise you will be
    told you're stuck.

    BUT if you end up in this situation, please remember you can DROP the PENDING changes, do
    your good old online schema changes, and put back the new DB2 10 changes.

    33

  • DB2 10 introduces a new STATUS. DB2 V8 and its online schema changes introduced AREO*
    (advisory reorg pending), and the PENDING changes have yet another new status, AREO
    (advisory reorg), but this one is NON-INTRUSIVE and does NOT carry the performance
    penalties of AREO*.

    What is INTERESTING is that both the AREO* and the new DB2 10 AREO status can be reset
    using the REPAIR utility!

    Remember: for the AREO* introduced in DB2 V8, the performance penalty will NOT go away!

    The sweet thing about the AREO status is that the object is still 100% updatable.

    34

  • Earlier we looked at how easy it is in DB2 10 to modify some of the previously
    un-alterable attributes. It is just as easy to REMOVE all the pending changes (see the
    statement below); note ALL, since it's not possible to pick one stacked change and remove
    it.

    Another interesting change is that values can also be lowered. Unlike the DB2 V8 schema
    changes, where a column length can be increased but not decreased, the DB2 10 PENDING
    changes do allow values to be decreased.

    35
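
    The statement itself is a one-liner (object name invented); remember it removes EVERY
    stacked change for the object:

        ALTER TABLESPACE DB1.TS1 DROP PENDING CHANGES;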

  • So far we have covered a lot of schema changes, but we need to realize there is a huge
    focus on SECURITY, and DB2 has done a lot to deal with security and compliance /
    regulations etc.

    Auditors, as well as regulations across the world, are constantly changing the picture,
    so we need to look into the options provided to please the auditors (or rather, we are
    FORCED to) and how we can still do our job without jumping through too many hoops.

    Having access to REAL data is becoming a real issue, and scrambling / restrictions /
    masking are hot topics.

    The SYSADM authorization was widely used in the early days of DB2; this was convenient
    for the DBAs to get the job done.

    36

  • In particular, some features, like creating VIEWS with a different ID, needed SYSADM,
    and the same went for GRANT when security was implemented as part of the database engine.

    Then secondary auth-ids became more widely used in order to limit the GRANTs, but this
    was still a security exposure.

    EXPLAIN also used to require the underlying authorizations, making explain a difficult
    task at some sites.

    Another topic starting to get hot is the need / requirement to TRACK who implemented
    which schema changes and what the schema change was.

    Some of these challenges have been semi-addressed over the past DB2 releases, but DB2 10
    is probably the biggest leap, so let's have a look at the security evolution.

    36

  • DB2 9 introduced ROLES and TRUSTED CONTEXT.

    They make it possible to give certain users the authorization to perform certain tasks
    which would otherwise require SYSADM.

    Furthermore, these authorizations can be tied to specific IP addresses to make sure only
    authorized users perform DDL changes.

    For a PURE MAINFRAME environment, the TRUSTED CONTEXT (or, as I would like to call it,
    trusted connection) makes it possible to give a user specific DDL auths from specific
    BATCH JOB NAMES.

    Even SPUFI allows the option to use ROLES and TRUSTED CONTEXT by having AS USER appended
    to the SQL. In order to do the connect AS USER, the auth-id must have a ROLE defined.

    37

  • Many discussions have taken place on the DB2-IDUG listserver, and it is pretty clear
    that this topic is not easy to digest and fully grasp.

    My personal opinion is that this security concept is better suited for distributed access
    to DB2 on z/OS, meaning controlling who can create / drop / alter objects from outside
    the box.

    I have seen a few customers who are working on using TRUSTED CONTEXT for their DBAs in
    order to eliminate the issues of not having SYSADM access. The problem is (as discussed
    earlier) that DBADM has limited authorizations to handle VIEWS and GRANTs where the
    qualifier/creator isn't their ID.

    This trusted context, CA_DBA, will allow user STEEN to use authorizations granted to
    RASST02 for specific job names (which can be specified generically using wildcarding); a
    sketch follows below.

    External security is supported as well. As already mentioned, this concept is not easy to
    absorb, so my opinion is that DB2 10 will be a better path to control SYSADM and who can
    execute what in DB2, without having SYSADM as we have known it for three decades.

    38
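
    For reference, a definition along the lines of the CA_DBA context discussed above might
    look roughly like this; the role name, the direction of the system auth-id and the
    jobname mask are my assumptions, not the exact DDL from the slide:

        -- Role that will hold the DDL privileges (granted separately)
        CREATE ROLE CA_DBA_ROLE;

        -- Trust batch jobs named STEEN* running under system auth-id STEEN; the connection
        -- picks up CA_DBA_ROLE, and RASST02 can be switched to via AS USER
        CREATE TRUSTED CONTEXT CA_DBA
          BASED UPON CONNECTION USING SYSTEM AUTHID STEEN
          ATTRIBUTES (JOBNAME 'STEEN*')
          DEFAULT ROLE CA_DBA_ROLE
          ENABLE
          WITH USE FOR RASST02;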

  • At first sight, when I noticed the security enhancements in DB2 10, I got pretty
    excited, since now it will be possible for the DBAs to do their job and still please the
    auditors at the same time.

    So, to stick with "the grass is getting greener", we can now be in a position to avoid
    the auditors chasing us poor DBAs, especially since the ROLES concept is a tough one, as
    we will find out during the next couple of slides.

    So let's have a closer look at how we can do schema changes without the powerful SYSADM
    and without the VIEW CHALLENGES related to DBADM.

    I really do recommend studying the few pages in chapters 4 and 5 of the DB2
    Administration Guide to get a bigger and fuller picture of the new security enhancements.

    39

  • The very first parameter you need to understand is a new ZPARM: SEPARATE_SECURITY.

    It is important to understand that IF this parameter is enabled, you might no longer be
    able to do what you have been used to, until your security folks grant you the
    appropriate authorizations.

    The SYSADM has lost the power and must leave the throne; all GRANTs related to security
    objects can no longer be handled by the SYSADM.

    The SYSCTRL will also lose some of its power.

    Bottom line: before you take the path of enabling this new ZPARM, make sure the security
    folks have granted you access to what you need to accomplish, otherwise . . . you will be
    very frustrated.

    Let's look at the NEW DB2 10 security roles.

    40

  • The SQLADM privilege is more for the people looking at performance and SQL tuning: you
    can now isolate EXPLAIN without access to the underlying data, execute RUNSTATS in order
    to see the EXPLAIN changes, and also MODIFY the historical stats in
    SYSIBM.xxxxxSTATS_HIST.

    The EXPLAIN privilege can also do the EXPLAIN but doesn't have the power to execute
    RUNSTATS. The good thing is that the PREPARE/DESCRIBE functions can be executed, but not
    the statement itself, meaning no access to data (grants are sketched below).

    This should definitely help to keep the auditors off our backs and still allow
    performance analysts / DBAs to do their due diligence and tune SQL statements.

    41
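
    Handing these privileges out is a simple GRANT; the auth-ids are invented, and the exact
    syntax should be verified against the SQL Reference at your maintenance level:

        GRANT SQLADM TO TUNER01;
        GRANT EXPLAIN TO APPDEV01;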

  • The SECADM privilege will be one of the NEW KINGs and hold on to the throne. This person
    will be the only one who can GRANT everything related to special security objects like
    ROLES, data masking, ROW permissions etc., but has NO ACCESS to data or ANYTHING else.

    The DATAACCESS privilege can be granted to those profiles needing to browse/edit data,
    plans, packages, stored procedures and user-defined functions. The GRANT statement is
    very straightforward: GRANT DATAACCESS to user-id. It cannot be granted to PUBLIC or with
    the WITH GRANT OPTION.

    The ACCESSCTRL privilege is nearly identical to SECADM, except that SECURITY OBJECTS
    cannot be handled. This person can grant to roles and auth-ids everything that isn't
    covered by the new SYSTEM DBADM, DATAACCESS and ACCESSCTRL authorities.

    42

  • Here is the new AUTH which will probably be widely used and take over from SYSADM where
    SYSADM still lives.

    SYSTEM DBADM can do all the schema changes (except those reserved for SECADM), so all the
    old DBADM functions plus those components discussed earlier related to VIEWS which used
    to require SYSADM (a sample GRANT is sketched below).

    A number of restrictions apply to the SYSTEM DBADM permission; some utilities can NOT be
    executed and will have to be granted specifically (like with the old DBADM).

    43
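
    A sketch of granting it; the auth-id is invented, and the WITH/WITHOUT options and the
    ON SYSTEM clause are how I read the DB2 10 syntax, so verify against the SQL Reference
    for your level:

        -- System-level DBADM without automatic access to data and without GRANT/REVOKE power
        GRANT DBADM WITHOUT DATAACCESS WITHOUT ACCESSCTRL ON SYSTEM TO STEEN;

        -- Add data access separately, only where it is really needed
        GRANT DATAACCESS ON SYSTEM TO STEEN;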

  • So there are, for sure, a lot of new security parameters to consider in order to get out
    of the SYSADM challenges.

    The question is: are these features really better for the DBAs? As always, it depends. It
    really depends on how much freedom you have had in the past, since there are a lot of
    limitations to these new capabilities, so it is necessary to plan accordingly and find
    out who needs which permissions. It was a lot easier to simply grant SYSADM, but on the
    GREENER side, the auditors should be easier to please.

    Considering the new PENDING changes and the number of schema changes available, it is
    becoming harder to understand the impact and what is needed, since certain schema changes
    need to be done in a specific sequence, and some native DB2 ALTER statements cannot be
    done if certain dependent object types exist (like clones and MQTs), so being able to
    predict the schema change impact without thorough investigation is not getting any
    easier.

    44

  • Bottom line: we are getting closer to the elimination of UNLOAD-DROP-CREATE-LOAD, which
    is the ultimate goal.

    45

  • This is my wish list. When DB2 V8 started to talk about online schema changes, the way
    DB2 10 uses PENDING DDL is really how I envisioned DB2 V8, so I wouldn't mind the online
    schema changes implemented in DB2 V8 and DB2 9 being handled like the new capabilities in
    DB2 10.

    I'm also hoping to be able to REMOVE a column from both a table and an index.

    Another topic of interest is DROP PARTITION. Personally I don't like ROTATE, so being
    able to ADD a new partition at the end as well as in the middle would solve some of my
    issues.

    I advocate ADD PART and emptying low logical partitions ahead of ROTATE for ease of use,
    due to physical/logical partitions getting out of sync. One issue to consider is the
    image copy and recover scenarios: now you have dead partitions, but these still need to
    be handled.

    46

  • One question is whether, considering all these capabilities to avoid the drop, there
    really is a need for a tool to migrate / synchronize database structures.

    My answer is DEFINITELY. Looking at all the limitations and restrictions as well as the
    statuses, it is not a simple task to manually perform all the needed functions.

    47
