
  • Introduction

    Intended Audience

    This manual is intended for staff performing systems administration tasks for the HIC's SAP R/3 system (FINNET). The purpose of this manual is not to teach the reader about R/3 but to assist members of the team to perform their everyday duties.

    Scope

    This manual consists of 'how to' instructions (for example, how to ftp a file) as well as general descriptions of how R/3 has been deployed at the HIC (for example, the system landscape). It also contains some historical information (for example, steps used for the UNIX to OS/390 platform conversion) and some general system monitoring instructions (for example, the daily check list).

    This manual does not address (in detail) two important aspects of the FINNET system because these areas are not controlled by the systems admin team. These are :-

    . Security administration
    Tasks include the creation of activity groups, profiles and authorisations. Security administration is handled by the Financial Systems & Policy Development area.

    . User administration (in production)
    Tasks include the creation and maintenance of production users including the assigning of profiles. User administration is handled by the Financial Accounting Scheduling area (contact the FINNET help desk).

    Comments/Queries?

    Any comments or queries regarding this manual may be addressed to :

    . Brenda Stines (x 6124 6572)

    . Tony Bombardiere (x 6124 6539)


  • FINNET System Landscape (current as at August, 2001)

    The FINNET environment consists of 3 systems :-
    SDA - development
    SQA - testing/user acceptance/training
    SPA - production

    System Name / Client No. / Client Name / Description

    SDA 310 - Configuration
    All changes are allowed in this client. All configuration work is carried out in the client. CTS is activated in this client, and all changes require a change request number. Changes are transported to the Integration Testing client SQA 400.

    SDA 320 - Sandpit
    Client dependant changes without automatic recording. All changes are allowed in this client without automatic recording. It will be used for testing / trying new configuration etc. before entering the configuration into the Configuration Client SDA 310.

    SQA 400 - Integration/Full Cycle Testing
    This client is used by the HIC test team for testing the final configuration before it is transported to the Production client. Interfaces and conversions are tested within this client. No changes are allowed in this client. Any changes must be completed in the development client (SDA 310) and transported to this client.

    SQA 500 - User Acceptance/Volume Testing
    This client is used by the Finance Development area for testing the final configuration before it is transported to the Production client. No changes are allowed in this client. Any changes must be completed in the development client (SDA 310) and transported to this client.

    SQA 600 - Training Master
    This client is used as the base for all training delivery clients. It will contain the completed configurations as well as some basic transactional data. At the end of each training session the training delivery clients will be refreshed from this client.

    SQA 610, 620, ... - Training Delivery
    These clients are used when delivering training courses. These clients are client copies of the training master and are refreshed after each course.

    SPA 200 - HIC Production
    This client is used to run the Company Business. No changes are allowed. All changes are to be done in the development client (SDA 310) and transported to the testing clients before being approved for transport to the Production system.

    Transport Routes

    Route 1 : Changes transported to the sandpit client upon request. (Sandpit client is refreshed from the configuration client periodically)
              Source/target : 310 -> 320

    Route 2 : After Unit Testing, changes are transported to SQA for full cycle testing by the HIC test team (in client 400), user acceptance testing (in client 500) and for training purposes (in client 600).
              Source/target : 310 -> 400, 500, 600

    Route 3 : Once Integration testing is complete, transport changes to SPA after sign-off from the Project Manager.
              Source/target : 310 -> 200


  • FINNET Daily Checklist

    The following checklist needs to be performed each day as early as possible and then as required throughout the day.

    Execute the following transactions by entering /n<transaction code> in the command field from any R/3 menu or by following the specified menu path starting at the main R/3 menu.

    1) sm21 (system log) review everything flagged red first, then the ones flagged yellow. Usually there are multiple messages generated for the one error (looking at the time column will help).

    A general point to remember : often to determine the problem we need to track down the user to see what they were doing when the error occurred. Hopefully the user name can be determined using the P00 number via txn su01 (click on the letter icon). If the name hasn't been captured here (very naughty!) try 'tso rw' on the mainframe and then update the info in su01

    refer to the section System Log Errors for more details

    2) st22 (dumps) Look up in OSS - ask the user what they did to cause the dump

    3) sm12 (lock entries) Check to ensure no locks have been hanging around for too long - if a lock has been held for over 2 hours then check with the user to see what they are doing. Always check before deleting a lock to ensure a user is not part way through an update or batch input session - you really should never have to do this!!!

    4) sm13 (update records) check that the updating system is running (message Update is active). If it is not it may have been deactivated because of a database error - check with DBA. Reactivate updating after correcting the underlying problem that caused updating to be stopped.


    check for err update records or a backlog of unprocessed updates. Display the list of All update records. Be sure to set the date back to the last check (should be previous working day). Set the client field to * to see records from all clients. In the list:

    an empty or nearly empty list shows all is okay
    records with status err indicate that the corresponding update(s) failed. Analyze the problem and ensure that the data in the aborted update is entered into the system (refer to the online help, Analyzing an Update Error and Re-Processing a Failed Update)
    a long list of updates with the status auto or init may indicate a performance problem with updating or the R/3 system in general. Usually updates should be cleared from the update manager almost instantly. Use the CCMS to check out suspected performance problems (refer to the CCMS Guide in the online help)

    5) sm37 (batch jobs) Look at all users, previous working day. Look for cancelled (ie abended) jobs

    6) sm35 (batch input) Housekeeping job RSBDCREO (SAP_REORG_BATCHINPUT) only cleans up successful sessions - ones in error must be deleted manually

    7) db02 (check DB2 Tablespaces) Under Tablespaces choose Detailed Analysis. Click on the %-Used column and then on the Sort button. Let the DB2 area know when the tablespaces are getting too full (for SPA this is around 80%). You can double click anywhere on a detail line and then use the History option to see how fast the tablespace is filling up.

    8) st02 (tune summary) Look for any problems highlighted in red. Double click on the highlighted field and then on the Current parameters button. This will show you which profile parameter you may need to change (search notes as well). Using the history icon (shift + F6) will show you if this is a one off type problem or if it has been occurring over the last couple of days.

    9) UNIX command df -k


    (from the desktop: start, programs, accessories, telnet OR start, run, telnet. Then: connect, remote system. Enter hostname 10.136.5.109 for SDA/SQA and 128.1.2.109 for SPA)

    df -k | more shows the sapdata areas (/db2/<SID>/sapdata* where * is 1 to 6) and their percentage full. These should be kept below 85% full. The log and log archive files (/db2/<SID>/log_dir and /db2/<SID>/log_archive) also need to be monitored. Alert the DBA if any problems.
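    The check above can also be scripted. The following is a minimal sketch only (not one of the delivered scripts), assuming the standard AIX df -k layout where %Used is the fourth column and the mount point is the last :-

    #!/usr/bin/ksh
    # hypothetical helper: flag any sapdata / log_dir / log_archive filesystem over 85% full
    THRESHOLD=85
    df -k | egrep "sapdata|log_dir|log_archive" | while read fs blocks free pct iused ipct mount
    do
        pctnum=${pct%\%}                  # strip the trailing % sign
        if [ "$pctnum" -gt $THRESHOLD ]
        then
            echo "WARNING : $mount is $pct full - alert the DBA"
        fi
    done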

    10) Keep your eye on the UNIX print queues throughout the day. Log onto the relevant node and do an lpstat. Check that none of the queues have status down (they should have status ready). Check that you can ping each printer (just because the status is ready does not mean that there is not a comms problem). Run script pingprinters.sh (in /home/adm). This script does the following :-

    #!/usr/bin/ksh
    PROCID=$$
    # for every printer known to lpstat, ping it and show the ping output if all packets were lost
    lpstat | grep "@" | awk '{print $1}' | while read prnam
    do
        echo "$prnam :-"
        ping -c3 $prnam > /tmp/prout$PROCID 2>&1
        if [ `grep -c "100% packet loss" /tmp/prout$PROCID` -gt 0 ]
        then
            cat /tmp/prout$PROCID
        fi
        rm /tmp/prout$PROCID
    done

    The print queues sometimes go into a down status for no apparent reason. Because of this we have a script that enables the printers (/usr/sap/trans/admin/enableprinters.sh). This script is scheduled by cron to run every 15mins between 6am and 10pm and does the following :-

    #!/usr/bin/ksh
    PROCID=$$
    lpstat | grep "@" | awk '{print $1}' | while read prnam
    do
        enable $prnam
    done
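    The crontab entry that drives this script is not reproduced here; an entry giving the "every 15 mins between 6am and 10pm" schedule described above would look roughly like the following (illustrative only - check the actual crontab on the node) :-

    # run enableprinters.sh every 15 minutes from 06:00 to 21:45
    0,15,30,45 6-21 * * * /usr/sap/trans/admin/enableprinters.sh > /dev/null 2>&1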


  • The following can be checked on an adhoc or weekly basis :-

    a) Carry out a consistency check of the TemSe. If there are any inconsistencies found, use the Delete All button to delete them from the database. Refer to OSS note 48400 for more details.

    b) Carry out a consistency check of the spool database. If there are any inconsistencies found, use the Delete All button to delete them from the database. Refer to OSS note 48400 for more details.

    The following can be checked on a monthly/quarterly basis :-

    Cleanup old files in /usr/sap/trans subdirectories data, cofiles, actlog and log. There is a script called cleanup in /usr/sap/trans/bin which will delete files which have not been modified in the last 90 days. The script does the following :-

    #!/bin/ksh

    # This script removes files from the specified subdirectory of
    # /usr/sap/trans. The script checks to ensure that only the subdirs
    # actlog, data, cofiles or log are specified as deleting files from
    # other subdirectories is risky. The script will delete files which
    # have not been modified for 90 days.

    # The script can be run in 2 modes - display or delete. Display mode
    # will detect the files which have not been modified for 90 days and
    # write the output to $LOGFILE - no files are actually deleted using
    # display mode. Delete mode does the same as display mode but the
    # detected files are actually deleted. The default mode is display.

    # Example calls :-
    # "cleanup actlog display"  Will find files in /usr/sap/trans/actlog
    #                           which have not been modified for 90 days
    #                           and write the output to $LOGFILE. No
    #                           deletion of files will be done.
    # "cleanup actlog"          Same as above (default mode is display)
    # "cleanup actlog delete"   Will find AND DELETE files in /usr/sap/
    #                           trans/actlog which have not been modified
    #                           for 90 days. Output is written to $LOGFILE.

    DTTIME=`date "+%d/%m/%y %T"`

    # Check that a parm 1 (subdirectory) was specified
    # ------------------------------------------------
    if [ x$1 = x ]
    then
      echo
      echo 'A subdirectory (actlog, data, cofiles or log) must be specified'
      exit
    fi

    # Check that parm 1 is one of actlog, data, cofiles or log and that
    # parm 2 (the mode) is display or delete - the default mode is display
    case $1 in
      actlog|data|cofiles|log) ;;
      *) echo 'Only subdirs actlog, data, cofiles or log may be cleaned up'
         exit ;;
    esac
    MODE=${2:-display}
    if [ $MODE != display -a $MODE != delete ]
    then
      echo 'Mode must be display or delete'
      exit
    fi

    # At this stage we have a valid subdir and a valid mode!!!
    # --------------------------------------------------------
    PATHNAME=/usr/sap/trans/$1
    LOGFILE=$PATHNAME/cleanup.log
    echo "$DTTIME" > $LOGFILE
    echo >> $LOGFILE

    if [ $MODE = display ]
    then
      echo 'Running in DISPLAY mode. Files have NOT been deleted.' >> $LOGFILE
      echo >> $LOGFILE
      find $PATHNAME -name \* -mtime +90 -print >> $LOGFILE
    else
      echo 'Running in DELETE mode. Files have been DELETED.' >> $LOGFILE
      echo >> $LOGFILE
      find $PATHNAME -name \* -mtime +90 -print -exec rm -f {} \; >> $LOGFILE
    fi

    echo
    echo "Output has been written to $LOGFILE"

  • month. Get stats for the various task types (ie dialog, background, update, etc). Forward the average dialog response time to the midrange team for SLA reporting purposes.

    IP Addresses for File Transfers

    The following IP addresses are relevant when doing an ftp :-

    IP addresses of the UNIX nodes :-
    10.136.5.109   hicspd09   UNIX node 9 Deakin (SDA and SQA)
    128.1.2.109    hicspn09   UNIX node 9 Tugg (SPA)

    10.136.4.130   flemming   Deakin (SDP)
    10.140.21.47   darwin     Tugg (SPP)

    IP addresses of the mainframes :-
    10.136.44.1   HIC 4
    10.136.45.1   HIC 5
    10.136.46.1   HIC 6

    Because DASD on the mainframe is shared between HIC 4, HIC 5 and HIC 6, you can use any one of the above addresses. This means that if HIC 5 is unavailable, you can use the HIC 4 or HIC 6 address.


  • UNIX to Mainframe Transfers

    Files from UNIX can be transferred to the mainframe either by :-
    1) doing a manual ftp from the UNIX
    2) doing a manual ftp from the mainframe

    UNIX TO MAINFRAME FILE TRANSFER VIA MANUAL FTP FROM UNIX

    To transfer the UNIX file test in directory /usr/sap/trans/tmp of SDA to the mainframe file P00403.TEST, do the following :-

    1) log on to the appropriate UNIX node, for example, log on to 10.136.5.109 (SDA)

    2) cd to the directory where the file to be transferred is, for example, cd /usr/sap/trans/tmp

    3) type in ftp <mainframe ip address>, for example, ftp 10.136.45.1 (HIC5)

    4) enter your mainframe userid and password when prompted

    5) type in bin if doing a binary transfer (otherwise use asc for ascii)

    6) type in put <unix filename> '<mainframe filename>', for example, put test 'P00403.TEST'. NOTE : the mainframe filename must be in uppercase and in single quotes. The UNIX filename is case sensitive!

    7) the file will then be transferred, some diagnostics including the number of bytes transferred will be displayed

    8) type in bye to exit ftp

    9) your file should now be on the mainframe
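    Putting steps 1) to 9) together, a session for this example would look something like the following (the prompts and diagnostic output shown are indicative only) :-

    $ cd /usr/sap/trans/tmp
    $ ftp 10.136.45.1
    Name: <your mainframe userid>
    Password:
    ftp> bin
    ftp> put test 'P00403.TEST'
    ftp> bye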


    UNIX TO MAINFRAME FILE TRANSFER VIA MANUAL FTP FROM THE MAINFRAME

    To get the UNIX file test in directory /usr/sap/trans/tmp of SDA to the mainframe file P00403.TEST, do the following :-

    1) log on to TSO

    2) at the command line type in tso ftp

    3) Connect to? will be displayed - press enter and then enter the UNIX IP address, for example, 10.136.5.109 (SDA)

    4) enter a UNIX userid and password when prompted

    5) type in bin if doing a binary transfer (otherwise specify asc for ascii)

    6) type in cd <unix directory>, for example, cd /usr/sap/trans/tmp

    7) type in get <unix filename> '<mainframe filename>', for example, get test 'P00403.TEST'. NOTE : the mainframe filename must be in uppercase and in single quotes. The UNIX filename is case sensitive!

    8) the file will then be transferred - type in quit to exit

    9) your file should now be on the mainframe


  • Mainframe to UNIX File Transfers

    Files from the mainframe can be transferred to the UNIX either by :-
    1) doing a manual ftp from the UNIX
    2) doing a manual ftp from the mainframe
    3) via a mainframe batch job (JCL)

    MAINFRAME TO UNIX FILE TRANSFER VIA MANUAL FTP FROM UNIX

    To transfer the mainframe file P00403.TEST to the UNIX file test in directory /usr/sap/trans/tmp in SDA, do the following :-

    1) log on to the appropriate (target) UNIX node, for example, log on to 10.136.5.109 (SDA)

    2) cd to the directory where the file is to be stored, for example, cd /usr/sap/trans/tmp

    3) type in ftp <mainframe ip address>, for example, ftp 10.136.45.1 (HIC5)

    4) enter your mainframe userid and password when prompted

    5) type in bin if doing a binary transfer (otherwise default is ascii)

    6) type in get '<mainframe filename>' <unix filename>, for example, get 'P00403.TEST' test. NOTE : the mainframe filename must be in uppercase and in single quotes. The UNIX filename is case sensitive!

    7) the file will then be transferred, some diagnostics including the number of bytes transferred will be displayed

    8) type in bye to exit ftp

    9) your file should now be in the specified UNIX directory


    MAINFRAME TO UNIX FILE TRANSFER VIA MANUAL FTP FROM MAINFRAME

    To transfer the mainframe file P00403.TEST to the UNIX file test in directory /usr/sap/trans/tmp in SDA, do the following :-

    1) log on to TSO

    2) at the command line type in tso ftp

    3) Connect to? will be displayed - press enter and then enter the UNIX IP address, for example, 10.136.5.109 (SDA)

    4) enter a UNIX userid and password when prompted

    5) type in bin if doing a binary transfer (otherwise specify asc for ascii)

    6) type in cd <unix directory>, for example, cd /usr/sap/trans/tmp

    7) type in put '<mainframe filename>' <unix filename>, for example, put 'P00403.TEST' test. NOTE : the mainframe filename must be in uppercase and in single quotes. The UNIX filename is case sensitive!

    8) the file will then be transferred - type in quit to exit

    9) your file should now be on the UNIX


  • MAINFRAME TO UNIX FILE TRANSFER VIA JCL

    To transfer the mainframe file P00403.TEST to the UNIX file test in directory /usr/sap/trans/tmp in SDA, create some JCL which includes the following step :-

    //*************************************************
    //* nb. this jcl must remain 'caps off' because
    //* of the connection to a unix box.
    //*************************************************
    //FTP      EXEC PGM=FTP,REGION=4096K,PARM='10.136.5.109 (exit'
    //SYSPRINT DD SYSOUT=*
    //OUTPUT   DD SYSOUT=*
    //INPUT    DD *
    P00403
    ascii
    cd /usr/sap/trans/tmp
    put 'P00403.TEST' test
    quit
    /*
    //*

    Notes regarding the above JCL :-

    . the PARM= parameter specifies the target UNIX node, in this example we are transferring the file to SDA (10.136.5.109)

    . the P00403 specifies the UNIX logon id which is then followed by the associated password (note: the password must be in lowercase without quotes)

    . the ascii specifies that the file to be transferred is an ascii file. For binary files you must specify bin here

    . the cd /usr/sap/trans/tmp specifies the target UNIX directory

    . the put 'P00403.TEST' test specifies the mainframe file to be transferred (P00403.TEST) and the UNIX filename to which it should be transferred (test). Note: the mainframe filename should be in uppercase and in single quotes. The UNIX filename is case sensitive!


  • Applying Support Packages (R/3)

    Refer to the following notes :-

    83458   OCS Info: Downloading patches from SAPNet
    13719   Preliminary transports to customers
    173814  OCS: Known problems with Support Packages Rel. 4.6

    Step 1 : Download support packages from sapnet to the desktop
    (If you are applying the support packages from CD then you don't need to do this step)

    use Netscape to go to service.sap.com/ocs (you will be asked for your mainframe userid/password and your OSS/SAPNET userid/password)

    click on Download Support Packages (LHS of screen)

    you will now see a list of support package types. The ones we use are :-
    Application Interface support packages (SAP_ABA, SAPKA*)
    Basis support packages (SAP_BAS, SAPKB*)
    R/3 support packages (SAP_APPL, SAPKH*)
    SPAM/SAINT updates

    Always apply the latest SPAM update available.

    click on the relevant support package type and then on 4.6B. You will now see a list of support packages on the RHS of the screen.

    left mouse click on the first icon (if you run the mouse over this icon you will see 'Right click to download file' - this DOES NOT WORK!!!!). On the next screen, click on the Download button. In the pop-up window, specify where you want the file downloaded to (usually o: drive, basis r3, 4.6b downloads from sapnet - no need to change the filename). Click the Save button and the download will begin.

    you can do up to 3 downloads at the same time. Just click again on 4.6B (LHS of screen) do NOT use the Back button.

    to ensure that there are no problems with the download, take note of the file size in SAPNET and compare it to the file size downloaded to o: drive (using explorer). The o: drive size should be the SAPNET size + 1. For example, if the file size in SAPNET is 1,402 (KB) then (using explorer) the file size on o: drive should be 1,403.


  • Step 2 : Download the support packages to the target node

    (a) If you are downloading the support packages from CD :-
    NOTE : YOU STILL NEED TO DOWNLOAD THE LATEST SPAM VERSION FROM SAPNET BEFORE APPLYING THE SUPPORT PACKAGES
    open a DOS prompt
    change to the CD directory
        e:
    see what directories are on the CD
        dir
    the directories we are interested in are :-
        ABA_46B
        APPL_46B
        BAS_46B

    (b) If you are downloading the support packages from the desktop (ie you have already done Step 1 above) :-
    open a DOS prompt
    change to the directory you specified as the target in step 1, usually
        o:
        cd Basis R3
        cd 4.6B Downloads

    (a) and (b) Continue...
    do a dir to see what files are in the directory
    don't forget to download the latest SPAM update
    ftp the required files to node 9 Tugg (128.1.2.109). Because this node is here at Tuggeranong, it is much quicker to download to it (rather than the development node which is in Deakin)
    don't forget to specify bin!!!
    because you can't log straight into the <sid>adm userid, you will need to log on as yourself (eg P00403), do the ftp to /usr/sap/trans/tmp/brenda (or any directory with sufficient space) and then move the files to /usr/sap/trans/tmp. You will probably need to do a chmod 777 on the downloaded files as yourself, then su to <sid>adm and move/copy the files to /usr/sap/trans/tmp (see the sketch after this list)
    to check the file sizes after the download, compare the sizes from a dir in DOS and the file sizes in /usr/sap/trans/tmp
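    A sketch of the UNIX side of the move described above, assuming the files were ftp'd (in bin mode) to /usr/sap/trans/tmp/brenda and that SPA (and so the spaadm userid) is the target :-

    cd /usr/sap/trans/tmp/brenda
    chmod 777 *.CAR                       # so the <sid>adm user can read/move the files
    su - spaadm                           # switch to the relevant <sid>adm user
    mv /usr/sap/trans/tmp/brenda/*.CAR /usr/sap/trans/tmp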


    Step 3 : Unpack the downloaded support packages into the EPS inbox
    in /usr/sap/trans/EPS/in, check if there are any files that can be deleted
    cd /usr/sap/trans
    unpack the CAR files (don't forget the SPAM update KD00030.CAR)

    CAR tvf tmp/<filename> will show you a list of the files contained in the CAR file
    CAR xvf tmp/<filename> will unpack the files
    take a note of the unpacked file names (the ATT and PAT files). This can be useful if there are problems and you need to delete certain files from the EPS/in directory
    the unpacked files (ATT and PAT files) will now be in /usr/sap/trans/EPS/in
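    For example, using the SPAM update file already mentioned above (the support package CAR files are handled the same way) :-

    cd /usr/sap/trans
    CAR tvf tmp/KD00030.CAR     # list the ATT and PAT files inside the archive
    CAR xvf tmp/KD00030.CAR     # unpack them - they land in /usr/sap/trans/EPS/in
    ls -ltr EPS/in              # note the names of the newly unpacked files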

    Step 4 : Upload the support packages into the system
    log onto the system in client 000 (userid should have sap_all, sap_new but don't use SAP*)
    txn spam
    use Support Package, Load Packages, From application server
    this will get the files from /usr/sap/trans/EPS/in and make them available to SPAM

    Step 5 : Apply the SPAM update
    txn spam
    use Support Package, Import SPAM update

    Step 6 : Apply the BASIS Support Packages
    always apply the BASIS packages before the APPL and ABA ones
    txn spam
    click on the Display/Define button
    specify which packages you want in the queue
    confirm the queue
    on the main spam screen, use Support Package, Import queue

    Step 7 : Apply the APPL and ABA Support Packages
    txn spam
    click on the Display/Define button
    specify which packages you want in the queue
    confirm the queue
    on the main spam screen, use Support Package, Import queue

    Step 8 : Run sgen
    txn sgen
    use the Regeneration of existing loads option
    use the Only generate objects with invalid loads option


    make sure you do this when there are no other users on the system (it uses a lot of dialog work processes)

    Step 9 : Bounce the system
    do this as a precaution to clear buffers, etc
    looking in st02 may show that some buffers are full (due to running sgen)
    remember in SPA not to do a stopsap - you must do a stopsap r3. This is because of some problems with permissions. If you do a stopsap and then a startsap the database backup will fail (the backup jobs look for processes started by db2 and kill these before starting the backup. If you do a stopsap/startsap under spaadm, the backup job will not kill these SAP processes and the backup will fail because there are still active threads!!)

    Some handy info...

    read the latest version of note 173814 from SAPNET (service.sap.com/notes) before starting

    don't assume that the support packages can be applied to each system in the same amount of time. Often support packages are imported to SDA and SQA quickly but then take much longer to import into SPA. This is probably due to the larger volume of data in SPA - if there are table conversions then there are a lot more rows in SPA than in the other environments

    keep database backups in mind!!!! Remember that offline backups are done so the system will be stopped (as of 23/10/01 this is 21:00 every night for SPA and 18:00 Sundays for SDA and SQA). You don't want to be running the application of support packages only to have the system stopped mid-way through.

    the steps which each hot package goes through when being applied are :-
    generate transport information file (shown on the spam screen as create cofile from datafile when this step is running)
    test import
    import request piece list
    import dictionary objects
    dictionary activation
    import
    import application-defined objects
    method execution (shown on the spam screen as execution of programs after import when this step is running)

    Keep in mind that the first step will be done for all support packages in the defined queue, then the second step will be done for all support packages in the queue, etc, etc. For example, if you have defined your queue to contain 5 packages, then there will be 5 generate transport information file steps (one for each package). Once the fifth one has finished, the test import step will be done for each of the 5 packages, and so on. (SPAM does not do all the steps for the first package and then move on to the subsequent ones.)


  • There are some steps which are queue specific (rather than package specific). The following (queue specific) steps run after the DDIC activation step (and before the main import):-

    distribution of DD objects
    conversion of DD objects (TBATG)
    move nametabs

    you can view the logs for completed steps from node 9 Tugg as well as from within spam. A very useful thing to note is that the file for the CURRENT step cannot be viewed from spam but can be from node 9. While a step is running, the log file is in /usr/sap/trans/tmp. When the step completes, this log file is moved to /usr/sap/trans/log. Viewing the files from node 9 is sometimes a quicker way to check where things are up to. Just look in /usr/sap/trans/tmp and it will show you what is currently being processed.

    NOTE : the files on node 9 are not always in English but you can still make out the general gist of things!
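    In practice the quick check from node 9 described above is just the following, run as any userid that can read /usr/sap/trans :-

    cd /usr/sap/trans/tmp
    ls -ltr              # any SAP*46B* log file still sitting here belongs to the step currently running
    tail -f <logfile>    # optionally follow the running step's log (pick the file shown by ls)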

    the log files are stored on node 9 (/usr/sap/trans/log for the completed steps, /usr/sap/trans/tmp for the current step) using the following naming convention which will help to see how far along things are :-

    SAPxy46Bnn.<SID>

    where :-

    x denotes the step/s. Values for x are :-

    L generate transport information file & import request piece list

    P  test import
    H  import dictionary objects
    A  dictionary activation
    I  import
    D  import application-defined objects
    R  method execution

    y denotes the type of support package. Values for y are :-

    B  for BASIS support packages
    H  for APPL support packages
    A  for ABA support packages

    nn denotes the number of the support package

    <SID> is the system identifier. Values for <SID> are currently :-

    SDA for development
    SQA for testing
    SPA for production


  • For example, file SAPIB46B07.SDA is the log file for...
    BASIS support package number 7 (since y is B and nn is 07)
    step import (since x is I)
    for system SDA (since <SID> is SDA)

    If the file was in /usr/sap/trans/log then this step has finished. If the file was in /usr/sap/trans/tmp then this is the current step.

    The naming convention for the queue specific steps is :-

    x<date>.<SID>

    where :-
    x is the step name. Values for x are :-
    DS  for step distribution of DD objects
    N   for step conversion of DD objects (TBATG)
    P   for step move nametabs
    <date> is the rundate
    <SID> is the system identifier

    to get SOME idea of how long it may take to apply one support package in relation to another, have a look at the sizes of the CAR files (in /usr/sap/trans/tmp). Telnet to node 9 Tugg and do the following :-

    cd /usr/sap/trans/tmp
    rm pkgs.out
    ls -altr *KH* > pkgs.out
    ls -altr *KB* >> pkgs.out
    ls -altr *KA* >> pkgs.out
    vi pkgs.out     (to add a blank line between each type of support package)
    qprt -PSAP2-2 pkgs.out

    it may be an idea to increase rdisp/max_wprun_time to avoid timing out

    if things are running slowly then check that RDDNEWPP has been executed (check for the RDDIMPDP jobs in sm37). This can cause a whole heap of problems!!!!

    the dictionary activation step can run for a very long time. To get a (very) rough idea of how long the DDIC activation step for one package may run in relation to another, look in the SAPA* files (in /usr/sap/trans/tmp or /usr/sap/trans/log). Somewhere around the third page will be the number of objects (look for Anzahl zu analysierender Objekte: if looking via node 9 and for Number of objects to be analyzed: if looking via SPAM).
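    A quick way to pull those object counts out of all the activation logs at once (a convenience sketch only, not one of the manual's scripts) :-

    cd /usr/sap/trans/tmp                        # or /usr/sap/trans/log for completed steps
    grep "zu analysierender Objekte" SAPA*       # logs written in German on node 9
    grep "objects to be analyzed" SAPA*          # any logs written in English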

    Don't worry if you see DDIC Activation steps finish with a return code 8 during the application of a queue. These errors are often fixed by subsequent support packages in the queue - once the DDIC Activation has been done for all packages in the queue, SPAM will rerun the step for any packages which finished with an error. After this, if you still have errors, then worry!!!


  • check file sizes to ensure downloads were successful. The sizes in sapnet and explorer (to view the o: drive) show the size in KB. The sizes on node 17 and in DOS (to view the o: drive) show the size in bytes. For example,

    sapnet size           1,402 (KB)
    explorer (o: drive)   1,403 (KB)   (should be sapnet size + 1)
    DOS (o: drive)        1,436,213 (bytes)
    node 17               1,436,213 (bytes)

    the maximum size we seem able to download from sapnet is 56,808 (KB). If the file is bigger than this, we have to get it off a CD.

    sometimes the downloads from sapnet terminate before transferring the whole file. Try logging off, logging back on and not opening any other applications besides the download (to avoid memory problems).

    after applying the support packages, you may want to run sgen for performance reasons (ie to stop the compiling of new/changed objects). Choose the regeneration of existing loads option and only generate objects with invalid loads.


  • Applying kernel patches

    NB We are using the 4.6D kernel for the R/3 Systems (SDA, SQA and SPA)

    Refer to OSS note 318846 for background regarding installation of the 4.6D kernel in a 4.6B system.

    NB We are using the 6.10 kernel for the EBP Systems (SDP and SPP)

    download DW.CAR/DW.SAR file from sapnet to O:\Basis R3\Downloads. Call the file DW.CAR or DW.SAR

    as of May, 2002 for the 6.10 kernel patches :-
    service.sap.com/swcenter-main
    SAP WEB AS
    SAP WEB AS 6.10
    binary patches
    SAP Kernel 6.10
    AIX
    Database Independent

    ftp to /usr/sap/trans/tmp - hicspn09 (for R/3) or DARWIN (for EBP)
    on hicspn09/DARWIN in /usr/sap/trans/tmp, do CAR tvf DW.CAR or SAPCAR tvf DW.SAR to see what files will be affected by the patch

    move the CAR/SAR file to /usr/sap/trans/tmp/patchbkups (in case a backout is required) (or ftp straight to here if you have the right authorities)

    stopsap
    cdexe (will take you to /usr/sap/<SID>/sys/exe/run)
    do CAR xvf /usr/sap/trans/tmp/patchbkups/DW.CAR or SAPCAR xvf /usr/sap/trans/tmp/patchbkups/DW.SAR to unpack the files

    when uncaring/unsaring the files, you may get an error saying SAPCAR: Could not open <file> for writing (error 28). Text file busy. In this case just move the file in question to <file>.bkup and run the CAR/SAR again

    startsap
    check the patch level in sm51 or by running disp+work | pg
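    Pulling the steps above together, the sequence on the target node looks roughly like this (shown for the R/3 DW.CAR case run as the <sid>adm user; substitute SAPCAR and DW.SAR for the 6.10 kernel) :-

    stopsap
    cdexe                                            # alias that changes to /usr/sap/<SID>/sys/exe/run
    CAR xvf /usr/sap/trans/tmp/patchbkups/DW.CAR     # unpack the patched executables into the kernel directory
    startsap
    disp+work | pg                                   # confirm the new patch level (also visible in sm51)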


  • The Standalone SAP OSS PC (SAPROUTER)

    BACKGROUND

    The SAP OSS PC was first set up in late 1997/early 1998. The purpose of this PC is to access sapserv6. This then allows us to download source code corrections from SAP and allows us to access SAP's OSS system. It is also the mechanism used by SAP to dial into our R/3 systems.

    At the time the OSS PC was set up, there was no HIC firewall - hence the current configuration of routers, airgaps, ISDN lines, etc. This setup is complex with many points of failure. The main problem is that there is very little documentation on how this PC has been set up and so it is hard to get timely support when there is a problem with this PC.

    We have asked IBM/GSA to investigate reconfiguring this PC so that it goes through the standard HIC firewall. As of August, 2001 we are still waiting for a response.

    The only documentation we have for the OSS PC was written by Lynne Rixon (from the desktop area) in early 1998. It is included below :-

    SUPPLIED DOCUMENTATION

    SAP OSS Configuration
    The following are details of the configuration:

    SAPSERV6                                  194.39.139.16
    ISDN Number at SAP end                    (02) 99030800
    CHAP User ID at SAP end                   cross6
    CHAP Password                             HICCAN01
    Router address at SAP end                 203.13.157.1    Netmask 255.255.255.0
    Router address at HIC end ISDN interface  203.13.157.124  Netmask 255.255.255.0
    Router address at HIC end LAN interface   194.117.104.54
    Username for CHAP                         LAPSLEYD
    Password for CHAP                         HICCAN01
    SAProuter address at HIC end              194.117.104.53
    Hostname of SAProuter                     saprouter
    Internal address of saprouter PC          128.1.18.2  MASK(255.255.0.0)  GATEWAY(128.1.1


  • If SAP OSS fails to connect :-

    Complete the following procedures in order:
    ping the SAP OSS PC to check that IP is working: ping 194.117.104.53. If there is a reply everything is ok on the PC, complete the next step
    ping the HIC router address: ping 194.117.104.54. If there is a reply, complete the next step
    ping the router at SAP end: ping 203.13.157.1. If there is a reply there is a connection to the SAP end. If this fails the first time, try a second time. If this fails, there is a problem between HIC and SAP (could be that the ISDN line is down, something wrong in the PABX config, etc). If there is a reply there is an active connection, complete the next step
    ping sapserv6: ping 194.39.139.16. Once you receive a reply from sapserv6, commence with saplogon (to access OSS)

    End of supplied doco


  • THE AIRGAP

    Under current HIC security policy we are not permitted to have the OSS PC permanently connected to the production LAN. The airgap controls the connection between the OSS PC and the production LAN (when the airgap is closed we have connection to the production LAN). The airgap needs to be closed in the following circumstances :-
    when transferring files downloaded from sapserv6 to the production FINNET node
    when SAP are dialing into our systems

    At all other times the airgap should be open (ie the OSS PC is NOT connected to the production LAN).

    Physical vs Logical closing of the airgap

    Physically, the airgap is the black plug underneath the OSS PC which gets inserted into the wall socket. The airgap is always PHYSICALLY closed, ie the plug must always be inserted into the wall socket. This is because the PC has been configured to use this route as the default route - if the plug is not inserted into the wall socket then the PC will not be able to locate its default route and will hang.

    The connection of the OSS PC to the production LAN is therefore controlled at a LOGICAL level. The adapter/card which allows connection between the OSS PC and the production LAN is logically disabled. When connection between the OSS PC and the production LAN is required, the adapter/card must be logically enabled. This is done as follows :-
    right mouse click over the Network Neighbourhood icon (on the desktop)
    choose Properties, Bindings
    use the drop down arrow in field Show Bindings for: and select all adapters
    you will see a list showing 2 token ring adapters ([1] and [2]). The adapter we want to enable is [2]. Click on this entry and then on the Enable pushbutton.

    When the connection between the OSS PC and the production LAN is no longer required, make sure you disable the adapter (same as for enabling it but use the Disable pushbutton).

    USERID/PASSWORD

    As of August 21, 2001 the workstation userid and password for the OSS PC are :-
    userid   : administrator
    password : blent5p


  • SUPPORT

    If there is a problem with the OSS PC then a call needs to be raised with the Central Help Desk. The difficult part is in knowing which part of IBM/GSA the call should be directed to seeing as the fault could be in one (or more!) of many places.

    In the past the majority of problems have been resolved by Advantra - this is the area that looks after comms/network type issues. We have had problems with the PABX configuration in the past (ie the ISDN number at SAP end was removed from the PABX!)

    The last person from Advantra to fix the OSS PC was Bill Padovan (Aug, 2001). Peter Wass (from IBM/GSA) was the person who actually configured the PC when it had to be replaced by a new model (in mid-2001).

    If, while trying to resolve a problem, a support person suggests rebuilding the PC make sure this is only done AS A LAST RESORT!!!! They must realise that the OSS PC is a non-standard build and they must not touch the PC until they fully understand its many interfaces.


  • The SAP Library (Online Help)

    Most of the information you need regarding setting up the SAP Library is in a document called Installing the SAP Library (Release 4.6B). You can find this document on the Online Documentation CDs or on SAPNET (service.sap.com/instguides).

    Some other things to note are :-

    we are using help type HtmlHelpFile (ie the help documents are in Compiled HTML format (*.CHM) and are accessed from a file server. CHM files require about one tenth of the disk space required for uncompressed HTML files.)

    the R/3 profile parameters for the online help have been set as follows :-
    eu/iwb/help_type=5
    eu/iwb/installed_languages=E
    eu/iwb/path_win32=O:\sap46Bdoco\htmlhelp\helpdata
    (this field had been set to \\sanato03\sapteam\sap46Bdoco\htmlhelp\helpdata as part of the upgrade. However, for some unknown reason, the SAP Library did not work using this value (ie you could open the SAP Library but no documentation was displayed when a topic was double clicked). SAP said they didn't know why the difference but suggested it could be because the way the SAP Library works has been changed in 46C and maybe there are some transition issues. Refer to OSS message (not note!) 135540 for the whole sorry saga around this problem.)

    since the upgrade to 4.6B, the SAPGUIs have been installed locally (ie the SAPGUI directory is c:\Program Files\SAPpc\sapgui)

    the program which controls the displaying of the online help is shh.exe (this lives in c:\Program Files\SAPpc\sapgui\htmlhelp). Using Windows NT Explorer, you can find out the version of shh.exe by doing a right mouse click on shh.exe, selecting properties and then version. As of 24/8/01, we are on the latest version of shh.exe which is 4.6.4.2. You may also want to look at note 156185


  • if users get a popup error when trying to access the SAP Library, then check their version of shh.exe (chances are that they have an old version. In this case, copy the latest version from someone else. For some reason, the drag and drop copy in Explorer didn't work for shh.exe. I tried copying it from my c: drive into the o: drive. I was then planning on accessing the o: drive from the other PC (the one with the problem) to copy the latest shh.exe into the c: drive. Therefore, to copy the file, from Explorer, right click on shh.exe and use the send to, Floppy A: option. Then take the floppy to the other PC and copy the latest version of shh.exe from the floppy to the c: drive.)

    you may see reference (in notes, error messages, etc) to file SAPDOCCD.LOG. This file is supposed to be written to c:\WINNT. However, our desktops are locked down so that we do not have the access rights to write a file to this directory. Note 156185 says that if the SAPDOCCD.LOG cannot be written to c:\WINNT, then it is written to c:\temp to a file starting with shh. So if you ever need to look at the SAPDOCCD.LOG file and it isn't in c:\WINNT, try looking for a file starting with shh in c:\TEMP.

    the online help files (*.CHM) are stored in O:\sap46Bdoco\htmlhelp\helpdata\EN

    you may see reference (in notes, etc) to file SAPDOCCD.INI. This file allows the help settings from the R/3 parameters to be overridden (see Chapter 4 of Installing the SAP Library). Although this file exists in c:\WINNT on most PCs, the file contents are not for version 4.6 and are therefore ignored by SAP.

    useful notes : 156185, 97457, 115883, 94849, 95309, 94027, 101481, 174027. While these notes may not help you directly solve your problem, they may hopefully provide you with some useful background information on how the whole online help process hangs together.

    OSS note 324115 (Missing IMG contents on 46B documentation CD) was applied on 19/10/01


  • Number range buffering

    (7/9/01) Matt Pymont (with authorisation from Allan Taylor) has requested that number range buffering for CO documents (number range object RK_BELEG) be turned off. This is so that there are not gaps in the number ranges due to the system being bounced each night (for offline backups) thereby clearing the buffers. For CO documents, 100 document numbers are preallocated when the system is started. Any numbers not used are then lost when the system is stopped, hence the missing numbers. Refer to OSS note 62077.


  • Are the DB backups running?

    To check if the database backups are running :-

    telnet to the relevant node
    ps -ef | grep db2 | pg
    if there are any db2med jobs in the list then this means the backups are running
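    If you just want a quick yes/no answer, the check above can be collapsed into a single line (a convenience sketch only) :-

    ps -ef | grep db2med | grep -v grep > /dev/null && echo "backups are running" || echo "no db2med processes - backups are not running"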

    As of 10/7/02 the backups run at :-
    SPA   daily         21:00 - 03:00
    SDA   weekly (Sun)  18:00 - 21:30
    SQA   weekly (Sun)  18:00 - 21:30
    SPP   daily         22:00 - 23:00  (note : to keep SPP and SPA in sync, SPP is stopped/started at the same time as SPA, ie SPP is stopped at 21:00 and started at 03:00 even though its backups only run from 22:00 - 23:00)
    SDP   daily         22:00 - 23:00  (note : SDP is only being backed up daily until SPP goes live and is stabilised - after that it will be backed up weekly. To keep SDA and SDP in sync, SDP is stopped on Sundays at 18:00)


  • Applying GUI/Frontend Patches

    Refer to OSS notes 361222 (SapPatch: Importing GUI Patches), 96885 (Downloading a frontend patch).

    Remember that (since the upgrade) our GUIs are all locally installed (on c:\program files\sappc).

    Some points re note 361222 :-
    the note talks about the patch tool being available as setup update sappatch46D_Y.exe. Once you download this, in Windows Explorer, double click on the file and then unpack to c:\program files\sappc (do NOT specify \netinst here)
    the note talks about copying the files contained in the update into directory \netinst of your installation server. The netinst directory is under c:\program files\sappc.
    follow the instructions under Patch procedure for standalone computers (not Patch procedure for installation servers).
    the localpat46D_y.exe file should be unzipped to c:\program files\sappc
    step 3 says on your computer, start installation program sapsetup.... To do this, get into DOS, cd c:\program files\sappc and enter sapsetup.exe /patch

    To get the patch onto a particular PC you have to :-
    download from sapserv6 to the OSS PC
    ftp the patch from the OSS PC to node 9 Tugg (put in /usr/sap/trans/tmp)
    from each PC, get into DOS, cd c:\program files\sappc\netinst and do an ftp get of the patch from node 9 Tugg

    You can see the version of sapgui on your PC by viewing the properties of sapgui.exe (right mouse click, select properties and then the version tab). This file is in c:\program files\sappc\sapgui.

    As of 23/10/01 we still need to develop a strategy for deploying frontend patches to all users. We must talk to IBM/GSA about how best to do this in a timely and efficient manner. Also can branch offices receive the updates via Tivoli?

    As of 10/7/02, Konrad Belling in the IBM GSA SOE team has packaged the SAPGUI patches, ready for testing. Di Tuckerman from the FINNET helpdesk has been identified to do the testing of the SAPGUI patches, although she has not yet received the push.

    NB : We must ensure that the 4.6D SAPGUI patches are NOT pushed to any desktops which currently have 6.10 (for the EBP project).


  • TIMEOUT PROBLEMS IN SAPLKKBL

    OSS Message 184805 raised

    Message number : 0020079747 0000184805 2001
    Installation : 1420023894
    Short text : TIMEOUTs in SAPLKKBL since applying support packages
    Contact Person : Ms. Brenda Stines
    Country : AU
    Phone number : (02)61246572
    R/3 release : 46B
    Customer name : Health Insurance Commission
    Database : DB2/UDB 6.
    Customer number : 12501
    Oper. System : AIX 4.3.X

    Description

    We are on 46B with 46D kernel. Night before last I applied the latest support packages from 29 - 32 (KA, KB and KH). I also applied kernel patch 797 (from 620). Since these were applied we have been getting short dumps of timeouts in SAPLKKBL when running txns like FBL03.

    I tried backing out the kernel patch and also applied the latest kernel patch (825) - neither of these made any difference to the problem.

    I read note 369475 re'Frontend hangs when generating list'. This recommended applying the latest frontend patch (gui46D_472.exe) which I have done to my desktop (as we have local installations here, ie for note 361222 I followed the 'Patch procedures for standalone computers'). This did not make any difference to the problem.

    Your advice would be greatly appreciated as many users are wanting to run reports and they are getting timed out. Users ran these reports the day before the maintenance were applied with no problems.

    Many thanks,
    Brenda Stines
    6124 6572
    ---------------------------------------------------------------------
    Reply 18.10.2001 19:18:45 SAP AG
    Dear Ms. Stines,

    please use the report 'BALVBUFDEL' to reset the ALV buffer.

    If this doesn't help, we would also like to ask you to add a step-by-step description of the procedure that resulted in the error. Please give us also data, which we can use to reproduce the error.

    Please give me also the information about the logon to your system and open the connection before you'll send that message back.

    Thanks for your assistance.

    Kind regards
    Anett Kaiser
    Global Support - Technology


    ---------------------------------------------------------------------
    Info for SAP 18.10.2001 20:50:13 Brenda Stines
    Hello Anett,

    Unfortunately executing report BALVBUFDEL did not help the problem.

    Here is what has happened to date :-

    1) users running reports via FBL3 with no problems

    2) support packages 29 - 32 (KA, KB and KH) were applied on Tuesday night 16/10. Also updated the kernel patch from 620 to 797.

    3) on Wednesday and Thursday (today) users have no longer been able to run their reports via FBL3 due to a timeout problem. The parms they are using are exactly the same as those used before the maintenance was applied. The screen hangs with 'List being generated' and then you eventually get the timeout error.

    4) I tried reverting the kernel patch to 620 - this did not make any difference.

    5) I tried applying the latest kernel patch (825) - this did not make any difference.

    6) I have now tried executing report balvbufdel - this did not make any difference.

    7) you can test this from txn FBL3 by using G/L account 10561 and use the open items status option with today's date.

    8) our system is going down in 30mins for offline backups (ie at 21:10). The system will be up again at 01:45. I am not sure which country you are in and what time it is there but I will open a connection for you just in case. The details are as follows :-

    SID = SPA
    client = 200
    userid = sapassist
    password = rem9te

    If you look in st22 you will see how many TIMEOUTs we have had over the last 2 days. (Don't worry about the system_core_dumped errors - I have applied note 440905 for this one!)

    I have opened the connection for 3 days but please remember that our system is down for offline backups from 21:10 TO 01:45.

    Many thanks,
    Brenda
    ---------------------------------------------------------------------
    Reply 19.10.2001 01:07:26 SAP AG
    Please try the service connection at customer office hours. The system is at the moment unreachable, maybe cause a save over night. If there are still problems you can send this ticket back to XX-SER-NET-HTL.

    back to me.
    Best regards,


    Günther
    ---------------------------------------------------------------------
    Call to Customer 19.10.2001 12:50:37 SAP AG
    Customer Call
    Performed on 19.10.2001 06:49:03
    Contact person Ms. Brenda Stines
    Status of discussion: Left message on voice mail
    Subject of conversation: left a voicemail. cannot logon to their system. I used logon info:
    SID: SPA
    Client: 200
    userID: sapassist
    password: rem9te
    asked her to return my call, through our SAP Australia office
    ---------------------------------------------------------------------
    Reply 19.10.2001 13:16:58 SAP AG
    Hello Brenda,

    I cannot seem to logon to your system. Please confirm that this is the right logon information:
    SID = SPA
    client = 200
    userid = sapassist
    password = rem9te

    The connection status is that it is open for about two days and seven hours but when I try to logon, I get the error "user is locked..." when using the above. I tried to contact you but I was not able to reach you, so I left a voicemail.

    Please send this message back to me with logon details and when it is ok to logon to your system.

    Thanks and best regards,
    Minda Froilan
    Global Support - Financials
    SAP Asia Pacific
    ---------------------------------------------------------------------
    Info for Customer 19.10.2001 13:38:11 SAP AG
    Call to Customer; left voice message for Brenda to advise that this issue is currently in customer action.
    ---------------------------------------------------------------------
    Info for SAP 19.10.2001 14:51:16 Brenda Stines
    Hi Minda,

    The userid was locked due to incorrect logons. I logged on using this given password last night so it must have happened since then. I have reset the password. Here are the details :-

    SID = SPA
    client = 200
    userid = sapassist
    password = support

    Many thanks,


    Brenda
    ---------------------------------------------------------------------
    Reply 19.10.2001 16:08:13 SAP AG
    Hello Brenda,

    I have managed to call FBL3 for GL acct 10561 and it continued to select the line items for today's date until 188,000+ line items and just as the list was being generated, I also got the same dump.

    Because of the large data volume, it seemed to have exceeded the maximum runtime of the program set by the profile parameter "rdisp/max_wprun_time". The current setting is 600 seconds. And the maximum runtime of a program is at least twice the value of the system profile parameter "rdisp/max_wprun_time".

    As per my research of this issue, some options were given in other customer messages as follows to improve the situation where there is a combination of a large volume of data in combination with a restrictive time-out parameter:

    1) You could simply set the TIME_OUT parameter to a higher value and continue to produce a line item list in online mode.
    2) You could produce the list in a background job.

    Please check if these options are possible in your case. Meanwhile, please be adviced that I will forward your message back to my development colleagues to assist us further on this case. They will reply to you directly for updates.

    Thanks and best regards,
    Minda Froilan
    Global Support - Financials
    SAP Asia Pacific
    ---------------------------------------------------------------------
    Reply 19.10.2001 18:17:57 SAP AG
    Hello Brenda,

    I have checked the situation in your system SPA. A SQL trace shows that the database access is reasonably fast. You may verify this observing the message line in FBL3N: The number of selected line items is counting up continually up to the final number of approx. 180,000. However, this consumes a lot of time due to the very large number of selected line items.

    The main problem arises when the resulting list is rendered by the ALV list viewer tool, indicated by the message "List being generated". This takes again very long, causing the time out error. Basically, the ALV is not designed to handle such a large list.

    We should focus on the point of reducing the result volume. What is the purpose of generating a line item list with 180,000 lines? Which operations are to be performed on this list? Which filter and search criteria are applied afterwards to find the line items of interest?

    I noticed that the given selection scenario reads postings from four fiscal years, 1998-2001, where the main volume is located in 2000 and 2001 (48,000 and 80,000 line items, resp.). Is it really necessary to list line items from old fiscal years?


  • It might be a good idea to archive old line items, or to apply dynamic selections on the selection screen of FBL3N, like 'fiscal year' or 'posting date'. The resulting list should be as small as possible while still conforming to the business requirements. A list with only 10,000 line items can be rendered by your system within a few minutes.

    I appreciate your feedback.

    With best regards,
    Marco Werner-Kiwull (Development R/3 Financials)
    ---------------------------------------------------------------------
    Info for SAP 19.10.2001 19:47:06 Brenda Stines
    Hello Marco,

    I agree with your points regarding trying to restrict the volume of data. We are just starting to look at the whole archiving issue. However, the fact remains that the users were able to generate these lists before the support packages and kernel patch were applied and now cannot. Are we saying that there is no fix for this situation and that the users now essentially have less functionality than before the maintenance was applied? I realize that I could increase the rdisp/max_wprun_time however this means that the system is taking longer to do something than it used to. To me this is just masking an underlying problem and I know the users will not be happy as they already complain with the time it takes to run these reports. I am not sure about running the reports in background but the question is still why can we no longer run the reports we used to before the support packages/kernel patch were applied? Has there been changes to the ALV which are now causing these problems?

Your assistance is much appreciated,
Brenda Stines

---------------------------------------------------------------------
Reply 19.10.2001 21:21:16 SAP AG

Hello Brenda,

I forward this message to our ALV basis group. We will check whether the patches may have affected the program performance.

I have two questions related to the situation before the problem:
- How long did it approx. take to generate the list with 180,000 line items?
- What was the business purpose of this list, and how did users work with it?

With best regards,
Marco Werner-Kiwull (Development R/3 Financials)

---------------------------------------------------------------------
Info for SAP 20.10.2001 09:23:15 Brenda Stines

Hello Marco,

I'm not sure how long it took to generate the list for 180,000 line items. This case was using the 'Open items' option. The users were actually using the 'All items' option - which would be bringing back even more lines! Because it is the weekend here I have no users to ask about how long these took or what the business purpose of the list is. I will do this on Monday.

    As far as the timing of these jobs, you could look in st03 for days prior to when the maintenance was applied. If you look at Tuesday 16/10, dialog tasks, top time - you will see some FBL3 txns there. These did not dump with the timeout as the support packages had not yet been applied. This will give you some idea of response time.

    I will eagerly await the response from the ALV group.

    Thank you for your help.

Kind regards,
Brenda Stines

---------------------------------------------------------------------
Reply 22.10.2001 17:03:00 SAP AG

Hello Brenda,

    thank you for the information.

The ALV group asserts that no performance relevant changes have been included in the support packages you have applied recently. Moreover, the ALV list tool is simply not designed for handling such large output lists.

It seems that you have been working close to the limit of the timeout parameter before. If the applied support packages have affected the run time per output list line only a tiny bit, this may sum up considerably with 180,000 lines, and may have taken you over the limit.

    There are several opportunities to solve the problem:

1) Reduce the list output volume by narrowing the selection criteria: This would be the best way, but it requires some analysis of the business purpose of the output, and how accountants work with it. I am looking forward to obtaining more information on that.

    2) Run the report in a background job.

    3) Reduce the list output volume by removing list columns: The list output time of the ALV is roughly proportional to the number of lines times the number of columns. Removing list columns of minor interest from your layout may reduce the total runtime considerably.

    And, as a short-term workaround:

    4) Increase the profile parameter rdisp/max_wprun_time.

With best regards,
Marco Werner-Kiwull (Development R/3 Financials)

P.S. I have adjusted the priority according to note 67739. "Very High" is reserved only for production-down situations. Thank you for your understanding.

---------------------------------------------------------------------
Info for SAP 23.10.2001 09:08:07 Brenda Stines

Hello Marco,

    Thank you for your reply.

    Regarding your suggestions :-

1) I am liaising with my business area with the aim of re-educating the users so that they use as much selection criteria as possible, thereby reducing the size of the resultant report.

    2) I did run the report in background. It took approx 8 hours to complete (188,875 items were selected). sm50 showed that ABAP SAPLKKBL was still being called. Due to the long runtime, I don't think this is a feasible solution. I would rather educate the users in the correct use of the reports (this will also put less of a drain on system resources).

    3) I will look into the possibility of removing list columns.

4) I would not like to increase rdisp/max_wprun_time. If the background job took 8 hours to complete, then I would have to essentially make rdisp/max_wprun_time = 0, which I don't want to do. Again, I would rather go with the user re-education.

    I have no problem with you adjusting the priority of this message.

Is it possible for a check to be added into ABAP SAPLKKBL so that it produces an error message if more than X number of items are selected (X being the maximum number of items which the ALV can comfortably handle)? This would be much more user friendly and a better use of system resources than letting the system hang and then dump. This way users can be alerted ASAP that they need to use more restrictive selection parms.

Also, do you know of any mechanisms we could put in place to ensure users do not generate these huge reports? For example, there is a field in both fbl3 and fbl5 ('Maximum number of items'). Is there a way (without changing SAP code) that we can 'hardcode' a value into this field, eg a PID?

Many thanks,
Brenda Stines

---------------------------------------------------------------------
Reply 23.10.2001 21:28:10 SAP AG

Hello Brenda,

    the ALV list tool (program SAPLKKBL) is a very generic service that is used by many applications. The performance depends on the application context and on the hardware resources. Therefore it is not feasible to have a hard-coded maximum number there in place.

    It is a better idea to apply this cutoff in the application. The field "Maximum number of items" is tied to parameter id FIT_NMAX. You may use transaction FB00, subscreen "Line items" to maintain a value per user (see note 374141). If you wish to pre-set this number for all users (mass change), you may code a report to insert the appropriate records in database table USR05.

With best regards,
Marco Werner-Kiwull (Development R/3 Financials)

    Email sent to users 24/10/01

    To: Zac Lavarn/QLD/HIC@Prod, Karen Ashby/QLD/HIC@Prod, Mila Barcelon/NSW/HIC@Prod, Sandra Pralica/NSW/HIC@Prod, Tim Tran/QLD/HIC@Prod, Tanya Lewis/QLD/HIC@Prod, Sue Graham/QLD/HIC@Prod, Bianca Lepre/NSW/HIC@Prod, Julie Harrison/QLD/HIC@Prod, Julio Silva/QLD/HIC@Prod, Esther Balint/CO/HIC@Prod, Carolyn Kozlovskis/QLD/HIC@Prod, Nicole Coburn/QLD/HIC@Prod, Violet Opravel/NSW/HIC@Prod, Navenka Lifu/QLD/HIC@Prod, Ben Kelly/QLD/HIC@Prod, Janette Scott/QLD/HIC@Prod, William Otto/NSW/HIC@Prod, Robert Ciocca/QLD/HIC@Prod, Abha Gargya/NSW/HIC@Prod, Tracey Gill/QLD/HIC@Prod, Michael Dawes/CO/HIC@Prod, Ahalya Shakespeare/CO/HIC@Prod

    cc: Di Izzard/CO/HIC@Prod, Hai Luong/CO/HIC@Prod, Allan Taylor/CO/HIC@Prod, Tony Bombardiere/CO/HIC@Prod, Diane Tuckerman/CO/HIC@Prod, Elizabeth Moorhouse/CO/HIC@Prod

    Subject: TIMEOUT problems in production FINNET (SPA)

    (Please pass this mail on to other FINNET users in your area)

THE PROBLEM
Many users have experienced timeout problems in production FINNET recently. Most problems have occurred when using transaction FBL3 (G/L account line item display) although the problem is not restricted to this transaction.

THE CAUSE
These timeout problems started after SAP supplied maintenance was applied to the system, that is, since last Wednesday 17/10. (The problem did not show up when testing was done in our development and QA environments as the amount of data in these systems is not large enough to trigger the problem.)

    SAP have made changes which affect the part of the system which formats and displays lists. Apparently this component (called the ABAP List Viewer - ALV) is not designed to handle large lists and that is why we are experiencing these timeout problems.

    Users I have spoken to have told me that they are running reports and not specifying any/many selection criteria (eg they are running reports for 'All items' using the default date which is 'today'). This is bringing back an incredibly large amount of data (in excess of 200,000 lines) as the system is returning all relevant items since 1998 (when FINNET went live). The transaction is then timing out because the ALV is not equipped to cope with such large amounts of data.

SAP have asked why we need to generate such large reports - users should be using as much selection criteria (filtering) as possible to limit the amount of data read from the database (and therefore the amount of data displayed). Reading more data than is required puts a drain on system resources and substantially increases response time. Also, users must then wade through a large amount of material to find the data of interest.

    Some users may be in the habit of executing a report without entering any criteria to limit the amount of data retrieved. This practice would have been fine when FINNET was relatively new, as retrieving 'everything' did not bring back a large volume of data. However, FINNET has now been live for over 3 years and there is now a large amount of data in the system. It is no longer practical to request reports without specifying as much selection criteria as possible.

    THE SOLUTION

    When running reports use as much selection criteria as possible.

1) Always be as specific as possible with date fields
Remember, if you do not specify a date, in most transactions this means that the system will be reading all data (from 'today' back to when FINNET went live in 1998). For FBL3 (G/L account line item display) you should use 'All items' and specify a date range in the 'Posting date' fields.

2) Optionally, restrict the number of items which the system retrieves
You may also find it helpful to use the 'Maximum number of items' field (this is the last field on the FBL3/FBL5 screens).


As the name suggests, this field controls the maximum number of entries retrieved from the database (and therefore displayed). For example, if you specify 5000 in this field and your report generates more than 5000 items, you will get a warning saying 'Selection was terminated after 5000 items'. If you then click on the green tick, the report will be displayed showing only the 5000 items retrieved.

    3) Optionally, try using Dynamic Selections

From FBL3 and FBL5, you can use the Dynamic Selections option to specify more filtering parameters. SAP suggest trying the 'fiscal year' or 'posting date' fields. The dynamic selections screen looks like this :-

    To add a new dynamic selection field, click the relevant field (eg 'Fiscal year') in the left hand pane of the screen and then on 'Adopt selected items' at the bottom of the left hand pane. You will then see that 'Fiscal year' is added to the dynamic selections in the right hand pane.

SUMMARY
When running reports please be as specific as possible with your selection parameters. This will mean :-

- your transaction will not timeout
- you will not have to wade through masses of data to find what you want
- response time will be better
- the system will not be doing more work than necessary

    Any feedback regarding this issue would be most appreciated.

    If you have any business related queries (for example, regarding the appropriate selection parameters to use to get the report you want) please contact Di Izzard on x 46081.

Regards,
Brenda Stines
x 46572

    To: Zac Lavarn/QLD/HIC@Prod, Karen Ashby/QLD/HIC@Prod, Mila Barcelon/NSW/HIC@Prod, Sandra Pralica/NSW/HIC@Prod, Tim Tran/QLD/HIC@Prod, Tanya Lewis/QLD/HIC@Prod, Sue Graham/QLD/HIC@Prod, Bianca Lepre/NSW/HIC@Prod, Julie Harrison/QLD/HIC@Prod, Julio Silva/QLD/HIC@Prod, Esther Balint/CO/HIC@Prod, Carolyn Kozlovskis/QLD/HIC@Prod, Nicole Coburn/QLD/HIC@Prod, Violet Opravel/NSW/HIC@Prod, Navenka Lifu/QLD/HIC@Prod, Ben Kelly/QLD/HIC@Prod, Janette Scott/QLD/HIC@Prod, William Otto/NSW/HIC@Prod, Robert Ciocca/QLD/HIC@Prod, Abha Gargya/NSW/HIC@Prod, Tracey Gill/QLD/HIC@Prod, Michael Dawes/CO/HIC@Prod, Ahalya Shakespeare/CO/HIC@Prod

    cc: Di Izzard/CO/HIC@Prod, Hai Luong/CO/HIC@Prod, Allan Taylor/CO/HIC@Prod, Tony Bombardiere/CO/HIC@Prod, Diane Tuckerman/CO/HIC@Prod, Elizabeth Moorhouse/CO/HIC@Prod

    Subject: TIMEOUT Problems #2 (especially for COMPENSATION users)

    It has come to our attention that most of the users experiencing the timeout problems are in the Compensation area/s.

    Even specifying a posting period of 1/7/01 - 23/10/01 resulted in a timeout error (over 28,000 items were selected). We have been told that most of the time, a WIN number is known. If this is the case, then this number should be specified in the 'Assignment' field of the Dynamic Selections screen. Following are step by step instructions on how to do this :-

    In FBL3 :-

    1) Specify the G/L Account and select the 'All Items' pushbutton


2) Use the Dynamic Selections icon.

    The following screen (or similar) will be displayed :-


If the 'Assignment' field does not appear in the right hand pane of your screen, click on 'Assignment' in the left hand pane of your screen and then on 'Adopt selected items'. The 'Assignment' field will now appear in the right hand pane of your screen.

3) Enter the WIN number in the 'Assignment' field and press the save icon.


You will then be returned to the main FBL3 screen. Notice that the Dynamic Selections icon has now changed to include the text '1 active'. This lets you know that there is 1 dynamic selection value being used for this search.

If you specify a WIN number in the 'Assignment' field then there is no real need to specify a posting date range. However, if you know the posting date then it may be quicker to specify it (or a posting date range) as well.

    4) Execute your report.

    If the specified WIN number exists, your report will look like this :-


NOTE
Any changes made on the Dynamic Selections screen (ie adding new dynamic selections using the 'Adopt selected items' button, or specifying values for a particular field, eg putting 2002861034 in the 'Assignment' field) are only kept for this execution of the transaction. If you exit from FBL3 and then use FBL3 again, your dynamic selection changes from your previous time will NOT be there.

    If you have any feedback or queries, please do not hesitate to contact me.

Kind regards,
Brenda Stines
46572

    Finally...

On 25/10/01, we followed SAP's advice and made a mass change to all users (as all users have access to FBL3 and FBL5) to set FIT_NMAX to 10,000. This PID was also added to the new user template.


The Compensation Jobs

    (NOTE : Changes have been made to the way the compo jobs work as of June, 1999. The following doco is included here for background information. For information on how the compo jobs currently work, please refer to the next section Changes to Compensation Jobs.)

    1) Overview

    Files are transferred from the compensation system to the R/3 UNIX box. For each file transferred the script CompenSPA is run. This script initiates function module Z_EVENT_RAISE within R/3 which in turn raises the event COMPENSATION_INTERFACE. This event causes the initiation of job Z_INTF_COMPENSATION, which executes program ZCPNDALY. This program is responsible for creating the batch input sessions which are then released and monitored by staff in the states.

    The following is a more detailed explanation of these steps.

    2) The File Transfer

At 22:00 (Monday - Saturday) 6 files are sent from the compensation UNIX boxes to the R/3 UNIX box (node 17) via rcp. No file transfers are done on Sundays as the R/3 system is unavailable at 22:00 due to weekly offline backups being in progress.

    The files sent from the compensation system are held in the /home/tower directory and can be accessed by logging onto node 17 (128.1.2.117) as user spaadm. The transferred files use the following naming convention :-

<prefix>cpnacc.<ddmmyyyy>.<hhmmss>

The 2 digit numeric prefix indicates which compensation UNIX box the file originated from. Files with a 02, 03 or 04 prefix are sent from NSW. Files with a 05, 06 or 07 prefix are sent from QLD. The prefix values are as follows :-

02 - SYD1
03 - SYD2
04 - SYD3
05 - BNE1
06 - BNE2
07 - BNE3

    For example, file 04cpnacc.28041998.220004 originated from the SYD3 compensation box and was transmitted on 28/04/1998 at 22:00:04.
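For illustration only, the nightly copy from one of the compensation boxes might look something like the sketch below (run on the compensation box itself). The transfer is controlled by the compensation area, so the local file path and the exact command used there are assumptions; only the target directory, target user and naming convention come from the description above :-

#!/bin/sh
# Sketch only - NOT the compensation area's real transfer script.
# Copies tonight's extract to node 17 as user spaadm using the
# <prefix>cpnacc.<ddmmyyyy>.<hhmmss> naming convention.
PREFIX=04                               # 02-04 = NSW boxes, 05-07 = QLD boxes
STAMP=`date +%d%m%Y`.`date +%H%M%S`     # eg 28041998.220004
rcp /tmp/compo_extract.dat spaadm@128.1.2.117:/home/tower/${PREFIX}cpnacc.${STAMP}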


3) Script CompenSPA

    This script resides in directory /home/tower and is initiated for each of the 6 transferred files. CompenSPA initiates the function module Z_EVENT_RAISE within the R/3 system and sends parameters which are used in Z_EVENT_RAISE and ABAP ZCPNDALY. Because of this interaction, it is imperative that the R/3 (SPA) system is running from 22:00 Monday to Saturday.
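CompenSPA itself is not reproduced in this manual. As a rough illustration of how a UNIX script can initiate a function module inside R/3, a call via SAP's startrfc command line utility might look something like the sketch below. The connection details (host, system number, client, RFC user) and the name of the import parameter are assumptions - check the actual script in /home/tower for the real call :-

#!/bin/sh
# Sketch only - one possible way for CompenSPA to call Z_EVENT_RAISE.
# All connection values below are placeholders, not the real SPA settings.
FILE=$1                     # name of the transferred compo file
APPHOST=spahost             # assumption - R/3 application server host
SYSNR=00                    # assumption - system number
CLIENT=100                  # assumption - production client
startrfc -3 -d SPA -h $APPHOST -s $SYSNR -c $CLIENT \
         -u rfcuser -p secret \
         -F Z_EVENT_RAISE -E FILENAME="$FILE"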

    4) Function module Z_EVENT_RAISE

    This function module can be accessed from the R/3 system using txn se37.

The following is essentially what Z_EVENT_RAISE does :-
- accepts parameters specified in the script CompenSPA (most importantly the name of the file being processed)
- locks table ZINTFCNTRL
- inserts a record in table ZINTFCNTRL for the file being processed with a status of 'Not yet processed'
- does some housekeeping on table ZINTFCNTRL (ie deletes records with a status of 'Completed' which are older than 30 days)
- unlocks table ZINTFCNTRL
- raises event COMPENSATION_INTERFACE (the raising of this event will cause job Z_INTF_COMPENSATION to start)

Note : table ZINTFCNTRL can be accessed using txn se16

    5) Job Z_INTF_COMPENSATION

    This job initiates program ZCPNDALY (which is responsible for creating the batch input sessions). Z_INTF_COMPENSATION is started when the event COMPENSATION_INTERFACE is raised (by function module Z_EVENT_RAISE).

    Z_INTF_COMPENSATION can be monitored via txn sm37. For each day (Monday - Saturday) there should be 6 of these jobs run (one for each of the files transferred from the compensation system).

    An important thing to note about Z_INTF_COMPENSATION is the Print Specifications for the job. These can be viewed from sm37 - click on the job line to highlight it and then use the Steps pushbutton; click on the stepname (ZCPNDALY) and use the view pushbutton (glasses icon); from here use the Print specifications pushbutton at the bottom of the screen. The Print Immediately and Delete after print boxes should NOT be checked (ie they should be blank). This avoids the case where reports are printed automatically, are lost and then can not be reprinted.


6) ABAP ZCPNDALY

This program is run from job Z_INTF_COMPENSATION.

Essentially this program does the following :-
- accepts the parameters set up by script CompenSPA (eg name of the file being processed)
- locks table ZINTFCNTRL
- checks to see if there are any records to process (reads ZINTFCNTRL for records where the status is 'Not yet processed'; if no records are found, the message 'There are no files waiting to be processed' is generated in the spool request)
- checks to see whether the current file has already been processed (checks ZINTFCNTRL to see if the current file is in the table with a status of 'Processing' or 'Completed'). Note that in this case the program is exited and no messages are generated (so the spool request will contain 'Empty spool request')
- inserts a record into ZINTFCNTRL for the current file with a status of 'Processing' and deletes the record for the current file with a status of 'Not yet processed'
- unlocks table ZINTFCNTRL
- reads the transferred file and does numerous tests including :-

- reads table T912 to ensure the correct run number is being processed; updates T912 if the run numbers are correct (note: T912 can be viewed using txn se16)
- ensures that totals/sub-totals, etc reconcile (eg refund amounts, nursing amounts, etc)
- checks to ensure that all amounts are not 0 (ie 0 cases to process)

    NOTE : if any of the above tests fail then no batch input session is created!!!!!

- locks table ZINTFCNTRL
- inserts a record into ZINTFCNTRL for the current file with a status of 'Completed' and deletes the record for the current file with a status of 'Processing'
- unlocks table ZINTFCNTRL
- if no errors were detected, creates a batch input session based on the contents of the transferred file


7) Some troubleshooting tips

a) Did we receive the files from the compensation system?
- log onto node 17 (128.1.2.117) as spaadm
- cd /home/tower
- do an ls -altr for the date concerned, eg if you are checking the files sent on 30/04/1998 you would do ls -altr *30041998*
- there should be 6 files - if not, contact Wayne Monfries (1718) or Dennis Foden (6516). (A quick check for this tip is sketched after these troubleshooting tips.)

b) Were the files processed in R/3?
- log onto the R/3 SPA system
- use txn se16
- enter table name ZINTFCNTRL and press enter
- in the pathname field enter *<date>*, eg if you are checking the files sent on 30/04/1998 you would enter *30041998*
- use the execute pushbutton (tick with red cherry)

- you should see 6 files
- if a file is missing from table ZINTFCNTRL :-

- check if the system was up around 22:00 (txn sm21). If the system was down then the CompenSPA script wouldn't have been able to initiate the Z_EVENT_RAISE function module and the subsequent jobs wouldn't have been run. In this case rerun CompenSPA.
- if you can't find out why the file is missing from table ZINTFCNTRL then it is pretty safe to rerun CompenSPA. To rerun CompenSPA, log onto node 17 as spaadm, get into /home/tower and enter CompenSPA <filename>. For example, if file 05cpnacc.30041998.220003 was missing from table ZINTFCNTRL then you would enter CompenSPA 05cpnacc.30041998.220003.

c) Was a batch input session created for each transferred file?
If a file is in table ZINTFCNTRL with a status of 'Completed' but there is no batch input session (txn sm35), check the reports on the spool (txn sp01) for errors in the transferred files. Possible errors are ones to do with incorrect run numbers, 0 cases to process or amounts which did not reconcile - in these cases a batch input session is not created. If the spool report contains 'Empty spool request', remember that this means that the file was processed on a previous day and so was not reprocessed.
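The check in tip (a) can be scripted. The sketch below is not an existing script - it simply lists which of the 6 expected files are present in /home/tower for a given date and suggests the CompenSPA rerun command for any prefix that is missing :-

#!/bin/sh
# Sketch only - run on node 17 (128.1.2.117) as spaadm.
# Usage: sh checkcpn.sh <ddmmyyyy>     eg  sh checkcpn.sh 30041998
DATE=$1
cd /home/tower || exit 1
for PREFIX in 02 03 04 05 06 07         # 02-04 = NSW, 05-07 = QLD
do
    FILE=`ls ${PREFIX}cpnacc.${DATE}.* 2>/dev/null | head -1`
    if [ -z "$FILE" ]
    then
        echo "prefix ${PREFIX}: MISSING - if rerunning, use: CompenSPA ${PREFIX}cpnacc.${DATE}.<time>"
    else
        echo "prefix ${PREFIX}: $FILE"
    fi
done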


Changes to compensation jobs (as of June, 1999)

Due to changes to the compensation system, the way the compo files are made available to FINNET has changed. Instead of the files being ftp'd from the compo machines to the R/3 UNIX box, the files are now transferred by doing an ftp get of the files (residing on the mainframe) from the R/3 UNIX machine. (The compensation system transfers the files from their NT boxes to the mainframe.) In addition, the number of files has changed from 6 files (3 NSW and 3 QLD) to 2 files (1 NSW and 1 QLD).

    1) Overview

Files created by the compensation system are stored on the mainframe and then retrieved by the R/3 UNIX box. This file transfer is controlled by script getcpn.sh which in turn calls script CompenSPA. The roles of the CompenSPA script, the Z_EVENT_RAISE function module and the ZCPNDALY program have not changed (see the documentation above for details).

    2) The File Transfer

    The file transfer is executed by script getcpn.sh (residing in directory /home/compospa). This script is controlled via an entry in the cron table which executes the script at 10:15 (for QLD) and 10:30 (for NSW), Mondays to Saturdays. As in the old system, no file transfers will be done on Sundays as the R/3 system is unavailable due to the running of weekly offline backups.
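Based on the times above, the cron entries for the compospa user (see User compospa below) would look something like this sketch; check the live cron table rather than relying on the exact lines here :-

# minute hour day-of-month month day-of-week  command
# 1-6 = Monday to Saturday (no Sunday run - weekly offline backups)
15 10 * * 1-6 /home/compospa/getcpn.sh qld
30 10 * * 1-6 /home/compospa/getcpn.sh nsw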

    The transferred files reside in /home/compospa and use the following naming convention :-

cpnacc<state>.<generation>.<yyyymmdd>.<hhmmss>

The mainframe (source) file is a GDG (file name XXXXX). For diagnostic purposes the generation number is included as part of the transferred (target) file name. The <state> value will be either nsw or qld. For example, file cpnaccnsw.G0022V00.19990423.223000 contains NSW compo data which was transferred from the mainframe on 23/4/99 at 2230. The file transferred was generation 22 of file XXXXX.

Once each file has been successfully ftp'd from the mainframe, the CompenSPA script will be called (by script getcpn.sh). From this point on, the procedure is the same as in the old system.
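As a rough illustration only (not a copy of the live script), the core of getcpn.sh might look something like the sketch below. The mainframe host name, ftp user/password and GDG dataset name are placeholders - the real values live in the script itself and are not documented here :-

#!/bin/sh
# Sketch only - NOT the live getcpn.sh.  Pulls one generation of the compo
# GDG from the mainframe, logs the ftp dialogue, then hands the downloaded
# file to CompenSPA.
STATE=$1                                     # nsw or qld
if [ -n "$2" ]
then
    GEN=".$2"                                # a specific generation, eg .G0022V00
else
    GEN="(0)"                                # default - the current generation
fi
MFHOST=mvshost                               # placeholder mainframe host name
MFDSN=HIC.COMPO.EXTRACT.$STATE               # placeholder for the GDG (XXXXX above)
TARGET=/home/compospa/cpnacc$STATE.${2:-G0000V00}.`date +%Y%m%d`.`date +%H%M%S`
ftp -n $MFHOST >> /home/compospa/SAPftp.LOG 2>&1 <<EOF
user ftpuser ftppassword
get '$MFDSN$GEN' $TARGET
quit
EOF
/home/compospa/CompenSPA $TARGET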


3) User compospa

    Instead of running the compo jobs under user spaadm, a new userid (compospa) has been created specifically for compensation. This gives us an added level of security as the password for spaadm is well known whereas only a select few know the compospa password.

    All compo scripts, files, etc are now stored in the /home/compospa directory.

    4) Rerunning the compo jobs

    To rerun the compo jobs, log onto node 17 (128.1.2.117) as compospa. Execute the getcpn.sh script :-

getcpn.sh <state> <generation>

where <state> is nsw or qld. The <generation> parameter is optional - the default is generation 0 (ie the most current version of the GDG). For example, to process generation 22, enter getcpn.sh nsw G0022V00. The getcpn.sh script will download the file from the mainframe and call the CompenSPA script, which will trigger the compensation jobs inside FINNET.
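For example (typed at the UNIX prompt on node 17, logged on as compospa) :-

# process the current generation (generation 0) of the NSW file
/home/compospa/getcpn.sh nsw

# reprocess a specific generation (here generation 22) of the NSW file
/home/compospa/getcpn.sh nsw G0022V00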

    5) Troubleshooting

NOTE : the diagnostic files created by script getcpn.sh are /home/compospa/SAPftp.LOG and /home/compospa/mfxfer...

These files will capture any problems which occur during the ftp (eg host unreachable, authority problems with the file/user, etc).
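A quick way to scan the main diagnostic file for the more common ftp failures is sketched below; the exact message texts vary between failures, so adjust the search patterns as required :-

# Sketch only - run on node 17 as compospa
egrep -i "unknown host|not connected|login incorrect|denied|timed out" /home/compospa/SAPftp.LOG | tail -20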

    If a user informs you that compo batches have not appeared in FINNET, do the following :-

5.1) Check table ZINTFCNTRL
Use txn se16 to check the entries in ZINTFCNTRL for the date of transfer. For example, if you are missing batches from files downloaded on 27 April, 1999 then enter *19990427* in the PATHNAME field.

If the entry in ZINTFCNTRL is missing then the getcpn.sh script can be rerun for the file/s in question (refer to Rerunning the compo jobs above). Missing files in ZINTFCNTRL can be caused by network problems or unavailability of the mainframe.


If the relevant entry in ZINTFCNTRL has a status of Completed then go to step (5.2) (Check the spooler).

5.2) Check the spooler (txn sp01)
To view the reports generated by ABAP ZCPNDALY, use txn sp01 and in the third field of the Spool Request Name enter *ZCPNDALY*. Put * in the User Name field and enter the download date in the From Date field. Click once on the report you wish to view and then click on the glasses icon (or use PF6).

    Often the users just check for batch input sessions (txn sm35) and assume that the file was not processed if there is no bdc session. If the file did not contain any data to process (which happens often for the Saturday files) then the file is processed (the run number table, T912, is updated and the ZINTFCNTRL table entry has a status of Completed) but no bdc session is created - this is the correct result and the user needs to be educated in how the process works. In these cases the spool report will have

    The number of cases processed = 0

    in the Summary of the ZCPNDALY report.

If the spool report shows a run number problem (Run No is Out of Sequence message) then go to step (5.4) (Fixing a run number problem).

    The spool report may contain the line: The spool request is empty. This implies that a file which already has a status of Completed in table ZINTFCNTRL is being reprocessed. This happens when we are reprocessing files after a run number problem has been fixed (remember that files which complete with a run number problem still have a status of C in ZINTFCNTRL). Once a run number problem is fixed any entries in ZINTFCNTRL for files which were affected by the problem need to be deleted before the files are reprocessed.

5.3) Check for dumps (st22)
If no report is produced in sp01, there may have been a problem with the format of the input data (eg fields out of alignment so that characters are trying to be written to a numeric field) or similar problems. Use txn st22 to look for dumps (the user will be Z_INTF_BACK).


5.4) Fixing a run number problem

If the spool report contains the message Run No is Out of Sequence, this means that there has been some mismatch between the expected run number in table T912 and the run number in the header of the file being processed. This can happen if the file on the mainframe has not yet been created/transferred from the compensation system. In this case the getcpn.sh script picks up the latest version (ie generation 0) which is actually the file from yesterday, and we get a run number mismatch.

To fix a run number problem do the following :-
- log onto node 17 as compospa

- do an ls -altr for the date concerned, eg if you are checking the files ftp'd on 28/04/1999 you would do ls -altr *19990428*. There should be 2 files (one for nsw and one for qld).

- take note of the generation number which is part of the file name

- find out the run number of the downloaded file by doing head -1 <filename>

- log onto TSO and go to the Dataset Utility List (option 3.4)

- in the Dsname level field enter XXXX (without the quotes)

- from the results list you can see the latest generation available (ie the highest GxxxxV00 number)

- if the gen on the mainframe is greater than the gen in UNIX then the mainframe file must have been created after the getcpn.sh script was run. If this is the case, just rerun the getcpn.sh script to pick up the latest generation, reprocessing all affected files as required. You may need to specify the generation number to pick up when calling the getcpn.sh script (refer to Rerunning the compo jobs).

- if the gen on the mainframe is the same as the gen in UNIX then this implies that a file is missing (ie for some reason it was not transferred from the compensation system to the mainframe). To check this (in TSO 3.4 or at the UNIX level) compare the run numbers in the headers of the latest 2 generations (a quick way to do this at the UNIX level is sketched after these steps). If they are not sequential then this confirms that a file is missing. If this is the case, contact someone in the compensation area (Thao Nguyen or Henry Tabisz). Once they have got the missing file to the mainframe you can then run getcpn.sh (specifying the generation number) to process the file (refer to Rerunning the compo jobs).

- once a run number problem has been fixed, all files which ended with a Run No Out of Sequence message must be reprocessed. To do this you must first delete the corresponding entries from the ZINTFCNTRL table. You can then run getcpn.sh, specifying the relevant generation number (refer to Rerunning the compo jobs).
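The UNIX end of the run number comparison can be done with a couple of commands. A sketch (run on node 17 as compospa), assuming the run number sits in the first line of each downloaded file as described above :-

# Sketch only - show the header (run number) of the two most recent NSW downloads
cd /home/compospa
ls -tr cpnaccnsw.* | tail -2 |
while read F
do
    echo "$F : `head -1 $F`"
done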


Handy SAPNET Aliases

    Here are some handy aliases to use in SAPNET (service.sap.com).

Using the aliases saves you from having to follow a long and complicated path from the SAPNET home page to find the page you want. To use an alias, just specify service.sap.com/<alias>, for example, service.sap.com/notes, in the location field from within Netscape Navigator (no need to specify http).

If you get the error message 'Authorization failed. Retry?', chances are that you have misspelt the alias name.

    notes (se