
Taxonomy KT

FSR
Taxonomy - Understanding

Project: FSR
Document: KT – Understanding document
Subject: Taxonomy

Prepared by: Rajesh Narayanan
Date: 2007-04-23
Version: Draft 0.2


Version History:

Version   Date     By   Changes
0.1       23 Apr        Initial version

Contributors:

Name             Role   Location   Remarks
Santhosh Eturi
Akshatha Hegde
Balaji
Gangadhar
Uthranarayani


Contents

1 OVERVIEW
2 PREREQUISITE
3 DATA OBJECTS
3.1.1 T_TAX_TAX
3.1.2 V_T_TAX_TAX_ACT
3.1.3 T_TAX_ENGINE_DASHBOARD
3.1.4 T_TAX_ERROR_CATALOG
3.1.5 T_TAX_WORK
3.1.6 T_TAX_WORK_II
3.1.7 T_TAX_STD_NAMING_ABBEV
3.1.8 T_TAX_NULL_CHK_STR_COMP_VAL
3.1.9 T_TAX_CRS
3.1.10 T_TAX_TASK_WORK
3.1.11 V_T_TAX_WORK
4 ETL1 HIGH LEVEL ARCHITECTURE FOR FSR FRAMEWORK
4.1.1 Control Flow
4.1.2 Metadata
5 PORTAL PROCESSING FOR FSR FRAMEWORK
5.1.1 wf_PORTAL_CM_SUBSCRIBER
5.1.2 wf_Start_PORTAL_MAN_1
5.1.3 wf_Start_PORTAL_FLA_1
5.1.4 wf_Start_PORTAL_NFI_1
5.1.5 s_PORTAL_PUBLISHER_SAC
6 ETL2 HIGH LEVEL ARCHITECTURE FOR FSR FRAMEWORK
6.1.1 Control Flow
6.1.2 SAP RFC
7 ENGINE LEVEL DESCRIPTION
7.1.1 Pre-Formatting Engine
7.1.2 Derivation Engine
7.1.3 Calculation Engine
7.1.4 Validation Engine
7.1.5 Remapping and Reclassification Engine
8 SCRIPTS
8.1.1 gw_start_seq_tasks.sh
8.1.2 gw_start_single_task.sh


8.1.3 restart_workflow.sh
8.1.4 start_workflow.sh
8.1.5 move_file.sh
8.1.6 portal_upload.sh
8.1.7 attach_and_send_mail.sh
8.1.8 add_timestmp.sh
8.1.9 gw_push_tlr_bw.sh
9 RFC
9.1.1 SAP RFC Derivations
9.1.2 SAP RFC/FSI Validation Engine
9.1.3 RULE_TEXT
9.1.4 SAP R/3 & BW TLR Engine
10 REFERENCE DATA FROM SAP TO ETL ENVIRONMENT (LOADING DATA FROM SAP TABLES INTO ORACLE)
10.1.1 Creation of Source from SAP database
10.1.2 Creation of Target from SAP database
10.1.3 Updation of Reference Data Mapping
10.1.4 RFC – PRACTICAL SESSION
11 LOGGING MECHANISM
11.1.1 Database Level Logging
11.1.2 Script Level Logging
12 TECHNICAL ERROR HANDLING AND TIVOLI ALERTING
12.1.1 Technical Error Handling
12.1.2 Tivoli Alerting


1 Overview:

The FSR framework uses a database-driven change control mechanism: business users enter requests in a set of predefined tables and raise a change request via SharePoint, which is communicated to the ETL team. Upon receiving the communication and the approval of the change request, the CR coordinator analyses the requirement, synchronises it from the work area to the live area, and then notifies the ETL development team about the new request. This document outlines the understanding gained by the KT team.

2 Prerequisite:

SharePoint access
Access to the GII_OWNER@FEED_D.WORLD schema

3 Data objects:

Data objects pertaining to Taxonomy can be classified as Master and Transaction.

Master:
T_TAX_ENGINE_DASHBOARD
T_TAX_ERROR_CATALOG
T_TAX_STD_NAMING_ABBEV

Transaction:
T_TAX_TAX
V_T_TAX_TAX_ACT
T_TAX_WORK
T_TAX_WORK_II

3.1.1 T_TAX_TAX

This table contains all the versions of the rules. The removed flag signifies that the rule has been removed, and the CHANGED_CONTROL field signifies when the rule was added. The RULE_ENGINE_REG_REF field describes the component in which the rule has to be applied.

The ETLGRKEY tells us the order in which the rules have to be applied for deriving the OBJECT_BUSINESS_NAME field. There can be more than one rule with the same ETLGRKEY, which denotes that they belong to the same group.

The FIELD_PROCESSING_ORDER, OBJECT_BUSINESS_NAME and RULE_ENGINE_REG_REF fields together give the order in which the output field should be derived for a particular component.


The ERROR_EXCEPTION field is the error code; the corresponding reference (error message) is provided in the T_TAX_ERROR_CATALOG table.

3.1.2 V_T_TAX_TAX_ACT

This view contains the latest version of each rule. To derive the mapping logic applied on a field for a particular component, we need to filter the data based on rule_engine_reg_ref and object_business_name, and then sort the result in ascending order of field_processing_order.

Select * from v_t_tax_tax_act where rule_engine_reg_ref = '<engine name>' and object_business_name = '<output field name>' order by field_processing_order

Eg: Select * from v_t_tax_tax_act where rule_engine_reg_ref = 'B4.3.1_CORE_ALL' and object_business_name = 'GDS REMAINING TERM' order by field_processing_order

3.1.3 T_TAX_ENGINE_DASHBOARD

This table contains all the engines which are referred in the rule_engine_reg_ref field of

T_TAX_TAX table. This table also specifies whether a particular engine is a derivation

engine or validation engine.

3.1.4 T_TAX_ERROR_CATALOG

This table contains all the error messages for the error_exception field in the T_TAX_TAX table. It also contains the details of the error level (Document, Field, Logic and Message). In addition, the region-wise classification of the error is stored in this table.

3.1.5 T_TAX_WORK

This table stores all the rules which violate the integrity constraints or the standard naming conventions, which can be referred from the T_TAX_STD_NAMING_ABBEV and T_TAX_ERROR_CATALOG tables.

The ERROR_COMMENT field gives the exact reason why the particular rule has been included in the T_TAX_WORK table, such as 'INPUT/OUTPUT BUSINESS NAME not in standard naming convention' or 'error code mentioned by the business is not in the T_TAX_ERROR_CATALOG table'.


The standard naming conventions table contains only the names to be followed for input and output business names, and the error codes are validated against the ERROR CATALOG table.

Future usage of this table is to be finalised.

3.1.6 T_TAX_WORK_II

This table contains all the rules which are present in V_T_TAX_TAX_ACT, minus the rules with a removed flag of Y, minus the records in T_TAX_WORK.

i.e. T_TAX_WORK_II = {V_T_TAX_TAX_ACT} – {T_TAX_TAX (REMOVED = ‘Y’)} – {T_TAX_WORK (ERROR_EXCEPTION)}
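Read as SQL, the set expression above could be sketched as the MINUS query below. This is illustrative only: the use of RULENO as the identifying column is an assumption, and the real population of T_TAX_WORK_II is handled by the synchronisation process.

-- Sketch: rules still in the work area = active rules, minus removed rules, minus rules parked in T_TAX_WORK
select RULENO from V_T_TAX_TAX_ACT
minus
select RULENO from T_TAX_TAX where REMOVED = 'Y'
minus
select RULENO from T_TAX_WORK;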

Update – Modification to the existing rule

Whenever a rule has to be modified, business users set the modified flag to Y and the rule is then moved from the work area (T_TAX_WORK_II) to the live area (T_TAX_TAX).

Delete – Denotes that the rule is no longer needed

When a rule is to be removed, the business users set the removed flag to Y; the record is moved to the live area and removed from the T_TAX_WORK_II table.

Note: This table shouldn’t contain any records with the removed flag set to Y.

3.1.7 T_TAX_STD_NAMING_ABBEV

This table contains the naming conventions to be followed for input and output business names.

3.1.8 T_TAX_NULL_CHK_STR_COMP_VAL

Not sure whether this table is being used or not.

3.1.9 T_TAX_CRS

Not used any longer

3.1.10 T_TAX_TASK_WORK

Not sure whether this table is being used or not.


3.1.11 V_T_TAX_WORK

Not sure whether this table is being used or not.

Taxonomy Procedure (as is)


Q and A

1) If the initial rule has been changed, will a new record be inserted into the table? If yes, then what is the purpose of the field CHANGE_TEXT?

No, this column is not used any longer.

2) Will we have the same rule number if the rule has to be applied to 2 different components?

T_TAX_TAX may contain the same rule number more than once, but the view contains only one rule number per component. We can query T_TAX_TAX by grouping by rule number and taking the maximum id, as sketched below.
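A minimal sketch of that query (the column names RULENO and ID are assumptions, not confirmed in this document):

select RULENO, max(ID) as MAX_ID
from T_TAX_TAX
group by RULENO;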

3) What is the purpose of REF_1,REF_2,REF_3 in the table?

These fields are not used any longer

4) Why do we use the PAC_?????? Fields?

5) What is ETLGRKEY?

ETL GROUP KEY

6) Is there a chance that the OUTPUT_PHYSICAL_NAME used will be different for the same rule (e.g. 2770)?

Yes, if the rule affects more than one field.

7) What does the ERROR_COMMENT column specify? Does this ERROR_COMMENT have any relationship with ERROR_IDs?

No, this is an independent field; it gives the reason why the field is in this table and not in the model.


4 ETL1 High Level Architecture for FSR Framework

This section portrays the architecture of the FSR framework ETL1 process at a high level.

Fig: ETL1 High Level Architecture Diagram for FSR Framework (diagram not reproduced here). It shows source data from the GFIS tables (ETL DB) flowing into the ETL1 FSR framework and through its engines – Reference Data, Pre-processing, Validation, Derivation, Calculation, Balance Control and Reconciliation, Remapping and Reclassification, Error and Exception Processing, and Target Load Ready – driven by T_GW_STP_STARTUP (GII DB), and finally to the target system.


The FSR framework process begins once the run for the individual feed is completed

and the data is available in the corresponding GFIS table in the ETL environment. The value

‘Y’ in DATA_PRESENCE_FLAG column in the T_GW_STP_STARTUP table indicates that

the data is available in the corresponding GFIS table and it is ready for further processing.

This source data from GFIS table is taken and it is directly loaded in the

T_GW_TMP_PROCESSED_DATA table in the GII environment with a STAGE_NO value ‘0’.

Every row that is populated in this T_GW_TMP_PROCESSED_DATA table will acquire a

unique RECORD_ID.

The records with STAGE_NO ‘0’ are taken for a particular PRC_PROCESS_ID and

FEED_ID and are allowed to traverse through the various engines present in the framework.

There are 9 processing engines (shown in the diagram above), and each engine comprises many components.

The input record is propelled through each component, and the respective output record is stored in the same T_GW_TMP_PROCESSED_DATA table with the STAGE_NO increased by 1 for every stage. Throughout this process the PRC_PROCESS_ID, RECORD_ID and FEED_ID remain the same for a particular record.

On completion of the first 8 processing engines (from the diagram above), the ‘Target Load Entry’ engine, which is the final engine in FSR, starts processing. This Target Load Entry engine selects the records with the maximum STAGE_NO for a particular PRC_PROCESS_ID and FEED_ID and makes them available for generating the target file.
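As a hedged illustration (column names are taken from the description above; the actual SQL used by the Target Load Entry engine may differ), the selection could look like:

select *
from T_GW_TMP_PROCESSED_DATA
where PRC_PROCESS_ID = :process_id
  and FEED_ID = :feed_id
  and STAGE_NO = (select max(STAGE_NO)
                  from T_GW_TMP_PROCESSED_DATA
                  where PRC_PROCESS_ID = :process_id
                    and FEED_ID = :feed_id);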

The target file is then generated for the respective target systems depending on the

requisite file format.

4.1.1 Control Flow

Informatica – we have two types of workflows in Informatica:

A) Run always
B) Run once

A run-always workflow runs continuously. The first session “s_WaitSession” in the workflow internally polls the startup table for the data_presence flag and triggers the processing if the flag is set to ‘Y’ for the Template, Interface and System defined in the parameter file. If the flag is set to ‘N’, it waits for the period of time defined in the parameter file as wait_time and polls again after the wait is over, until it finds the data_presence flag set to ‘Y’.


Once the data_presence flag is ‘Y’ in the T_GW_STP_STARTUP table, the next session “s_StartProcess” starts; it first creates a process id for the given System and Interface in the table T_GW_PRC_PROCESS. It also creates a partition in the temp table for that particular process id.

The next session creates the execution order list and parameter file for all the subsequent child processes.

“s_StartTasks” is a command task which invokes all child processes (Run once) through a

script which does the actual data loading.

“s_CloseProcessingSeq” session does the post processing update.

Example:

Below is an explanation of the ETL1 process for EP.

The workflow “wf_Start_WF_EP” is scheduled to run continuously. The parameter file for the workflow is “$PM_HOME/PrmFiles/gw_apac_1/StartWF_EP.prm”, which is a static parameter file.

The first session of the workflow, “s_WaitSession”, polls for the value ‘Y’ in the DATA_PRESENCE_FLAG column of the T_GW_STP_STARTUP table, which indicates that the data is available in the corresponding GFIS table and is ready for further processing. This session uses the following values for its parameters:

$$SystemName=EP

$$InterfaceName=ACC

$$TemplateName=APAC_1

$$SleepTime=3

$$TimeOut=0

Internally the session invokes the procedure ‘P_WAIT_ON_FILE’, which takes the System name, Interface name and Template name as inputs and checks the DATA_PRESENCE_FLAG for that combination until it finds the flag value ‘Y’, which triggers the next session in the workflow.

Session ‘s_StartProcess’ makes an entry in the T_GW_PRC_PROCESS table with the new process_id to be used during the entire processing of the current feed. The process_id is derived as max(previous id) + 1. A partition based on the process_id is created in the temp table. This session uses ‘F_GET_PROCESS_ID’ and ‘F_CREATE_TMP_PARTITION’ to achieve this.
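Conceptually, the max(previous id) + 1 derivation can be sketched as below (illustrative only; the column name PROCESS_ID is an assumption, and the real logic lives inside F_GET_PROCESS_ID, which also inserts the new row, while F_CREATE_TMP_PARTITION creates the partition):

-- Sketch of deriving the next process id
select nvl(max(PROCESS_ID), 0) + 1 as NEW_PROCESS_ID
from T_GW_PRC_PROCESS;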

The next session, ‘s_GenerateFiles’, generates the dynamic parameter file that will be used in the next shell script. File names must correspond to the names defined in the parameter file. This task also generates an execution order list for all the tasks which will be executed for the current system.


The command task ‘s_StartTasks’ is used to execute the script ‘gw_start_seq_tasks.sh’ at ‘$PM_HOME/Scripts/gw_core/’, which eventually triggers all the child processes for the feed. The parameter file (generated dynamically by the previous session) to be used by all the child processes is given as an argument to the script with the full path. All server and database parameters are taken from the $PM_HOME/PrmFiles/gw_apac_1/Infa.txt file.

The script takes the execution order list, reads it in a loop row by row and processes each task. In order to process a task it updates the status of the task in the table T_GW_STP_TASK_RUN, setting the TASK_STATUS_FLAG to ‘Y’ so that no other process can run the same task until the current execution is over. To achieve this the script internally calls the procedure P_SET_TASK_RUN_IND, which in turn checks the current status and updates the flag accordingly. To run a particular workflow there needs to be at least one task corresponding to that workflow with the status ‘N’.
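A simplified sketch of what P_SET_TASK_RUN_IND effectively does against T_GW_STP_TASK_RUN (illustrative only; the TASK_ID filter column is an assumption, and the real procedure in PKG_GATEWAY also handles concurrency and error cases):

-- Claim one free task instance for the given task by flipping its flag from 'N' to 'Y'
update T_GW_STP_TASK_RUN
set TASK_STATUS_FLAG = 'Y'
where TASK_ID = :task_id
  and TASK_STATUS_FLAG = 'N'
  and rownum = 1;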

Once a free task is found, the script starts the workflow using the pmcmd command. The status of the workflow is monitored until the end of its processing.

After the completion of the task, the run indicator in the T_GW_STP_TASK_RUN table is reset to ‘N’ using the procedure ‘P_UNSET_TASK_RUN_IND’.

The shell script also checks the workflow execution status (pmcmd getworkflowdetails); if the execution status is not “Succeeded”, the shell script returns a non-zero error code.

The last session in the workflow, ‘s_CloseProcessingSeq’, is used for all post-loading tasks, i.e. it prepares source and suspended records for erroneous records, and copies the content of the partition (in T_GW_TMP_PROCESSED_DATA) to a temporary table before dropping the partition. Once the data is copied from this temporary table to the target table, the temporary table can be dropped. The session calls the function ‘F_STORE_TMP_PARTITION_DATA’ for this purpose.

In case of failures in “wf_Start_WF_EP” or inside the command task ‘s_StartTasks’, the Tivoli worklet will be executed and a Tivoli alert will be raised. After successful completion of the last session, “wf_Start_WF_EP” will be restarted by the scheduler.

4.1.2 Metadata

The metadata entries should be made in the following tables before the ETL processing starts. Once the entries are made in the respective tables, the process kicks off manually or automatically, based on the process type.

The T_GW_STP_TEMPLATE table contains details about the location with which its corresponding feeds are associated. An initial entry is made in this table once feeds for a particular location start arriving. Each and every template has a unique TEMPLATE_ID, which is generated internally by taking the maximum available TEMPLATE_ID + 1.


Sample Data:

TEMPLATE_ID TEMPLATE_NAME DESCRIPTION

4 KT KT

The T_GW_STP_SYSTEM table contains the details of all the feeds corresponding to a

particular location. Each and every feed has a unique SYSTEM_ID which is generated

internally by taking the maximum available SYSTEM_ID + 1. A new entry should be made on

arrival of a new feed for any particular location.

Sample Data:

SYSTEM_ID SYSTEM_NAME DESCRIPTION

123 B.KT KT

The T_GW_STP_INTERFACE table contains the details of all the different interface types. Only one entry is made for a particular interface type. If any new interface is encountered in any of the new feeds, then a fresh entry is made in this table by taking the maximum INTERFACE_ID + 1.

Sample Data:

INTERFACE_ID INTERFACE_NAME DESCRIPTION

12 KT KT

These 3 tables (T_GW_STP_TEMPLATE, T_GW_STP_SYSTEM and T_GW_STP_INTERFACE) are not dependent on each other; each table holds its own level of detail. These tables are related in the PROCESS tables.

The T_GW_STP_COMPONENT table contains the details about all the components involved in the FSR architecture (i.e. the business engines). The mapping-level detail for every component is stored in this table. Each and every component has a unique COMPONENT_ID, which is generated internally by taking the maximum available COMPONENT_ID + 1. Hence a new entry should be made if any new component is added to the existing architectural design.

Sample Data:

COMPONENT_ID COMPONENT_NAME DESCRIPTION

75 B.KT KT


The T_GW_STP_TASK table contains the details of all the available workflows. If there are

multiple copies of the same workflow that is sourced on the same component, there should

be multiple entries in this task table. Every workflow is treated as an individual task. Each

and every task has a unique TASK_ID which is generated internally by taking the maximum

available TASK_ID + 1. Hence a new entry should be made if any new workflow is created.

Sample Data:

TASK_ID TASK_NAME FOLDER_NAME DESCRIPTION

76 wf_KT KT KT

Now, the T_GW_STP_PROCESSING_SEQ table needs to be updated with the SYSTEM,

INTERFACE and TEMPLATE ID details along with the BATCH_ID_TYPE column which

indicates whether the process is manual (M) or an automated (A) one.

Sample Data:

SYSTEM_ID INTERFACE_ID TEMPLATE_ID DESCRIPTION BATCH_ID_TYPE

123 12 4 KT A

Once the details are entered in the above tables, we move on to fill in the details for the T_GW_STP_PROCESS table. Here the SYSTEM, INTERFACE, TEMPLATE and COMPONENT IDs are entered, through which the engines are related for a particular process. These 4 keys together form a composite key, for which a unique PROCESS_ID is assigned by taking the maximum PROCESS_ID + 1.

Sample Data:

PROCESS_ID SYSTEM_ID INTERFACE_ID TEMPLATE_ID COMPONENT_ID DESCRIPTION

62 123 12 4 75 KT

For every TASK_ID pertaining to a particular PROCESS_ID, an entry is made in the

T_GW_STP_TASK_INSTANCE table, with the order of execution of the workflows defined in

EXECUTION_ORDER column. Every entry made in this table requires a unique

TASK_INSTANCE_ID which is generated from maximum TASK_INSTANCE_ID + 1.


Sample Data:

TASK_INSTANCE_ID  PROCESS_ID  TASK_ID  EXECUTION_ORDER  DESCRIPTION  MANDATORY_EXEC
77                62          76       1                B.KT         N

Once the values are entered in the above mentioned tables, the V_GW_SYSTEM_SETUP

view gets updated with corresponding details. Unless and until data is entered in

T_GW_STP_TASK_INSTANCE table, the values will not be reflected in this view.

Sample Data:

The below data is generated from the details in the above tables.

EXECUTION_ORDER  FOLDER_NAME  TASK_NAME  TASK_ID  COMPONENT_ID  SYSTEM_ID  INTERFACE_ID  TEMPLATE_ID  TASK_INSTANCE_ID
1                KT           wf_KT      76       75            123        12            4            77

Once the DATA_PRESENCE_FLAG in the T_GW_STP_STARTUP table is set to ‘Y’

indicating the availability of source data, the ETL level-1 process starts based on the order of

execution present in the V_GW_SYSTEM_SETUP view.
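To see which tasks will run and in what order for a given feed, the setup view can be queried as in the sketch below (the filter values are taken from the sample data above; this is an illustrative query, not one mandated by the framework):

select EXECUTION_ORDER, FOLDER_NAME, TASK_NAME, TASK_ID
from V_GW_SYSTEM_SETUP
where SYSTEM_ID = 123
  and INTERFACE_ID = 12
  and TEMPLATE_ID = 4
order by EXECUTION_ORDER;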

Error Handling:

We have a set of tables, T_GW_ERR_*, for error handling. The errors encountered are logged separately to a system called PORTAL. Errors are categorised as below:

1 – Message
2 – Soft Error
3 – Hard Error

Based on the error category there are different ways of handling it. Hard errors will stop processing; soft errors and messages will be posted to SAP.

5 PORTAL processing for FSR Framework

The scope of this document is to define the error handling framework for any kind of error which occurs during ETL1 processing.

During ETL1 processing, the records which fail any kind of verification or validation test are uploaded to the system called PORTAL. The upload to PORTAL is an automated process during ETL1 processing.


The following are the two types of files uploaded to PORTAL:

1) SAC_Etl_2_Portal_<2696>.err – If a hard error occurs during the processing of automated feeds, the whole data set will be rejected and no errors will be transferred to PORTAL. If no hard error occurs, the file will be transferred to SAP, and the errors and the SAC will be sent to PORTAL. SAC contains the data which are defaulted in automated feeds to hit the SAC account in SAP. The inserted default values will then be corrected manually in PORTAL-SAC and sent again to SAP.

2) Etl2Portal_<2147>.err – These files contain only the errors, not the data records. We will not send any data files back to the portal in PORTAL-MAN.

Once the file has been uploaded successfully to the portal, the PORTAL team manually corrects all the records. When the corrected file is available at the portal, a JMS control message is put on the message channel and picked up by the subscriber that subscribes to that message. JMS queues are used for this purpose; two dedicated queues were created for ETL-to-Portal and Portal-to-ETL communications.

5.1.1 wf_PORTAL_CM_SUBSCRIBER

The workflow ‘wf_PORTAL_CM_SUBSCRIBER’ is a subscriber for the message on the JMS channel; it takes the message from the channel as soon as it is available and updates the table ‘T_GW_PORTAL_CONTROL_MESSAGE’ with the values from the control message. The workflow uses the procedure ‘F_GET_MESSAGE_ID’ for this purpose.

The parameter settings for the above workflow are in the parameter file ‘$PMRootDir\PrmFiles\gw_apac_1\PortalCMSubscriber.prm’ with the following values:

[GW_APAC_1.WF:wf_PORTAL_CM_SUBSCRIBER.ST:s_PORTAL_CM_SUBSCRIBER]
$$TemplateName=APAC_1

[GW_APAC_1.WF:wf_PORTAL_CM_SUBSCRIBER.WT:DBF_WL_TIV_WFLOWERROR.ST:DBF_SESS_TIV_WFLOWERROR]
$$WFLOW='WF_PORTAL_CM_SUBSCRIBER'

Once the update to the ‘T_GW_PORTAL_CONTROL_MESSAGE’ table is done by the above described workflow, the next workflow is executed based on the values in the table.

To decide which workflow to run next, we have the following REQUEST_TYPE values in the T_GW_PORTAL_CONTROL_MESSAGE table:

1) FLA
2) MAN


3) NFE

5.1.2 wf_Start_PORTAL_MAN_1

Let us consider a scenario where the REQUEST_TYPE is MAN; one of the workflows wf_Start_PORTAL_MAN_<1-5> will be executed based on availability.

s_CreatePortalParamFile

This session creates the parameter file for the rest of the processing.

The default value of the ‘PROCESSING_STATUS’ column in the ‘T_GW_PORTAL_CONTROL_MESSAGE’ table is ‘W’, which means a workflow looking for this value will start processing.

The function ‘F_WAIT_ON_MESSAGE’ checks the ‘T_GW_PORTAL_CONTROL_MESSAGE’ table for the combination of ‘System_Id’, ‘Interface_Id’ and ‘Template_Id’, and if the value of the ‘PROCESSING_STATUS’ column is ‘W’ it creates one process id to be used for the entire portal processing. The value of the ‘PROCESSING_STATUS’ column is then set to ‘S’ and a partition with the process_id is created in the temp table.

Possible statuses for the field PROCESSING_STATUS:

W = “Wait”: the message is in the queue and waits for a free workflow to process it.

S = “Started”: the message has been started and a workflow is running which processes it. (Note: if the processing fails, the status stays on “S”.)

A = “Abort”: this flag stops the whole processing immediately. (Can only be set manually.)

C = “Completed”: this is set after the processing has finished and a corresponding message has been sent to the portal.
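As an illustrative sketch (column names as used in the text above; the real check is performed inside F_WAIT_ON_MESSAGE), the function effectively looks for a waiting message such as:

select *
from T_GW_PORTAL_CONTROL_MESSAGE
where SYSTEM_ID = :system_id
  and INTERFACE_ID = :interface_id
  and TEMPLATE_ID = :template_id
  and PROCESSING_STATUS = 'W';
-- once picked up, PROCESSING_STATUS for that message is set to 'S'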

s_StartPortalInstance

This command task is used to start the workflow required for the PORTAL data load to SAP/BW. The script is at $PM_HOME/Scripts/gw_core/gw_start_single_task.sh. The workflow parameter name has to be provided as a parameter to this script.

s_ClosePortalProcessing

This session checks the record count of the records processed based on the process id and also checks whether there are erroneous records for that particular process id. Any record in the table T_GW_ERR_RECORD_HDR is counted as an error.

It also updates the status flag in the status table to ‘C’ if the process completed successfully, or to ‘E’ in case any error occurred.


5.1.3 wf_Start_PORTAL_FLA_1

Let us consider a scenario where the REQUEST_TYPE is FLA; the workflow wf_Start_PORTAL_FLA_1 will be executed based on availability.

s_CreatePortalParamFile

This session creates the parameter file for the rest of the processing.

The default value of the ‘PROCESSING_STATUS’ column in the ‘T_GW_PORTAL_CONTROL_MESSAGE’ table is ‘W’, which means a workflow looking for this value will start processing.

The function ‘F_WAIT_ON_MESSAGE’ checks the ‘T_GW_PORTAL_CONTROL_MESSAGE’ table for the combination of ‘System_Id’, ‘Interface_Id’ and ‘Template_Id’, and if the value of the ‘PROCESSING_STATUS’ column is ‘W’ it creates one process id to be used for the entire portal processing. The value of the ‘PROCESSING_STATUS’ column is then set to ‘S’ and a partition with the process_id is created in the temp table.

s_StartPortalInstance

This command task is used to start the workflow required for the PORTAL data load to SAP/BW. The script is at $PM_HOME/Scripts/gw_core/gw_start_single_task.sh. The workflow parameter name has to be provided as a parameter to this script.

s_ClosePortalProcessing

This session checks the record count of the records processed based on the process id and also checks whether there are erroneous records for that particular process id. Any record in the table T_GW_ERR_RECORD_HDR is counted as an error.

It also updates the status flag in the status table to ‘C’ if the process completed successfully.

5.1.4 wf_Start_PORTAL_NFI_1

Let us consider a scenario where the REQUEST_TYPE is NFI; the workflow wf_Start_PORTAL_NFI_1 will be executed based on availability.

s_CreatePortalParamFile

This session creates the parameter file for the rest of the processing.


The default value of the ‘PROCESSING_STATUS’ column in the ‘T_GW_PORTAL_CONTROL_MESSAGE’ table is ‘W’, which means a workflow looking for this value will start processing.

The function ‘F_WAIT_ON_MESSAGE’ checks the ‘T_GW_PORTAL_CONTROL_MESSAGE’ table for the combination of ‘System_Id’, ‘Interface_Id’ and ‘Template_Id’, and if the value of the ‘PROCESSING_STATUS’ column is ‘W’ it creates one process id to be used for the entire portal processing. The value of the ‘PROCESSING_STATUS’ column is then set to ‘S’ and a partition with the process_id is created in the temp table.

s_StartPortalInstance

This command task is used to start the workflow required for the PORTAL data load to SAP/BW. The script is at $PM_HOME/Scripts/gw_core/gw_start_single_task.sh. The workflow parameter name has to be provided as a parameter to this script.

s_ClosePortalProcessing

This session checks the record count of the records processed based on the process id and also checks whether there are erroneous records for that particular process id. Any record in the table T_GW_ERR_RECORD_HDR is counted as an error.

It also updates the status flag in the status table to ‘C’ if the process completed successfully, or to ‘E’ in case any error occurred.

DBF_WL_TIV_WFLOWERROR

This workflow is used to handle any error which occurred during the current load session, followed by a mail alert sent to the respective system.

5.1.5 s_PORTAL_PUBLISHER_SAC

The s_PORTAL_PUBLISHER_SAC publishes only the JMS Messages to Portal which refers

to the SAC-Error file

S_PORTAL_PUBLISHER publishes the JMS Message after a Protal run back to portal. In

case of errors, in this message will referenced to the according error-file


6 ETL2 High Level Architecture for FSR Framework

ETL2 process is for loading the data from SAP to SAP_BW.

6.1.1 Control Flow

The first session, “s_WaitSession”, in the workflow wf_Start_WF_ETL_2 continuously polls for the data presence flag in the startup table T_GW_STP_STARTUP. We start ETL2 processing by manually setting the data_presence_flag to ‘Y’ for a system id of 99, template id of 1 and interface id of 8. The next session, “s_StartProcess”, creates a process id for the given System and Interface in the table “T_GW_PRC_PROCESS”. It also creates a partition in the temp table for that particular process id.

The execution order list and parameter file for all the subsequent child processes are created by the next session, “s_GenerateFiles”.

The command task “s_StartTasks” invokes all the child processes through a script which does the actual data loading.

The last session, “s_CloseProcessingSeq”, does the post-processing update.

Example:-

The following example shows the ETL2 processing.

The workflow “wf_Start_WF_ETL_2” is started manually by updating the DATA_PRESENCE_FLAG in the ‘T_GW_STP_STARTUP’ table to ‘Y’. The first session of the workflow, i.e. s_WaitSession, recognizes this flag and is ready for further processing.
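A hedged sketch of that manual start (the DATA_PRESENCE_FLAG column and the id values 99/8/1 come from the text above; the exact id column names in T_GW_STP_STARTUP are assumed to follow the naming used in the other setup tables):

update T_GW_STP_STARTUP
set DATA_PRESENCE_FLAG = 'Y'
where SYSTEM_ID = 99
  and INTERFACE_ID = 8
  and TEMPLATE_ID = 1;
commit;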

The parameters used in this workflow are:

$$SystemName=ALL

$$InterfaceName=ETL2

$$TemplateName=APAC_1

$$SleepTime=3

$$TimeOut=0

Internally the session s_WaitSession invokes the procedure ‘P_WAIT_ON_FILE’, which takes the System name, Interface name and Template name as inputs from the parameter file and checks the ‘DATA_PRESENCE_FLAG’ for that combination until it finds the flag value ‘Y’, which triggers the next session in the workflow.

Session ‘s_StartProcess’ makes an entry in the ‘T_GW_PRC_PROCESS’ table with the new process_id to be used during the entire processing of the current feed. The process_id is derived as max(previous id) + 1. A partition based on the process_id is created in the temp table. This session uses ‘F_GET_PROCESS_ID’ and ‘F_CREATE_TMP_PARTITION’ to achieve this.


The next session, ‘s_GenerateFiles’, generates the dynamic parameter file that will be used in the next shell script. File names must correspond to the names defined in the parameter file. This task also generates an execution order list for all the tasks which will be executed for the current system.

The command task ‘s_StartTasks’ is used to execute the script ‘gw_start_seq_tasks.sh’ at ‘$PM_HOME/Scripts/gw_core/’, which eventually triggers all the child processes for the feed. The parameter file (generated dynamically by the previous session) to be used by all the child processes is given as an argument to the script with the full path. All server and database parameters are taken from the $PM_HOME/PrmFiles/gw_apac_1/Infa.txt file. The script takes the execution order list, reads it in a loop row by row and processes each task.

In order to process a task it updates the status of the task in the table T_GW_STP_TASK_RUN, setting the TASK_STATUS_FLAG to ‘Y’ so that no other process can run the same task until the current execution is over. To achieve this the script internally calls the procedure P_SET_TASK_RUN_IND, which in turn checks the current status and updates the flag accordingly. To run a particular workflow there needs to be at least one task corresponding to that workflow with the status ‘N’. Once a free task is found the script starts the workflow using the pmcmd command. The status of the workflow is monitored until the end of its processing.

After the completion of the task, the run indicator in the T_GW_STP_TASK_RUN table is reset to ‘N’ using the procedure ‘P_UNSET_TASK_RUN_IND’. The shell script also checks the workflow execution status (pmcmd getworkflowdetails); if the execution status is not “Succeeded”, the shell script returns a non-zero error code.

The last session in the workflow, ‘s_CloseProcessingSeq’, is used for all post-loading tasks, i.e. it prepares source and suspended records for erroneous records, and copies the content of the partition (in T_GW_TMP_PROCESSED_DATA) to a temporary table before dropping the partition. Once the data is copied from this temporary table to the target table, the temporary table can be dropped. The session calls the function ‘F_STORE_TMP_PARTITION_DATA’ for this purpose.

In case of failures in “wf_Start_WF_ETL_2” or inside the command task ‘s_StartTasks’, the Tivoli worklet will be executed and a Tivoli alert will be raised.

6.1.2 SAP RFC

The ETL2 components listed below are discussed. The purpose of the mappings is to download the RDM data from the SAP database and load it into the GFIS tables in the ETL environment. These updated GFIS tables are used as corrected reference data in ETL2 processing. This can be done using either of two procedures.


1st Procedure: - this procedure is NOT used in PROD

m_LOAD_RDM_DELTA_MRG

This mapping is used to load the data from the feed-specific feed_ids (240, 241, 242, 243 and 239) to the constant feed_id (10021) in the GFIS REFERENCE tables. So this mapping merges several feed ids into one feed id in the same GFIS tables.

m_LOAD_RDM_DELTA_RFC

In this mapping, we call an RFC function which creates the RFC delta files in the SAP server.

The generated delta files will be downloaded to ETL environment for further processing.

Delta generation RFC works in two modes – taking changes (deltas) generated since

previous call (in case of two consecutive RFC calls first call will return some data, but second

call will return no results) or taking all changes for given time period. Parameters used for

RFC call can be set in T_GW_STP_TASK_PARAMETERS table (which tables should be

processed and which mode is used – take only latest changes or changes for given period of

time).

m_LOAD_RDM_DELTA_UPD

The above delta files are used as the source and the data is loaded into the GFIS REFERENCE tables. Before loading the data, we check it and create flags (modified record, or new record manually inserted by an operator in RDM). So we are updating the GFIS tables with changes received via delta files from SAP RDM. The received files have the same structure as the original tables in SAP.

2nd Procedure: - used in PROD

m_LOAD_RDM_FULL

This mapping loads the data into the GFIS REFERENCE tables with feed_id 10021 from the SAP RDM tables. So we take an exact copy from SAP RDM and load this data into the GFIS tables.

SAP RDM Source tables:

YYFGF_GLV4_GCP (Counterparty)

YHF_GCO_SUMMIT (Summit contracts extension table)

YYFGF_GLV4_GCO (Contracts table)

YYFGF_GLV4_GIR (Instruments)

YYFGF_GLV4_GIC (Instruments Cross reference)


Please refer below for the GFIS REFERENCE tables which are referred above.

DBF_GFIS13_T_COUNTERPART_REF

DBF_GFIS13_T_CONTRACT_REF

DBF_GFIS13_T_INS_REF

DBF_GFIS13_T_INS_CROSS_REF

T_CON_ASIA_DATAWAREHOUSE

The mapping between SAP and GFIS fields is provided in the LOAD_ETL2_DATA.doc document. Note that the IDMS and SUMMIT feeds use different fields for saving logically the same CON data in the T_CON_ASIA_DATAWAREHOUSE table; these fields are remapped to a unified structure.

7 Engine Level Description

7.1.1 Pre-Formatting Engine:

In this engine, ETL loads the data from the source to the T_GW_TMP_PROCESSED_DATA table with

stage_no=0. Please refer below for the mappings.

m_LOAD_GFIS_SOURCE_DATA_2_STAGE_0

Loads the source data from the GFIS DDM table (DBF_GFIS13_T_DD_ACC_MOVE) to the T_GW_TMP_PROCESSED_DATA table with stage_no=0.

m_B4.1.1_CORE_SL

Loads the source data from SAP table to T_GW_TMP_PROCESSED_DATA table with stage_no=0

m_B2.1.3_APAC_OFFBL

Loads the source data from SAP and GFIS CON table (DBF_GFIS13_T_CONTRACT_REF) to

T_GW_TMP_PROCESSED_DATA table with stage_no=0

m_B4.1.1_CORE_FLA

m_B4.1.1_CORE_NFE

m_B4.1.1_CORE_MAN

The above three mappings load the data from PORTAL to T_GW_TMP_PROCESSED_DATA table

with stage_no=0.

7.1.2 Derivation Engine:


In this engine, ETL derives the business logic as per the rule mentioned in the Taxonomy. Please refer

below for the list of mappings in Derivation Engine.

Shortcut_to_m_B4.2.1.1_CORE_ALL
Shortcut_to_m_B4.2.1.2_CORE_ALL
Shortcut_to_m_B4.2.2.2_APAC_ALL
Shortcut_to_m_B4.3.1_CORE_ALL
Shortcut_to_m_B4.3.2_APAC_ALL
Shortcut_to_m_B4.4.1_APAC_ALL
Shortcut_to_m_B4.4.1_APAC_MAN
Shortcut_to_m_B4.5.1_APAC_ALL
Shortcut_to_m_B4.5.1_APAC_MAN
Shortcut_to_m_B4.5.2_APAC_ALL
Shortcut_to_m_B4.5.3_APAC_ALL
Shortcut_to_m_B4.5.4_APAC_DW_ACC

Below we explain the flow of one of the Derivation Engine mappings.

Mapping: B4.3.1_CORE_ALL

Please query the database to get the set of rules to be implemented in the component ‘B4.3.1_CORE_ALL’. In this component the GDS attributes are derived as per the Taxonomy rules.

select * from V_T_TAX_TAX_ACT where RULE_ENGINE_REG_REF = 'B4.3.1_CORE_ALL'

T_GW_TMP_PROCESSED_DATA: This is the source in the mapping. We will retrieve the data from

this table with maximum stage_no.

Select * from T_GW_TMP_PROCESSED_DATA where PRC_PROCESS_ID=$$process_id and

stage_no = (Select max(stage_no) from T_GW_TMP_PROCESSED_DATA where

PRC_PROCESS_ID=$$process_id)

mplt_GENERATE_PROCESSING_INFO: This mapplet generates the stage_no of a component in a

process flow. It also loads the processing time etc.

mplt_LINKAGE_ATTRIBUTES_GDS: Most of the logic has been implemented in this mapplet.

Target tables loaded:

T_GW_TMP_PROCESSED_DATA: The derived data is stored in this table by incrementing the

stage_no.

7.1.3 Calculation Engine:


In this engine, ETL compares the debit amount and credit amount based on the group-by conditions mentioned in the taxonomy rules and raises an error message if there is a mismatch. In total there are three mappings in the Calculation Engine. Please refer below for the mappings.

m_B6.1_CORE_ALL

m_B6.1_CORE_MAN

m_B6.2_CORE_ALL

We explain one of the Calculation Engine mappings in this document. The other two mappings follow the same flow. Please confirm.

Mapping: m_B6.1_CORE_ALL

Please query the database to get the set of rules to be implemented in the component ‘m_B6.1_CORE_ALL’:

select * from V_T_TAX_TAX_ACT where RULE_ENGINE_REG_REF = 'm_B6.1_CORE_ALL'

T_GW_TMP_PROCESSED_DATA: This is the source in the mapping. We will retrieve the data from

this table with maximum stage_no.

Select * from T_GW_TMP_PROCESSED_DATA where PRC_PROCESS_ID=$$process_id and

stage_no = (Select max(stage_no) from T_GW_TMP_PROCESSED_DATA where

PRC_PROCESS_ID=$$process_id)

exp_CHECK_RESULTS: Check dr amount = cr amount by Company Code, Currency, GAAP

Indicator, effective date, and where on/off balance sheet indicator <> 3. If dr amount <> cr amount,

then reject feed. NOTE: Check both Base Currency Equivalent and Transaction Amounts. And check

dr amount = cr amount by Company Code, Currency, GAAP Indicator, effective date, and where on/off

balance sheet indicator = 3. If dr amount <> cr amount, then reject feed. NOTE: Check both Base

Currency Equivalent and Transaction Amounts.
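The check described above can be pictured with the following sketch (the debit/credit and group-by column names are placeholders, not the actual port names; the real check is implemented inside the expression transformation, and it is repeated for on/off balance sheet indicator = 3 and for both base currency equivalent and transaction amounts):

-- Groups where debit and credit do not balance; any row returned means the feed is rejected
select COMPANY_CODE, CURRENCY, GAAP_INDICATOR, EFFECTIVE_DATE,
       sum(DEBIT_AMOUNT) - sum(CREDIT_AMOUNT) as DIFFERENCE
from T_GW_TMP_PROCESSED_DATA
where PRC_PROCESS_ID = :process_id
  and ON_OFF_BALANCE_IND <> 3
group by COMPANY_CODE, CURRENCY, GAAP_INDICATOR, EFFECTIVE_DATE
having sum(DEBIT_AMOUNT) <> sum(CREDIT_AMOUNT);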

mplt_GENERATE_PROCESSING_INFO: This mapplet generates the stage_no of a component in a

process flow. It also loads the processing time etc.

un_COLLECT_WIDE_ERRORS: All the fields information will be combined in this union

transformation. Each group in this transformation gives the details of a field.

Target tables loaded:

T_GW_ERR_RECORD_HDR: In this table the record_id which is having error will be stored. It also

stores the PRC_PROCESS_ID and processing time (ITS)


T_GW_ERR_RECORD_ASN: In this table the message_id’s of a record_id will be stored.

(Message_id refers to the field of a record). It also stores the ERROR_ID, COMPONENT_ID and ITS.

T_GW_ERR_MSG_PARAMETERS: In this table the parameter_name and parameter value of a

message_id will be stored.

T_GW_PRC_STAGE: The stage_no of this component will be stored.

Note: Record_id stored in the target is the first record_id of the whole group for a condition.

Questions:

1. Why is the batch_no used in the group-by clause when it is not given in the taxonomy rule? What data will be stored in the batch_no column?

Joerg has confirmed that the batch_no column data will be the same for each process.

2. In m_B6.1_CORE_ALL, the condition “epozflg <> 'R' or epozflg is null” is mentioned in the source qualifier, which is not given in the taxonomy rule. Please check and confirm.

Answers from Matthias Beck:

1. Your understanding of the calculation engines is correct.

2. The btchno is only relevant for Portal processing. I think there should be a prc_process_id instead of btchno, since more than one process id can be processed with one btchno. But anyway, in the where clause the process id with the maximum stage no is already filtering the right data. So the btchno is not really necessary in this case from my point of view. To have 100% clarity, please contact Bojan to confirm that.

3. The epozflg was built in since the requirement came up to process the PORTAL MAN differently when the origination is a SAC message from automated feeds. If this is the case, the balance check should not take place. This was added afterwards and is a special requirement which is not mentioned in the taxonomy, but came up during testing.

Jörg Wunderlich – Question 2:

T_GW_TMP_PROCESSED_DATA:

In the Validation engines which are used in the Portal workflow, the SQL override (especially the where clause) is different from the other engines. Because of the special handling of the reversal rows in the case of SAC revalidation, an additional clause is added here:


Select * from T_GW_TMP_PROCESSED_DATA where PRC_PROCESS_ID=$$process_id and

stage_no = (Select max(stage_no) from T_GW_TMP_PROCESSED_DATA where

PRC_PROCESS_ID=$$process_id)

AND

(T_GW_TMP_PROCESSED_DATA.EPOZFLG <> 'R' OR T_GW_TMP_PROCESSED_DATA.EPOZFLG

is null)

This construct is needed because, if a SAC is sent to the portal, it will automatically create two rows. The first row is the corrected row and the second row is the row with the defaulted values. The second record hits the SAC in SAP and is the contra entry to the original entry coming from automated feeds. This row is marked with an ‘R’ in the attribute EPOZFLG and can be filtered using this attribute.

This filter should only be in Validation engines. The construct is no longer needed in Calculation engines, because all rows are needed there for the balance check. So the B6… Portal engines should not use it. I think at the moment the filter is used in only one B6… engine. But this engine is not used in the Portal workflow, so the filter will not affect anything. For the next release we will remove this filter from all B6… engines.

7.1.4 Validation Engine:

Mapping: B3.1.1_CORE_MAN

Please query the database to get the set of rules to be implemented in the component ‘B3.1.1_CORE_MAN’:

select * from V_T_TAX_TAX_ACT where RULE_ENGINE_REG_REF='B3.1.1_CORE_MAN'

This component is a part of the Validation Engine, in which we perform mandatory checks for the below set of columns:

POSTING DATE

DOCUMENT DATE

POSTING KEY

LINE ITEM TEXT

COMPANY CODE

MANAGEMENT AREA

TRANSACTION CURRENCY CODE

COUNTER

TRANSACTION CURRENCY AMOUNT

T_GW_TMP_PROCESSED_DATA: This is the source in the mapping. We will retrieve the data from

this table with maximum stage_no.


Select * from T_GW_TMP_PROCESSED_DATA where PRC_PROCESS_ID=$$process_id and

stage_no = (Select max(stage_no) from T_GW_TMP_PROCESSED_DATA where

PRC_PROCESS_ID=$$process_id)

$$process_id is an auto generated ID during the processing.

exp_DERIVE_CB_ATTRIB: In this expression the error_code, msg_param and msg_value are

derived.

Ex: ERROR_CODE: IIF(ISNULL(EFFECTIVE_DATE),'201',NULL)

MSG_PARAM11: '$fieldname'

MSG_VALUE11: 'Posting Date'

mplt_GENERATE_PROCESSING_INFO: This mapplet generates the stage_no of a component in a

process flow. It also loads the processing time etc.

un_COLLECT_WIDE_ERRORS: All the fields information will be combined in this union

transformation. Each group in this transformation gives the details of a field.

mplt_ERROR_HANDLING: In this mapplet we will implement the logic (Please refer Appendix for the

logic) and route the data to the Error tables. The inputs are ERROR_CODE, PRC_PROCESS_ID,

STAGE_NO, COMPONENT_ID, RECORD_ID and all the MSG_PARAM’s and MSG_VALUE’s

Target tables loaded:

T_GW_ERR_RECORD_HDR: In this table the record_id which is having error will be stored. It also

stores the PRC_PROCESS_ID and processing time (ITS)

T_GW_ERR_RECORD_ASN: In this table the message_id’s of a record_id will be stored.

(Message_id refers to the field of a record). It also stores the ERROR_ID, COMPONENT_ID and ITS.

T_GW_ERR_MSG_PARAMETERS: In this table the parameter_name and parameter value of a

message_id will be stored.

T_GW_PRC_STAGE: The stage_no of this component will be stored.

NOTE: Every engine will store a new stage_no in this table but Validation engines will never write a

new stage in the T_GW_TMP_PROCESSED_DATA table.

APPENDIX:

The inputs to the mapplet “mplt_ERROR_HANDLING” are ERROR_CODE, PRC_PROCESS_ID,

STAGE_NO, COMPONENT_ID, RECORD_ID and all the MSG_PARAM’s and MSG_VALUE’s.

Initially we filter out the records with a null error_code, and then get the error_id from the T_GW_ERR_CATALOG table by passing the ERROR_CODE.


A function “PKG_GATEWAY.F_GET_MESSAGE_ID” is used to generate the message_id which will

be loaded into T_GW_ERR_RECORD_ASN table for all the error fields of a record_id.

The parameter_name and parameter_value of an error field (message_id) will be loaded into T_GW_ERR_MSG_PARAMETERS.

Comment:

T_GW_TMP_PROCESSED_DATA:

In the Validation engines which are used in the Portal workflow, the SQL override (especially the where clause) is different from the other engines. Because of the special handling of the reversal rows in the case of SAC revalidation, an additional clause is added here:

Select * from T_GW_TMP_PROCESSED_DATA where PRC_PROCESS_ID=$$process_id and

stage_no = (Select max(stage_no) from T_GW_TMP_PROCESSED_DATA where

PRC_PROCESS_ID=$$process_id)

AND

(T_GW_TMP_PROCESSED_DATA.EPOZFLG <> 'R' OR T_GW_TMP_PROCESSED_DATA.EPOZFLG

is null)

This construct is needed because, if a SAC is sent to the portal, it will automatically create two rows. The first row is the corrected row and the second row is the row with the defaulted values. The second record hits the SAC in SAP and is the contra entry to the original entry coming from automated feeds. This row is marked with an ‘R’ in the attribute EPOZFLG and can be filtered using this attribute.

7.1.5 Remapping and Reclassification Engine:

In this engine, ETL checks the threshold limit and defaults the SAC error columns if the threshold limit value is less than the value specified for that specific system.

Below are the mappings for Remapping and Reclassification Engine.

m_B8.6.4.1.ALL

In this mapping, the threshold limit check is implemented. We compare the error record count of the process with the threshold count mentioned in the dbnexus table (MAP_THRESHOLD_PARAMETERS). If the check fails, the error details are logged into the error tables.

Bojan: Threshold can be defined as percentage too. For details please see TDD (Z:\80-Feeds\

GII_Gateway\Global\TDD\General\dbFeeds Technical Design v1.5.doc) Chapter: 14.4 Error

Threshold.
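A simplified sketch of the threshold comparison described above (the THRESHOLD_COUNT and SYSTEM_ID columns on MAP_THRESHOLD_PARAMETERS are assumptions; as Bojan notes, the threshold may also be defined as a percentage, which this sketch does not cover):

-- Returns a row only when the error record count for the process exceeds the configured threshold
select count(*) as ERROR_COUNT
from T_GW_ERR_RECORD_HDR
where PRC_PROCESS_ID = :process_id
having count(*) > (select THRESHOLD_COUNT
                   from MAP_THRESHOLD_PARAMETERS
                   where SYSTEM_ID = :system_id);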


m_B8.2.4.1.ALL

In this mapping, the default values are populated into the SAC error columns. Initially, during validation, the error record default values are inserted into T_GW_ERR_RECORD_DEFAULT. In this engine, ETL looks up the T_GW_ERR_RECORD_DEFAULT table and applies the default values to those error record_ids before posting into SAP. In this process, the original source record and the suspended error record are inserted into T_GW_ERR_RECORD_DTL with RECORD_TYPE_ID=1 and RECORD_TYPE_ID=2 respectively.

Bojan: This is ok, but for further details please see TDD (Z:\80-Feeds\GII_Gateway\Global\TDD\

General\dbFeeds Technical Design v1.5.doc) Chapter: 14.3 SAC Processing

NOTE: The lookups taken on the dbnexus table (DBF_MAP_T_MAPPING) have to be modified. In mapping m_B8.6.4.1.ALL, in the lookup LKP_MAP_THRESHOLD_PARAMETERS_DB_NEXUS, the VALIDTO_OVERRIDE and VALIDFROM_OVERRIDE should be compared with the effective date in the condition tab.

Bojan: As I know this change is already part of the Remediation.

8 Scripts

8.1.1 gw_start_seq_tasks.sh

This script is used to start the processing sequence as per the execution order list.

It takes a parameter file as an input which contains the details about the parameters that are

invoked in the session and workflow level.

The execution order list will be cataloged in a separate file, named as

ExecutionOrderList_<feed_name>.txt

The server and database information is taken from Infa.txt that is placed in the PrmFiles

directory.

The P_SET_TASK_RUN_IND procedure from the PKG_GATEWAY package is called, which

selects the next available task to be run. This sets the TASK_STATE_FLAG of the

T_GW_STP_TASK_RUN table to ‘Y’, indicating that the particular task is currently running.

Once the execution of the task is completed, the P_UNSET_TASK_RUN_IND procedure

from the PKG_GATEWAY package is called, which updates the completed tasks as ‘N’ in the

TASK_STATE_FLAG of the T_GW_STP_TASK_RUN table. This indicates that the execution of the particular task is completed and the task is free for allocation to a new run.


The run status and the errors encountered at every stage are logged in a log file.
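
A minimal PL/SQL sketch of the bracketing calls the script makes around each task (only the package, procedure and table names come from this document; the parameter lists are not documented here and are omitted):

    -- Hypothetical sketch of how a task run is marked in T_GW_STP_TASK_RUN.
    BEGIN
      -- select the next available task and set TASK_STATE_FLAG = 'Y' (task is running)
      PKG_GATEWAY.P_SET_TASK_RUN_IND;

      -- ... the selected task's workflow is executed via pmcmd ...

      -- mark the task as completed: TASK_STATE_FLAG = 'N' (free for a new run)
      PKG_GATEWAY.P_UNSET_TASK_RUN_IND;
    END;
    /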

8.1.2 gw_start_single_task.sh

This script is used to run a single workflow.

It takes 3 input parameters

1. Directory name, where the workflow resides
2. Workflow
3. Parameter file name along with the directory path

The script gives the status of the workflow (running or stopped or execution completed)

8.1.3 restart_workflow.sh

This script is used to restart a workflow in places where a re-run needs to be done.

It takes 6 input parameters.

1. Folder name
2. Workflow name
3. System ID
4. Interface ID
5. Template ID
6. PrcProcess ID

This script invokes gw_start_single_task.sh for a re-run of the same workflow, using the

existing parameters.

8.1.4 start_workflow.sh

This script is used to start the execution of workflows.

The input parameters for this script are

1. Folder name
2. Time to keep the workflow in sleep state before the start of the workflow execution (in secs)
3. File with the workflow names to be executed.

It gets the server and port details from the Infa.txt file

Initially, the specified wait period is taken into account and the workflow execution is kept on hold for that period.


Once the sleep time is over, the execution of each workflow specified in the file starts. Irrespective of the status of the previous workflow, all the workflows specified in the file are started.

8.1.5 move_file.sh

This script is used to move the files to a specified target directory.

It takes 2 input parameters

1. Parameter file name containing details about the files to be moved.
2. Target directory to which the files are to be moved.

The script takes the remote server and remote file details from the parameter file, and moves

them to the specified target directory location.

8.1.6 portal_upload.sh

This script is used to upload the generated files to the specified destination directory, through the secure file transfer protocol (sftp).

It takes 3 input parameters

1. Parameter file name along with its path
2. Destination directory
3. Source directory

The script gets the server and file details from the parameter file. Then it connects to the

destination directory through sftp, and copies the source files to the target location defined.

Once the files are copied, the file permissions are changed to 666 in the destination

directory.

8.1.7 attach_and_send_mail.sh

This script is used to send a file through email to a list of user IDs defined in a group.

It takes 3 input parameters

1. Parameter file name along with its path
2. Filename that has to be attached in the mail
3. Mail subject


8.1.8 add_timestmp.sh

This script is used to append a timestamp (system date) to the filename.

It takes 2 input parameters

1. File name to which the timestamp needs to be attached
2. Folder name in which the file resides

The timestamp that is appended is in YYYYMMDD_HHMMSS format.

8.1.9 gw_push_tlr_bw.sh

This script is used to copy the TLR BW target files to the target (SAP) server. It also creates a parameter file that contains a list of the files that have been pushed to the SAP server.

It takes 4 input parameters

1. Parameter file that contains the SAP server details, along with the directory and file names that need to be pushed.
2. Input data file name, which needs to be renamed and pushed to the SAP server.
3. Input control file name, which needs to be renamed and pushed to the SAP server.
4. Output file name

9 RFC

The RFC functions are implemented in the SAP environment. These RFC functions are

called by the ETL Layer during various phases, by passing the required input parameters.

The usage of RFC in the Derivation, Validation and Target Load Ready (TLR) phases is

explained here.

The RFC input/output parameters and how they are used in ETL are described in separate TDD documents.

The connection string for RFC is set in the mappings' $$SAPCONNECTION parameter; FSR_SAP_R3 should be used for all RFCs.

9.1.1 SAP RFC Derivations

The SAP RFC derivations are included in the following engines.


1. B4.5.2_APAC_ALL (Used for Regional Off-Balance Sheet Derivation)
2. B4.5.4_APAC_DW_ACC
3. m_B4.7.1_APAC_ALL, m_B4.7.1_APAC_ALL_DAILY

The following RFC functions are used for derivations:

Mapping | External object | Description
m_B4.7.1_APAC_ALL, m_B4.7.1_APAC_ALL_DAILY | Y_YFF_GLV4_BWPKEY_BALANCE | Balance as on date for the given BW primary key
m_B4.5.2_APAC_ALL | Y_YFF_GLV4_BW_PWKEY_GEN_MULT | Generate BW primary key
m_B4.5.4_APAC_DW_ACC | Y_YFF_GLV4_GET_WORKDAY | Get next working day via RFC

The BWPKEY_BALANCE and BW_PWKEY_GEN_MULT functions contain table-type parameters and are intended for processing multiple rows, so multiple-stream mappings are used. Intermediate flat files are used for passing parameters between the streams; an indirect flat file source is used for loading the scalar and table data for the second (execution) stream. GET_WORKDAY contains only scalar parameters and is called only once, so a single-stream mapping is used.

The input parameters specified for a particular RFC function are sent to the corresponding RFC function call. The output from the RFC is sent back from SAP to our server. For example, the Y_YFF_GLV4_GET_WORKDAY function is used to calculate the next business day.

The input parameters that are being passed to this function are:

Input to RFC | Corresponding data from T_GW_TMP_PROCESSED_DATA table
I_DATE | PSTNGDTE
I_DIRECTION | Null
I_DAYS | Null
I_CALENDAR1 | 'PH'

The output parameter derived from the RFC function is listed below:

Output from RFC | Return value to T_GW_TMP_PROCESSED_DATA table
E_DATE | YYYYMMDD format

9.1.2 SAP RFC/FSI Validation Engine

The SAP RFC validation engine covers the following components.

1. B3.4.1_APAC_ALL
2. B3.6.4.1.1

The SAP RFC and FSI validations are done in the SAP environment, through RFC calls from the ETL layer. ETL always sends all available fields, but the actual configuration of what is checked and how is maintained on the SAP side. If any errors are detected in the data present in SAP, the error records are sent back. Input records and error records from SAP are linked by the unique ETL record_id (from T_GW_TMP_PROCESSED_DATA). Otherwise no message is returned from SAP through this RFC, indicating that the data available in SAP is correct. Each input field from ETL may generate zero (no errors) or several (one for each error) return records from SAP.

The following RFC functions are used for these validations:

Mapping | External object | Description
m_B3.4.1_APAC_ALL | Y_YXETLIN_VALIDATE_FSI | FSI Validation (call BW mapping service)
m_B3.6.4.1.1 | Y_YXETLIN_VALIDATE_DATA | Validate data coming from ETL

The details about the errors are recorded in the following error tables by the standard reusable mapplet mplt_ERROR_HANDLING:

1. T_GW_ERR_RECORD_DEFAULT
2. T_GW_ERR_RECORD_HDR
3. T_GW_ERR_RECORD_ASN
4. T_GW_ERR_MSG_PARAMETERS

Query:

Clarification is required for the following rule. (RULE_NO - 3173)

9.1.3 RULE_TEXT

These validations would be handled through an RFC process. The FSI value will be derived based on the GCoA and its possible GDS combinations.


Once the GDS values are derived, ETL needs to check against the C3REDA table and ensure that a valid FSI value can be derived.

1. What is the GCoA that is being referred to?
2. How do we get all possible GDS combinations?

1. GCOA is a Global Chart of Accounts. GDS_GCOA field in RFC parameters is not set by ETL (i.e. sent as null), but derived internally in SAP using Company code (COCD) and Local GL Account ID

2. ETL only sends GDS values from current record (derived by previous engines in processing sequence) to RFC. FSI derivations and C3REDA checks are performed by SAP, based on GDS values provided by ETL.

I.e. GDS values for specific records are derived in previous processing stages (the GDS derivation engines) and sent to the FSI RFC. The checks described in rule 3173 are performed internally in SAP (check against the C3REDA table; ensure that FSI values can be derived).

9.1.4 SAP R/3 & BW TLR Engine

The SAP R/3 and BW TLR engine covers the following components.

1. B9.2_APAC_ALL_UPLOAD
2. B9.4_APAC_ALL_UPLOAD
3. B9.6_APAC_FLA_UPLOAD
4. B9.7_APAC_NFI_UPLOAD
5. B9.8_APAC_ALL_UPLOAD
6. B9.9_APAC_ALL_UPLOAD

Query: Can you please confirm whether we are using all the above mappings in the

processing?

Now we are using only the B9.2 (for SAP BW) and B9.4 (for SAP R/3) upload mappings. The B9.6-B9.9 upload mappings are copies of B9.2; the only difference is the component name in the mappings' parameters. These unique component names were intended for better readability of the processing information. But as we always have TLR creation mappings (like B9.6, B9.7) prior to the upload mapping, and these are listed in the processing information, it is possible to use only the one generic B9.2 upload mapping.

Output files are created by the TLR engines. These target files are renamed with a unique name (by adding a timestamp) and uploaded to the SAP server using shell scripts:

gw_push_tlr_sl.sh – for SAP SL (R/3) files, B9.4 mappings
gw_push_tlr_bw.sh – for SAP BW files, all other B9.* mappings


These shell scripts are configured as post-session commands in the TLR creation engines. Files are uploaded to the SAP servers using the Unix sftp utility. The shell-script and sftp output is saved in the scriptlogs/gw_apac_1 directory.

Only data files are required for SAP R/3, but SAP BW requires uploading of paired control and data files. The control file contains the data file name, the number of records and some additional service information.

After uploading the TLR files to the SAP server, a notification about the new TLR files is sent to the SAP server by the RFC function that is called in the TLR UPLOAD components. Notifications for SAP BW are also sent to SAP R/3 and forwarded internally between the two SAP systems (as we do not have an Informatica license for connecting directly to SAP BW). The TLR submission status (SAP return code and text message) is returned by the RFC and saved in the T_GW_PRC_FILES table.

The process status is updated in the T_GW_PRC_FILES and T_GW_PRC_STAGE tables.
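
For reference, a monitoring query along the following lines can be used to check a TLR submission outcome (a sketch; the status and message columns are described in section 11.1.1, while the file-name and process-id column names are assumptions):

    -- Hypothetical check of TLR file submissions for one process.
    SELECT f.FILE_NAME,           -- column name assumed
           f.PROCESSING_STATUS,   -- Y = upload reported successful, N = upload failed
           f.SAP_ERR_MESSAGE      -- SAP return text, populated on errors
    FROM   T_GW_PRC_FILES f
    WHERE  f.PRC_PROCESS_ID = :process_id;  -- column name assumed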

Queries:

1. How will the errors encountered during the transfer of files through the SFTP command be notified? Are we raising a Tivoli alert if the SFTP transfer is not successful?

No, we are not raising alerts if sftp is not successful. The TLR submission status (including “file not found” errors) is returned from SAP by the TLR notification RFC. This status is saved in the T_GW_PRC_FILES table and, if the return code is not 0, an error message will be sent to Portal.

2. How do we know which RFC function is to be called at a specified point? Has it been specified anywhere?

The RFC notification mapping should be called immediately after the TLR creation mapping. Which RFC should be called for which file type is mentioned in our “SAP_R3_TLR.doc” TDD. The original specifications for processing of TLR files were provided by the SAP team; a copy is saved on the shared drive.

For SAP SL files:
\\s0028488\5021data$\12-IT\A-ETL\80-Feeds\GII_Gateway\Global\TDD\Connectivity\R3\FSR_R3_Integration_Req_2.0.xls

For SAP BW files:
\\s0028488\5021data$\12-IT\A-ETL\80-Feeds\GII_Gateway\APAC\Build\ETL 1.0\BW Specific\TDD\FSR TDD BW-ETL Integration ver 0.2.doc

10 REFERENCE DATA FROM SAP TO ETL ENVIRONMENT (loading data from SAP tables into Oracle)


The following are the configuration files in Informatica Server that need to be updated before

the execution of the process.

1. $PM_HOME/saprfc.ini (used in RFC workflows)

2. $PM_HOME/sideinfo (used in workflows with SAP table sources)

The configuration file (system32/saprfc.ini) has to be updated with the connection parameters provided by SAP on the local system (the saprfc.ini sections can simply be copied from the UNIX server). This local configuration is used at design time by the Designer's wizards.

10.1.1 Creation of Source from SAP database

1. SAP tables can be checked directly in SAP GUI, using se11 transaction (SAP

“transaction” is a subprogram, which provides some functionality like table browsing)

2. Open the Source Analyzer in Informatica PowerCenter Designer

3. Select Sources -> Import from SAP Metadata

4. The Connect String should be declared depending on the environment where it is being used. Example: dgl1 for the development environment

5. Fill in the Username, Password, Client and Language details provided by SAP

6. Declare the Filter Criterion based on the requirement of the data. Here ‘%’ can be

used as a wildcard character

7. Click the ‘Connect’ button

8. Select the required table name in the Table tab and click the ‘Add To Import List’ icon

10.1.2 Creation of Target from SAP database

1. Open the Warehouse Designer in Informatica PowerCenter Designer

2. Drag and drop the required SAP table from the SAP source tables that have already been created

3. Initially the NOT NULL option in the columns tab will be enabled for all the fields.

Apart from the primary key fields, uncheck all the NOT NULL fields.

4. Select Target -> Generate/Execute SQL -> Connect to the database

5. Select ‘Generate SQL file’

6. Select ‘Edit SQL file’ and check the file that has been generated

7. Select ‘Execute SQL file’, which in turn creates the table (see the sketch after this list for an example of the generated DDL)
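
For illustration only, the generated SQL file typically ends up looking like the sketch below; the table and column names here are hypothetical, the point being that only the primary-key columns keep NOT NULL after step 3:

    -- Hypothetical example of the DDL generated in steps 4-7.
    CREATE TABLE SAP_REF_TABLE
    (
      MANDT   VARCHAR2(3)   NOT NULL,  -- client, part of the primary key
      KEYFLD  VARCHAR2(10)  NOT NULL,  -- key field, part of the primary key
      ATTR1   VARCHAR2(40),            -- NOT NULL removed for non-key columns
      ATTR2   VARCHAR2(40)
    );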

10.1.3 Updating the Reference Data Mapping

1. The m_B1.6_SAP_RDM_REFERENCE_DATA mapping is used for loading the data

from SAP database to the ETL environment

2. Create the mapping with a direct linking of the source to target structures


3. Select Mappings -> Generate and Install SAP R/3 Code.
(The extraction of data is done by the ABAP program on the SAP side.
ABAP – Advanced Business Application Programming)

4. Fill in the Connect string, Username, Password, Client and Language details as given

by SAP. Click on the Connect tab, to get connected to SAP database.

5. Set the Program mode to Stream

We have two program modes.

File Mode: If we select this, the file will be created on the SAP server; it then has to be downloaded to our server and processed. This mode allows shorter on-line sessions with SAP, but it is not used currently, as it requires an additional sftp file transfer from the SAP server and more complicated workflows.

Stream Mode: We can directly connect to the source table in SAP and process the data. This mode requires longer on-line sessions with SAP, but the mappings and workflows are simpler.

6. Set YYFGLV4 as the development class (the $TMP class cannot be transported between SAP systems)

7. Select the B1.6 mapping in the ‘list of mappings’ tab and click the ‘Generate Files’ tab. The generated file can be used for manual checking of the ABAP code if the Designer reports errors during code installation to SAP.

8. Select Direct Installation

9. If the ABAP code was not installed for a new mapping, the session will fail and the log file will contain a message about the missing ABAP program

10. If the ABAP code was not regenerated after the mapping was changed, the session will fail and the log file will contain a message about a timestamp mismatch between the mapping and the ABAP code.

Note:

1. If any change is made at the mapping level, the SAP R/3 code needs to be regenerated every time before refreshing at the workflow level.

2. ABAP code can be directly installed only in DEV environment. Updated code should

be copied to UAT/PROD SAP systems via SAP transports (SAP functionality for

copying changes between DEV/UAT/PROD).

3. When copying mappings between repositories (dev/uat/prod), the “Copy program information” option should be checked. This will copy the ABAP program information together with the mapping and ensure that the mapping's and the ABAP program's timestamps are in sync.


4. The comments given in the Mappings -> Edit -> Comments tab should not be long (ABAP code lines are restricted to 72 characters). This is the reason why the CVS header variable is not saved in SAP source mappings.

5. Filter conditions can be set in the Application Source Qualifier -> Properties -> Program Flow -> Static Filter. This filter should use ABAP syntax and will be included in the generated ABAP code, so filtering will be performed on the SAP side.

6. Fields in the exported XML file are CRC-protected; only “businessname” and “description” can be changed directly in the XML file. This is the reason why the CVS variables in SAP source mappings are moved from the metadata extensions to the mapping's description.

Deviation from the above steps can lead to failure of the tasks.

10.1.4 RFC – PRACTICAL SESSION

The following are the steps to generate a mapping that calls the RFC mapplet:

1. SAP Function Modules (i.e. RFCs) can be checked directly in the SAP GUI, using the se37 transaction (an SAP “transaction” is a subprogram which provides some functionality, like RFC execution). This subprogram allows checking of RFC parameters and direct execution in the SAP environment, which is useful for debugging purposes.

2. Open Informatica PowerCenter Designer -> Mapping Designer

3. Select Mappings -> Generate SAP RFC/BAPI Mapping

4. Provide all the connection details (Connect String, User name, Password, Client,

Language) for SAP connectivity

5. Give the RFC function name in the Filter Criterion, preceded by a ‘%’ as a wildcard

character

6. Click “Connect” button

7. Select the required RFC from ‘Retrieved Functions’ and move it to ‘Selected Functions’

8. Go to the ‘Next’ option, and set $$SAPCONNECTION to the connection parameters configured in the Workflow Manager (fixed to FSR_SAP_R3 for our mappings)

9. Once the mapping is created, the input parameters will be passed to the RFC mapplet as per the SAP requirements, and the output parameters will be used for further processing if required.

10. The created mapping will have virtual sources and targets for the RFC input/output. These virtual objects should be replaced with real sources/targets

11. Due to an Informatica bug, repeated mapping generation can produce an invalid mapping with unconnected mapplets. To avoid this bug, previously generated RFC mapplets


and reusable transformations should be deleted prior to repeated mapping

generation.

Note:

A Single Stream RFC mapping is created when the RFC has only scalar input/output parameters (i.e. only one row of data is expected for input/output).

A Multiple Stream RFC mapping is created when the RFC has table parameters (i.e. requires more than one row for input or output). The same table parameter can be used both for input and output. Data is converted to Informatica's internal format in the preparation pipeline, and the actual RFC call is performed (using the previously prepared data) in the functional call pipeline. Intermediate files are used for passing data between these two pipelines. An indirect flat file source (which contains the list of actual data files) is used for feeding multiple prepared flat files to the functional call pipeline.

The following points need to be taken care of when we use a Multiple Stream RFC.

1. Target Load Plan: the RfcPreparedData pipeline should be executed before the actual RFC mapplet is processed. Otherwise, data prepared in a previous run may be used for the new RFC call.

2. Parallel Processing: if this session is used in different workflows and the filenames are the same, do not run the workflows in parallel. Otherwise, data prepared by another parallel run may be used for this RFC call.

The following are the steps to be taken care of at the workflow level:

1. Open the Workflow Designer and configure the connection parameters.

Connections -> Application Connection Browser

Please refer below for the existing select type.


Query: For the session configured on SAP reference data, go to the Mapping tab; for the source tables -> Readers -> SAP Streaming Reader. What is the difference between SAP Streaming Reader and SAP Staging Reader?

The Streaming reader is used for Stream mode ABAP programs (data is loaded from SAP in an on-line session). The Staging reader is used for File mode programs (i.e. the mode where the file is created on the SAP server and then transferred to Informatica via ftp).

Note: The default value of the fields in the SAP RFC function should not be a pipe character (‘|’) or hash character (‘#’). The “Typing” option for RFC import/export parameters on the SAP side should be set to “TYPE” (“LIKE” parameters can be recognized incorrectly by the Informatica wizard; for example, an integer can be recognized as char(1) and incorrectly converted at run-time).

The “Typing” option for RFC table parameters on the SAP side should be set to “LIKE”.


On failure of any of the tasks, an email will be sent to the [email protected],[email protected] groups.

11 Logging Mechanism

During the execution of the workflow, the status details are logged at 2 levels.

1. Database level
2. Script level

11.1.1 Database Level Logging

The database logging happens in the following tables.

1. T_GW_PRC_FILES
2. T_GW_PRC_PROCESS
3. T_GW_PRC_STAGE

The T_GW_PRC_PROCESS table contains the details of the workflow status specified in

PROCESSING_STATUS column.

S – Started

C – Completed

E – Soft Error

H – Hard Error

The number of soft and hard errors is recorded in the ERRONEOUS_REC_COUNT column.


The T_GW_PRC_FILES table contains the status of the uploaded files in the PROCESSING_STATUS column.

Y – Uploading of files done successfully

N – Uploading operation failed

The reason behind the failure of a file upload will be logged in the SAP_ERR_MESSAGE column.

Query: We could see a few rows with SAP_ERR_MESSAGE = ‘Data file not found’ even when the PROCESSING_STATUS column has the value ‘Y’. Please clarify.

For PROCESSING_STATUS we check only the error code returned from SAP. If this code is 0, PROCESSING_STATUS = Y. An error message that does not match the error code is possibly a bug on the SAP side. The latest example on UAT was in April; if it is repeated, clarification should be requested from the SAP team.

The T_GW_PRC_STAGE table contains the status of every session in the process in the

PROCESSING_STATUS column.

S – Started

C – Completed
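
A couple of ad-hoc queries along these lines are handy while monitoring a run (sketches; the process-id filter column name is an assumption):

    -- Workflow-level status and error count for one process.
    SELECT PROCESSING_STATUS,       -- S / C / E / H as described above
           ERRONEOUS_REC_COUNT
    FROM   T_GW_PRC_PROCESS
    WHERE  PRC_PROCESS_ID = :process_id;  -- column name assumed

    -- Session-level progress for the same process.
    SELECT PROCESSING_STATUS        -- S = started, C = completed
    FROM   T_GW_PRC_STAGE
    WHERE  PRC_PROCESS_ID = :process_id;  -- column name assumed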

11.1.2 Script Level Logging

The script level logging is done based on the PROCESS_ID, which is prefixed to the log file name.

This log file contains the following details

1. Server details
2. Archive directory
3. Current working directory (where the script log is currently located during processing)
4. Parameter filename along with its path
5. Workflow name
6. Command for start of workflow
7. Execution status details


Queries:

1. What are the possible values that the execution status might have in the script log?

The execution status mentioned in the screenshot above is returned from the pmcmd command. The possible return codes are listed in the Informatica help (Workflow Administration Guide, Chapter 23: Using pmcmd > Using the Command Line Mode > pmcmd Return Codes):

Code | Description
0 | For all commands, a return value of zero indicates that the command ran successfully. You can issue these commands in the wait or nowait mode: starttask, startworkflow, resumeworklet, resumeworkflow, aborttask, and abortworkflow. If you issue a command in the wait mode, a return value of zero indicates the command ran successfully. If you issue a command in the nowait mode, a return value of zero indicates that the request was successfully transmitted to the PowerCenter Server, and it acknowledged the request.
1 | The PowerCenter Server is down, or pmcmd cannot connect to the PowerCenter Server. The TCP/IP host name or port number or a network problem occurred.
2 | The specified task name, workflow name, or folder name does not exist.
3 | An error occurred in starting or running the workflow or task.
4 | Usage error. You passed the wrong parameters to pmcmd.
5 | An internal pmcmd error occurred. Contact Informatica Technical Support.
6 | An error occurred while stopping the PowerCenter Server. Contact Informatica Technical Support.
7 | You used an invalid username or password.
8 | You do not have the appropriate permissions or privileges to perform this task.
9 | The connection to the PowerCenter Server timed out while sending the request.
12 | The PowerCenter Server cannot start recovery because the session or workflow is scheduled, suspending, waiting for an event, waiting, initializing, aborting, stopping, disabled, or running.
13 | The username environment variable is not defined.
14 | The password environment variable is not defined.
15 | The username environment variable is missing.
16 | The password environment variable is missing.
17 | Parameter file does not exist.
18 | The PowerCenter Server found the parameter file, but it did not have the initial values for the session parameters, such as $input or $output.
19 | The PowerCenter Server cannot start the session in recovery mode because the workflow is configured to run continuously.
20 | A repository error has occurred. Please make sure that the Repository Server and the database are running and the number of connections to the database is not exceeded.
21 | PowerCenter Server is shutting down and it is not accepting new requests.
22 | The PowerCenter Server cannot find a unique instance of the workflow/session you specified. Enter the command again with the folder name and workflow name.
23 | There is no data available for your request.
24 | Out of memory.
25 | Command is cancelled.

2. Why do some log files have missing PROCESS_ID?

These are log files from gw_start_single_task.sh. The process id is taken from the dynamic parameter file, which is provided in the command-line parameters. A missing process id means that there were errors in the session that prepares the dynamic parameter file, or that the gw_start_single_task.sh script was executed manually without a proper parameter file.


12 Technical Error Handling and Tivoli Alerting

12.1.1 Technical Error Handling

The possible errors that can occur during the processing can be due to

1. Informatica Errors
2. Database errors
3. Connectivity errors

The occurrence of the error is logged in the script logs that can be found in

$PM_HOME/scriptlogs/<feed_specific_dir_name>

The workflow or session logs can also be viewed through the Workflow Monitor.

Note that if processing was restarted, previous log-files will be overwritten on each run.

When an error occurs, the normal flow is discontinued and the Tivoli alert notification path is taken.

12.1.2 Tivoli Alerting

Failure of any task in the workflow will lead to execution of the Tivoli worklet path.

The DBF_WL_TIV_WFLOWERROR worklet is used to raise Tivoli alerts. The ‘Always on’

(parent) workflows are configured in such a way that all the tasks in the flow are connected to

this Tivoli worklet during a failure condition. Hence, whenever a session fails, it stops continuing in the regular flow and takes the Tivoli path.

This DBF_WL_TIV_WFLOWERROR worklet consists of the following tasks.

1. DBF_MAP_TIV_WFLOWERROR (Session)
2. DBF_CMD_LOG (Command Task)

The session DBF_MAP_TIV_WFLOWERROR uses a parameter file dbf_wflow.prm, where

an entry needs to be made for the workflow for which an alert needs to be raised in case of

failure.

Location: $PMRootDir\PrmFiles\dbf_wflow.prm

It is also possible to use ‘Always on’ workflows’ parameter files for setting $$WFLOW

variable (instead of dbf_wflow.prm).

Location: $PMRootDir\PrmFiles\gw_apac_1\Start_*.prm


Sample Entry:

[<Folder_Name>.WF:<Workflow_Name>.WT:DBF_WL_TIV_WFLOWERROR.ST:DBF_SESS_TIV_WFLOWERROR]

$$WFLOW='<Workflow_Name>'

Also, metadata describing the workflow details needs to be entered in the following tables.

1. DBF_T_WORKFLOW
2. DBF_T_MODULE_WORKFLOW

The DBFEEDS_ID in DBF_T_WORKFLOW needs to be the same as the WORKFLOW_ID

present in DBF_T_MODULE_WORKFLOW table, for a particular workflow.
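
A quick consistency check for these metadata entries could look like the following sketch (only the two ID columns named above are taken from this document):

    -- Workflows whose DBFEEDS_ID has no matching WORKFLOW_ID in DBF_T_MODULE_WORKFLOW.
    SELECT w.DBFEEDS_ID
    FROM   DBF_T_WORKFLOW w
    WHERE  NOT EXISTS (SELECT 1
                       FROM   DBF_T_MODULE_WORKFLOW mw
                       WHERE  mw.WORKFLOW_ID = w.DBFEEDS_ID);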

For inserting metadata into the Tivoli configuration tables, it is also possible to use a stored procedure.

SQL used for GW_APAC_1 workflows:

DECLARE

BEGIN

FEED_OWNER.DBF_PR_INSERT_METADATA('GW_APAC_1', 'FSR ETL-Portal', 'wf_Start_WF_PORTAL_FLA');

FEED_OWNER.DBF_PR_INSERT_METADATA('GW_APAC_1', 'FSR ETL-Portal', 'wf_Start_WF_PORTAL_MAN');

FEED_OWNER.DBF_PR_INSERT_METADATA('GW_APAC_1', 'FSR ETL-Portal', 'wf_Start_WF_PORTAL_NFI');

FEED_OWNER.DBF_PR_INSERT_METADATA('GW_APAC_1', 'FSR ETL-1', 'wf_Start_WF_ACS');

FEED_OWNER.DBF_PR_INSERT_METADATA('GW_APAC_1', 'FSR ETL-1', 'wf_Start_WF_ACS_DDMA');

FEED_OWNER.DBF_PR_INSERT_METADATA('GW_APAC_1', 'FSR ETL-1', 'wf_Start_WF_ACS_DDMM');

FEED_OWNER.DBF_PR_INSERT_METADATA('GW_APAC_1', 'FSR ETL-1', 'wf_Start_WF_DW');

FEED_OWNER.DBF_PR_INSERT_METADATA('GW_APAC_1', 'FSR ETL-1', 'wf_Start_WF_EP');

FEED_OWNER.DBF_PR_INSERT_METADATA('GW_APAC_1', 'FSR ETL-1', 'wf_Start_WF_IDMS');

FEED_OWNER.DBF_PR_INSERT_METADATA('GW_APAC_1', 'FSR ETL-1', 'wf_Start_WF_RMS');

FEED_OWNER.DBF_PR_INSERT_METADATA('GW_APAC_1', 'FSR ETL-1', 'wf_Start_WF_SUMMIT');

FEED_OWNER.DBF_PR_INSERT_METADATA('GW_APAC_1', 'FSR ETL-2', 'wf_Start_WF_ETL_2');

FEED_OWNER.DBF_PR_INSERT_METADATA('GW_APAC_1', 'FSR ETL-Portal', 'wf_PORTAL_CM_SUBSCRIBER');

END;

Sample Tivoli Alert Message:

UAT :[<Sysdate>] WORKFLOW_ERROR: Execution of <Workflow_name> Failed [MODULE:

<Module_name>]

This alert message contains the name of the failed parent workflow. The session-level details for the failed workflow can be checked through the script log file or in the Workflow Monitor. In addition to this Tivoli alert, a failure notification will be sent through mail by the em_FailureNotification reusable email task to the intended recipients. The list of recipients is configured in the em_FailureNotification parameters.


em_FailureNotification will be triggered twice: at the session level (as the ‘On Failure E-Mail’ in the failed session) and at the parent workflow level (as the next task after the Tivoli worklet).

The child workflows that are started through gw_start_seq_tasks.sh (ETL1/ETL2 processing) or gw_start_single_task.sh (PORTAL processing) will return a non-zero value in case of errors. The command task will return the non-zero value and this will in turn raise a Tivoli alert with the parent workflow name.
