
MaxL Essbase automation patterns: moving data from one cube to another

A very common task for Essbase automation is to move data from one cube to another. There are a number of reasons you may want or need to do this. One, you may have a cube that has detailed data and another cube with higher-level data, and you want to move the sums or other calculations from one to the other. You may accept budget inputs in one cube but need to push them over to another cube. You may need to move data from a “current year” cube to a “prior year” cube (a data export or cube copy may be more appropriate, but that’s another topic). In any case, there are many reasons.

For the purposes of our discussion, the Source cube is the cube with the data already in it, and the Target cube is the cube that is to be loaded with data from the source cube. There is a simple automation strategy at the heart of all these tasks:

1. Calculate the source cube (if needed)

2. Run a Report script on the source cube, outputting to a file

3. Load the output from the report script to the target cube with a load rule

4. Calculate the target cube

This can be done by hand, of course (through EAS), or you can do what the rest of us lazy cube monkeys do and automate it. First of all, let’s take a look at a hypothetical setup:

We will have an application/database called Source.Foo, which represents our source cube. It will have dimensions and members as follows:

Location: North, East, South, West

Time: January, February, …, November, December

Measures: Sales, LaborHours, LaborWages

As you can see, this is a very simple outline. For the sake of simplicity I have not included any rollups, like having a Q1/1st Quarter parent for January, February, and March. For our purposes, the target cube, Target.Bar, has an outline as follows:

Scenario: Actual, Budget, Forecast

Time: January, February, …, November, December

Measures: Sales, LaborHours, LaborWages

These outlines are similar but different. This cube has a Scenario dimension with Actual, Budget, and Forecast (whereas in the source cube, since it is for budgeting only, everything is assumed to be Budget). Also note that Target.Bar does not have a Location dimension; instead, this cube only concerns itself with totals for all regions. Looking back at our original thoughts on automation, in order to move the data from Source.Foo to Target.Bar, we need to calculate Source.Foo (to roll up all of the data for the Locations), run a report script that outputs the data the way Target.Bar needs it, use a load rule on Target.Bar to load the data, and then calculate Target.Bar. Of course, business needs will affect the exact implementation of this operation, such as the timing, the calculations to use, and other complexities that may arise. You may actually have two cubes that don’t have a lot in common (dimensionally speaking), in which case your load rule might need to really jump through some hoops.

We’ll keep this example really simple, though. We’ll also assume that the automation is being run from a Windows server, so we have a batch file to kick things off:

cd /d %~dp0
essmsh ExportAndLoadBudgetData.msh

I use the cd /d %~dp0 on some of my systems as a shortcut to switch to the current directory, since the particular automation tool installed does not set the working directory to the directory the batch file lives in. Then we invoke the MaxL shell (essmsh, which is in the PATH) and run ExportAndLoadBudgetData.msh. I enjoy giving my automation files unnecessarily long filenames. It makes me feel smarter.

As you may have seen from an earlier post, I like to modularize my MaxL scripts to hide/centralize configuration settings, but again, for the sake of simplicity, this example will forgo that. Here is what ExportAndLoadBudgetData.msh could look like:

/* Copies data from the Budget cube (Source.Foo) to the Budget Scenario
   of Target.Bar */

/* your very standard login sequence here */
login AdminUser identified by AdminPw on EssbaseServer;

/* at this point you may want to turn spooling on (omitted here) */

/* disable connections to the application -- this is optional */
alter application Source disable connects;

/* PrepExp is a calc script that lives in Source.Foo and, for the purposes
   of this example, all it does is make sure that the aggregations that are
   to be exported in the following report script are ready. This may not be
   necessary, and it may be as simple as a CALC ALL; */
execute calculation Source.Foo.PrepExp;

/* Budget is the name of the report script that runs on Source.Foo and outputs a
   text file that is to be read by Target.Bar's LoadBud rules file */
export database Source.Foo
    using report_file 'Budget'
    to data_file 'foo.txt';

/* enable connections, if they were disabled above */
alter application Source enable connects;

/* again, technically this is optional but you'll probably want it */
alter application Target disable connects;

/* this may not be necessary, but the purpose of this script is to clear out
   the budget data, under the assumption that we are completely reloading the
   data that is contained in the report script output */
execute calculation Target.Bar.ClearBud;

/* now we import the data from the foo.txt file created earlier. Errors
   (rejected records) will be written to errors.txt */
import database Target.Bar data
    from data_file 'foo.txt'
    using rules_file 'LoadBud'
    on error write to 'errors.txt';

/* calculate the new data (this may not be necessary depending on what the
   input format is, but in this example it is) */
execute calculation Target.Bar.CalcAll;

/* enable connections if disabled earlier */
alter application Target enable connects;

/* boilerplate cleanup. Turn off spooling if it was turned on earlier */
logoff;
exit;
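As an aside, the modularization I mentioned earlier usually amounts to pulling the site-specific pieces (server name, credentials, and the like) into their own script and pulling it in with the MaxL Shell’s nested msh statement. As a minimal sketch, assuming a hypothetical conf.msh that holds the login, the login line above would be replaced with:

/* conf.msh (hypothetical) contains the login statement and any other
   site-specific settings, so each automation script doesn't repeat them */
msh "conf.msh";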

At this point, if we don’t have them already, we would need to go design the aggregation calc script for Source.Foo (PrepExp.csc), the report script for Source.Foo (Budget.rep), the clearing calc script on Target.Bar (ClearBud.csc), the load rule on Target.Bar (LoadBud.rul), and the final rollup calc script (CalcAll.csc). Some of these may be omitted if they are not necessary for the particular process (you may opt to use the default calc script, may not need some of the aggregations, and so on).
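As a rough sketch of the clearing script (the exact approach is an assumption; a FIX on the Budget member with a CLEARBLOCK is one common way to do it), ClearBud.csc could be as simple as:

/* ClearBud.csc -- clear existing blocks in the Budget scenario so the
   reload starts from a clean slate */
FIX ("Budget")
    CLEARBLOCK ALL;
ENDFIX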

For our purposes we will just say that the PrepExp and CalcAll calc scripts are just a CALC ALL or the default calc. You may want a tighter calc script; that is, you may want to design the calc script to run faster by helping Essbase understand what it needs to calculate and in what order.
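For example, since the export only needs the Location totals, a tighter PrepExp.csc might (as a sketch, not a prescription) aggregate just that dimension instead of calculating everything:

/* PrepExp.csc -- only the Location rollups are needed for the export,
   so aggregate just that dimension rather than running a full CALC ALL */
CALC DIM ("Location");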

What does the report script look like? We just need something to take the data in the cube and dump it to a raw text file.
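As a minimal sketch of one way Budget.rep could look (the particular formatting commands are typical choices, not requirements), this produces tab-delimited rows of Time and Measures at the top-level Location member, which gives us the total for all regions; mapping the data into the Budget scenario would be handled by the LoadBud rules file:

{TABDELIMIT}
{ROWREPEAT}
{SUPHEADING}
{SUPFEED}
{SUPBRACKETS}
{SUPCOMMAS}
{SUPMISSINGROWS}
<ROW ("Time", "Measures")
"Location"
<DIMBOTTOM "Time"
<DIMBOTTOM "Measures"
!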