Saturday, 31 December 2011

Planning - loading text member data update

A while back there was a post on the planning forum about loading text member data to a planning application using the Outline load utility. A reply on the post said this was not possible in 11.1.2.x; an SR was raised and it was confirmed as a bug. I knew it was possible in previous versions, so I meant to test it out but never got round to it.

Then recently I received an email asking whether it was possible to load text data to planning 11.1.2.1 using ODI, as once again an SR had been raised and the issue had been confirmed as a bug. I knew I had definitely loaded text data when working on a previous project, but that was with planning 11.1.1.3.

I did write a blog on an alternative method of loading text data, because at that time it was not possible directly with the ODI planning adaptor; I think it wasn't possible in the first releases of version 11, but it certainly was possible in 11.1.1.3.

I thought I had best test out both the Outline load utility and ODI to see whether it is possible to load text member data to an 11.1.2.1 planning application.

I first created an account member called TextMember1 with a data type of text.


I constructed a simple form so that I could view the results of loading text data to the selected POV.


I set up the Data Load Settings in planning to match the form, with the Data Load Dimension as Period and the Driver Dimension as Account with the member TextMember1.


I made sure the Data Load Settings had definitely been saved correctly by running a SQL query against the planning application's relational tables.


I created a CSV file in the format required for the Outline load utility and populated it with the same POV details as the planning web form.
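As a rough guide the file would look something along the lines of the example below, with the data load dimension as the first column, a column for the driver member, then the Point-of-View and the Data Load Cube Name; the POV members and plan type here are only placeholders from my test application, so substitute your own.

Period,TextMember1,Point-of-View,Data Load Cube Name
Jan,"This is my test text value","Local,Actual,FY12,Working,110",Plan1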


OutlineLoad /A:PLANSAMP /U:admin /M /I:textload.csv /D:Period /L:dataload.log /X:dataload.exc

The Outline load utility was run from the command line with the load dimension set to Period to match the Data Load Settings and the CSV file.
 

The output log from the utility showed that 1 record was successfully processed and loaded.


The planning web form was run again and the text data was displayed successfully.

I know that you can also use the Outline load utility to load data without setting the Data Load Settings in planning, by specifying the driver information directly in the source file, so I thought I would give that a try as well.


I created and populated a new file using the Driver Member and Value column headings.
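Again just as an illustration, the file would be along these lines, with the driver member and the text value held in the file itself and the data load dimension member moved into the Point-of-View; as before, the POV members and plan type are only placeholders.

Value,Driver Member,Point-of-View,Data Load Cube Name
"This is my test text value",TextMember1,"Jan,Local,Actual,FY12,Working,110",Plan1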


OutlineLoad /A:PLANSAMP /U:admin /M /I:textload2.csv /TR /L:dataload.log /X:dataload.exc

I successfully ran the utility again, but this time removed the load dimension switch /D and used /TR, which means the driver information is specified in the file.
 

The form displayed the newly loaded text data.

So no problems loading text data using the Outline load utility, time to move on to ODI. The email I received specified ODI 10.1.3.6.x with planning 11.1.2.1, so that is what my first test would be with.
 

I reversed the planning application and checked that the columns “Data Load Cube Name”, “TextMember1” and “Point-of-View” had been added to the Period Datastore.


I created a file Datastore against the CSV file I used with the Outline load utility.


I created a new interface with the CSV file as the source and the Period dimension as the target; the target columns were then mapped to the source.

The flow was set to use “LKM File to SQL”, with the memory engine as the staging area, and then “IKM SQL to Hyperion Planning”.


The interface executed successfully and the web form was reloaded to show the correct text data.

So no problems using ODI 10g, how about ODI 11.1.1.5?


I replicated everything I had created in ODI 10g but this time used Oracle as the staging area, due to a bug when using the memory engine with planning.
 

The interface ran with no problems and once again the correct text data was shown in the planning web form.

All testing was successful so at least I can put my mind at rest on this subject.

Sunday, 11 December 2011

Loading to EPMA planning applications using interface tables – Part 6

In the last part of the series I went through the steps of using the command-line driven EPMA batch client to import from the interface tables and then deploy to planning.

Today I am going to look at using ODI to control the running of the batch client and add in some error handling functionality. Unfortunately there are no adaptors or APIs available for use between ODI and EPMA, so I am going to use the OS command tool to run the EPMA batch client.

If you look back to part 4 of this series I had created a number of interfaces that loaded the planning metadata into the EPMA interface tables.


I ended up with the above package which I will expand on to bring in the batch client functionality.

Usually when designing packages or using tools like the OS command I would add in control using variables rather than hardcoding values, but for today's exercise I will keep it simple, which should make it easier to read.

I must also point out that my methods are not the only way of implementing this, and you may think to yourself that there are better ways of achieving the end result; that is the beauty of using ODI, as there is not always a right and a wrong method. If you do feel you have come up with a better solution then feel free to get in touch to discuss it.

If you are going to use the OS command with the EPMA batch client then the ODI agent will require access to the client. In the last part I gave advice on how to install the client on a supported OS, so you have the option of either installing the client on a machine with an agent or installing an agent on a machine hosting the client.


If you drag the OS command tool onto the diagram in a package you will see there are a number of parameters to set; the most important is “Command to execute”, which is basically what you want ODI to execute at the command line.

In the last part I executed the following from the command line to import metadata from the interface tables into EPMA.

E:\Oracle\Middleware\EPMSystem11R1\products\Foundation\BPMA\EPMABatchClient\epma-batch-client.bat -C"F:\Scripts\Import.txt" -R"F:\Scripts\Output.log" -Uadmin -Ppassword

I am going to use the same import script and log so it is just a matter of transferring the command line to the parameter value.



The working directory has been set to the location of the batch client.

The command to execute calls the client and passes in parameters for the script to use, the log to output to and the account details.
Synchronous is set to Yes so that ODI will wait for a return code before continuing.

I have left the other parameters such as Output file and Error file blank because the batch client will write all the information to the file Output.log.


The EPMA batch client script is the same as in the last part of the series; it logs in and imports from the interface tables into EPMA based on the profile “SAMPLE_APP”.

If you wanted to get clever you could actually create the script on the fly but that is not the objective for today.


As I mentioned earlier, I would normally use variables like in the example above; these can be set at runtime and make the integrations much more versatile.
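Just to illustrate, a variable-driven version of the “Command to execute” would be something like the line below; the variable names are made up for this example, but the #PROJECT_CODE.VARIABLE syntax is how ODI substitutes the values at runtime.

#DEMO.EPMA_CLIENT_PATH\epma-batch-client.bat -C"#DEMO.IMPORT_SCRIPT" -R"#DEMO.OUTPUT_LOG" -U#DEMO.EPMA_USER -P#DEMO.EPMA_PWD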

So there we go, end of blog? Not so quick: what happens if there are errors in the output file generated by the batch client?

There are a number of different factors which can generate errors or failures.


A fatal error will cause the package to fail; in the example above I changed the working directory to an invalid location.

Fatal errors should be rare once you have defined all the correct parameters and can be acted upon easily, e.g. send an email if the OS command fails or simply check the Operator.

If, say, an incorrect password or profile is used then the step will not fail.


In the example above I changed the import profile in the script to an invalid one; as you can see the step is successful, but viewing the log shows that the import has failed.

Now there are lots of different ways of dealing with this error, e.g. creating a batch script with error handling, reading the return code, parsing the file….

I am going to use a different method to the ones mentioned; whether this is the right option or the best option I don't think I can answer, and I am sure somebody will come up with an annoying best practice statement to try and define what is correct.

If you take a look at the log file you will see that it is in a fixed format, meaning that the start of each column is known by its position.

Now the idea is to load this log file into a table and then query it to see if there was a failure, which is indicated by an ERROR line in the log.

First of all I created a new file Data Store and pointed it at the Output.log file.


The File Format was set to Fixed and no header line.


As the columns are fixed I could set the start position of each column; I set the message column to a length of 200, which should be plenty.


Viewing the Data Store shows that the columns are being separated correctly.

The next step was to create a simple interface that loads the log information from the file into a table.


The interface uses “LKM File to SQL” and “IKM SQL Control Append” with the option to truncate the table each time it is run.

Now that the log information is in the table, I created a refreshing variable to check whether there was a failure.


When executed, the number of errors will be stored in the variable; if the number is greater than zero then the script failed.
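The refreshing query behind the variable is nothing more than a count of the ERROR lines; the table and column names below (EPMA_BATCH_LOG and MSGTYPE) are just the example names I am using for the table the log is loaded into, so adjust them to your own.

SELECT COUNT(*)
FROM EPMA_BATCH_LOG
WHERE MSGTYPE = 'ERROR'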


These are the steps in the package so far. The first step uses the File Delete tool and deletes the output log; this is because the EPMA batch client appends to the log and only the information from the last run is required.

The next step uses the OS command tool to call the EPMA batch client with the defined script, the output log is loaded into the table, and the refreshing variable stores the number of errors.

The variable holding the number of errors can now be evaluated to see if the value is greater than zero.


If the value is greater than zero the evaluation step will complete with a success, otherwise it will produce a failure, so the outcome defines how to proceed.

I have decided that if there has been a failure then I want to store the failure message and then raise an error with the stored message which will cause the package to fail.




Another refreshing variable was created which queries the table holding the log information and stores the error message.
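Using the same example table names as before, and assuming the table sits in an Oracle schema, the refreshing query just picks up the first ERROR message from the log table.

SELECT MESSAGE
FROM EPMA_BATCH_LOG
WHERE MSGTYPE = 'ERROR'
AND ROWNUM = 1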


A new procedure was created with a step using Jython as the technology.
An error is raised with the message which has just been stored using the refreshing variable.
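The Jython step itself only needs a single line; V_ERROR_MESSAGE is the example name I am using for the refreshing variable holding the message, and ODI substitutes its value before the step is executed.

raise Exception("EPMA batch client error: #V_ERROR_MESSAGE")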

These steps are then added to the package; to add extra functionality, an additional step could easily be inserted to send out a failure email and attach the output log.


If there are any fatal errors in the log then the package will fail with a message giving the reason.

You could move on to deploying the EPMA application to planning using the same type of logic that I have used so far in the package, but there is a possibility that there were no fatal errors when loading from the interface tables yet normal errors were still generated; if there were errors you might not want to deploy the application just yet.


In this example I changed a member's property so that it was set to a non-existent Smart List and ran the package again; the package was successful but there is an error in the log.

I don't want the package to fail and I don't want the application to be deployed; instead an email should be sent out reporting the error.


Time for another refreshing variable, which will look for the line with the import details and store the number of errors; I am only storing the first digit as I am only interested in whether it is greater than zero.
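My query for this variable was along the lines of the example below; the table and column names are the same example ones as before, and the text it searches for is an assumption about how the batch client words the import summary line, so check it against your own output log.

SELECT SUBSTR(MESSAGE, INSTR(MESSAGE, 'error') - 2, 1)
FROM EPMA_BATCH_LOG
WHERE UPPER(MESSAGE) LIKE '%IMPORT%ERROR%'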

As before, the variable can be evaluated and steps added depending on whether the result is true or false.


So now the log will be checked to see if there are any import errors; if there are, an email is sent with the log attached for information and the integration ends; if there are no import errors the EPMA application is deployed to planning using an OS command.

I am not going to cover the deployment as it uses the same logic as the import and in the last part of the series I went through how to script it.




I added in the metadata load interfaces and some additional error logging around the deployment from EPMA to planning, which means in the end you have a solution that will take metadata from a source all the way through to deploying to planning/essbase with relatively little coding, and it can then be easily scheduled or executed over the web using the console.

Obviously you can make it a lot more sophisticated with added functionality and make more use of variables but hopefully you get the idea.

If the package does get complex and you are using ODI 11.1.1.5+ then you could look into whether using load plans would add any benefit.

I am going to leave it there for today; this was going to be the last part, but I have had a few queries around using the interface tables which I may cover if I find the time.

Sunday, 27 November 2011

Loading to EPMA planning applications using interface tables – Part 5

In the last part of the series I had pretty much achieved what I set out to do, which was to load planning metadata from a source into the interface tables and then push it into EPMA. The last step of moving the metadata from the interface tables into EPMA was a manual process, and in an ideal world you would want this to be automated.

Today I am going to look at automating the process using the EPMA batch client and then move on to using ODI to take over control.

To be honest I am not a big fan of the EPMA batch client; when you compare it to, say, the Outline load utility it seems so far behind.

I feel that Oracle really needs to develop an API or improve the web services to control EPMA functionality.

When Oracle first released EPMA there was documentation available on the web services but this seems to have disappeared even though the product is still using web services behind the scenes.
Enough of my moaning and let’s get on with the batch client.

The client is usually available on the machine hosting EPMA, e.g. on 11.1.2.1 <MIDDLEWARE_HOME>\EPMSystem11R1\products\Foundation\BPMA\EPMABatchClient\


It is possible to install the client on a machine of your choice but you will need a number of installation files.
  • Oracle Hyperion Enterprise Performance Management System Installer, Fusion Edition Release 11.1.2.1.0
  • Oracle Hyperion Foundation Services Release 11.1.2.1.0 Part 1 of 7
  • Oracle Hyperion Foundation Services Release 11.1.2.1.0 Part 5 of 7
  • Oracle Hyperion Enterprise Performance Management Architect, Fusion Edition Release 11.1.2.1.0


The documentation for the client is available here

The client is launched by running epma-batch-client, either using the interactive command line or a script.

 

Using the interactive command line is a good way to make sure you are using all the correct commands and have set all the parameters before putting them into a script.

My intention is to use a script, as the interactive command line is back to the manual world again.

There are a number of command line arguments that can be used with the client and if you call it with -H it will provide help on the available arguments.


The arguments I am interested in using are

-C is the filename of a script to be used e.g. -C"F:\Scripts\Import.txt"

-R is the filename to output the results of the script to e.g. -R"F:\Scripts\Import.log"

-U is the username to log into EPMA e.g. -Uadmin

-P is the password to log into EPMA e.g. -Ppassword

That is straightforward enough, so moving on to the script: there are two important variables that must be set before using any client commands.

set bpmaserverurl=http://<servername>/hyperion-bpma-server;
set workspaceurl=http://<servername>:<port>/workspace;


Basically these system variables need to be set to tell the client the location of workspace and EPMA; if you don't set them the script will fail with a message stating they are missing.
It is important to separate each command with a semicolon.

The next step in the script is to log into EPMA, and as I am passing the username and password into the client I only need to use the following:

Login;

Once logged in I want to import the metadata from the interface tables and push it into EPMA, which is possible using the “Execute Import” command.

The format of the command is

Execute Import
Parameters(importtype, profilename, filename, waitforcompletion) Values('', '', '', '');


importtype – set the type of import and the available values are
  • FlatFile
  • InterfaceTables

profilename—The name of an existing import profile

filename – If using Flat files as the import then it is the name of the import file.

waitforcompletion - If set to true, the Batch Client waits for the job to finish. If set to false, the Batch Client submits the job and continues.

Now this is where I was a bit confused, because all through this series I have been using a Load ID with the interface tables; even when you import manually through the EPMA web interface it requires a Load ID, yet it is nowhere to be seen in the command parameters. Unless I am missing something obvious, how is it possible to set which Load ID to use, and what does the client use as a default?

As I only have one Load ID at present I will just carry on and populate the parameters with known values; the importtype will be InterfaceTables and the profilename I have been using in this series is SAMPLE_APP.

Execute Import Parameters(importtype, profilename, filename, waitforcompletion)Values('InterfaceTables', 'SAMPLE_APP', '', 'true');


So the script to import from interface tables to EPMA is
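Built up from the commands covered above, the contents of Import.txt are:

set bpmaserverurl=http://<servername>/hyperion-bpma-server;
set workspaceurl=http://<servername>:<port>/workspace;
Login;
Execute Import Parameters(importtype, profilename, filename, waitforcompletion) Values('InterfaceTables', 'SAMPLE_APP', '', 'true');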

From the command line the following can be run

E:\Oracle\Middleware\EPMSystem11R1\products\Foundation\BPMA\EPMABatchClient\epma-batch-client.bat -C"F:\Scripts\Import.txt" -R"F:\Scripts\Import.log" -Uadmin -Ppassword


 Which produces an output log of


The output log is exactly what is output to the command window; as you can see on one of the lines it completed with a code of 0, which means it was successful.

There is a list of return codes available here

If I update the script to put in an invalid profile name and run the script again the output would be


When using a batch script to call the EPMA batch client it is possible to check the return codes and act on them; for instance, using the command line in Windows you could use something like the following (remembering that IF ERRORLEVEL n is true when the return code is greater than or equal to n, so the highest codes should be checked first)

IF ERRORLEVEL 407 goto IMPORT_FAILED
IF ERRORLEVEL 0 goto COMPLETED_SUCCESS



:COMPLETED_SUCCESS
do something....

:IMPORT_FAILED
do something....



The output log also has a link to the job console where the status of the import can be viewed.

Now that the metadata has been successfully loaded into EPMA, it is possible to deploy the application to planning and push the outline changes to essbase.

The commands available are “Execute Deployment” and “Execute Redeployment”.

The deployment command has the following format if being used with planning

Execute Deploy Parameters (ApplicationName, InstanceName, ApplicationServer, HubProject, ClearAll, CheckIntegrity, waitforcompletion, purgeTransactions,  deployOption, datasourceName,CreateOutline,RefreshOutline,CreateSecurityFilters,SharedMembersSecurityFilters,ValidateSecurityFilterLimit,Notes)
Values(….);


ApplicationName—The name of an existing application to deploy to.

InstanceName—Name of the instance to deploy to.

ApplicationServer—Name of the application server to deploy to.

HubProject—Shared Services project to use.

ClearAll— This is for HFM

CheckIntegrity—This is for HFM

PurgeTransactions—Purges the transaction history. A value of True will purge the history, and a value of False will not.

deployOption—Specifies the type of deployment that you want to perform, available values are
  • AppView—Deploy the application only. (This setting is the default.)
  • Rules—Deploy calculation rules.
  • All—Deploy calculation rules and the application.
WaitForCompletion—If set to True, the Batch Client waits for the job to finish. If set to False, the Batch Client submits the job and continues.

datasourceName—The data source to be used for the deployment

CreateOutline—Creates the Essbase outline, values : True or False

RefreshOutline—Refreshes the essbase database outlines, values: True or False

CreateSecurityFilters— Create essbase database filters based on access permissions in planning, values: True or False

SharedMembersSecurityFilters—Evaluates access permissions that are set for all instances of the member (base and shared) and applies the least restrictive access permission to them all, values: True or False

ValidateSecurityFilterLimit—Identifies security filters that exceed the Oracle Essbase security filter limit of 64 KB per row, values: True or False

Notes—Optionally stores any comments or notes that you may want to add regarding the deployment.

I find that the easiest way to find out what values to put against the parameters is to initiate a deployment of the application from EPMA.


The deployment window gives you a good indication of the values to populate the command with.

Execute Deploy Parameters(ApplicationName, InstanceName, ApplicationServer, HubProject, ClearAll, CheckIntegrity, waitforcompletion, purgeTransactions,  deployOption, datasourceName,CreateOutline,RefreshOutline,CreateSecurityFilters,SharedMembersSecurityFilters,ValidateSecurityFilterLimit,Notes) Values ('EPMASAMP', 'Default', 'Default', 'Planning', 'false', 'false', 'true', 'true', 'AppView', 'EPMASAMP','false','true','false','false','false','EPMASAMP deployment');
 

 This command can then be added to the existing script and executed.


 The deployment was successful and once again a link is provided to view the deployment status in the job console.


If there are any issues with the deployment then they will be logged.


A return code of 405 is generated, which could be acted upon if the process is being scripted.

In the next part of this series I will look at using ODI to control the running of the batch client and provide relevant error handling steps; this will then be added to the package created in the last instalment to provide a fully automated EPMA interface load and deploy process.