Sunday, 11 December 2011

Loading to EPMA planning applications using interface tables – Part 6

In the last part of the series I went through the steps of using the command line driven EPMA batch client to import from interface tables and then deploy to planning.

Today I am going to look at using ODI to control the running of the batch client and add in some error handling functionality. Unfortunately there are no adaptors or APIs available for EPMA in ODI, so I am going to use the OS command tool to run the EPMA batch client.

If you look back to part 4 of this series I had created a number of interfaces that loaded the planning metadata into the EPMA interface tables.


I ended up with the above package which I will expand on to bring in the batch client functionality.

Usually when designing packages or using tools like the OS command I would add in control using variables rather than hardcoding values, but for today’s exercise I will keep it simple, which should make it easier to read.

I must also point out that my methods are not the only way of implementing this, and you may well think there are better ways of achieving the end result; that is the beauty of using ODI, as there is not always a single right or wrong method. If you do feel you have come up with a better solution then feel free to get in touch to discuss.

If you are going to use the OS command with the EPMA batch client then the ODI agent will require access to the client. If you look back at the last part I gave advice on how to install the client on a supported OS, so you have the option of either installing the client on a machine with an agent or installing an agent on the machine hosting the client.


If you drag the OS command tool onto the diagram in a package you will see there are a number of parameters to set. The most important parameter is “Command to execute”, which is basically what you want ODI to execute at the command line.

In the last part I executed the following from the command line to import metadata from the interface tables to EPMA.

E:\Oracle\Middleware\EPMSystem11R1\products\Foundation\BPMA\EPMABatchClient\epma-batch-client.bat -C"F:\Scripts\Import.txt" -R"F:\Scripts\Output.log" -Uadmin -Ppassword

I am going to use the same import script and log so it is just a matter of transferring the command line to the parameter value.



The working directory has been set to the location of the batch client.

The command to execute calls the client and passes in parameters for the script to use, the log to output to and the account details.
Synchronous is set to Yes so that ODI will wait for a return code before continuing.

I have left the other parameters such as Output file and Error file blank because the batch client will write all the information to the file Output.log.
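To summarise the settings from the screenshot, the OS command step ends up configured along these lines:

Command to execute : E:\Oracle\Middleware\EPMSystem11R1\products\Foundation\BPMA\EPMABatchClient\epma-batch-client.bat -C"F:\Scripts\Import.txt" -R"F:\Scripts\Output.log" -Uadmin -Ppassword
Working directory  : E:\Oracle\Middleware\EPMSystem11R1\products\Foundation\BPMA\EPMABatchClient
Synchronous        : Yes
Output file        : (blank)
Error file         : (blank)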


The EPMA batch client script is the same as in the last part of the series; it logs in and imports from the interface tables to EPMA based on the profile “SAMPLE_APP”.
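For reference, a cut-down sketch of what the import script looks like is below; the server URLs are just placeholders and the exact parameter list should be checked against the batch client documentation for your release (the credentials come from the -U and -P switches on the command line):

Set bpmaserverurl=http://epmaserver:19091/hyperion-bpma-server;
Set workspaceurl=http://epmaserver:19000/workspace;
Login;
Execute Import Parameters(importtype, profilename, filename, waitforcompletion) Values('InterfaceTables', 'SAMPLE_APP', '', 'true');
Logout;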

If you wanted to get clever you could actually create the script on the fly but that is not the objective for today.


As I mentioned earlier I would normally use variables like the example above; these can be set at runtime and make the integrations much more versatile.
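As an illustration, assuming project variables had been declared for the paths and credentials (the names below are just examples), the command to execute could be written along the lines of:

#CLIENT_PATH\epma-batch-client.bat -C"#SCRIPT_PATH\Import.txt" -R"#LOG_PATH\Output.log" -U#EPMA_USER -P#EPMA_PASSWORD

The values could then be supplied or refreshed at runtime, for example when the scenario is executed.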

So there we go, end of blog? Not so quick: what happens if there are errors in the output file generated by the batch client?

There are a number of different factors which can generate errors or failures.


A fatal error will cause the package to fail and in the example above I changed the working directory to an invalid location.

Fatal errors should be rare once you have defined all the correct parameters and can be acted upon easily, e.g. send an email if the OS command fails or simply check in the Operator.

If, say, an incorrect password or profile is used then the step will not fail.


In the example above I changed the import profile in the script to an invalid one; as you can see the step is successful, but viewing the log shows that it has actually failed.

Now there are lots of different ways of dealing with this error, e.g. creating a batch script with error handling, reading the return code, parsing the file….

I am going to use a different method to the ones mentioned; whether this is the right option or the best option I don’t think I can answer, and I am sure somebody will come up with an annoying best practice statement to try and define what is correct.

If you take a look at the log file you will see that it is in a fixed format, meaning the start position of each column is always the same.

Now the idea is to load this log file into a table and then query it to see if there was a failure, which is indicated by an ERROR line in the log.

First of all I created a new file Data Store and pointed it to the Output.log file.


The File Format was set to Fixed with no header line.


As the columns are fixed I could set the Start position of each column; I set the message column to a length of 200, which should be plenty.


Viewing the Data Store shows that the columns are being separated correctly.

The next step was to create a simple interface that loads the log information from the file into a table.


The interface uses “LKM File to SQL” and “IKM SQL Control Append” with the option to truncate the table each time it is run.

Now that the log information is in the table, I created a refreshing variable to check if there was a failure.


When executed, the number of errors will be stored in the variable; if the number is greater than zero then the script failed.
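As a rough sketch, assuming the log table and columns were named something like EPMA_BATCH_LOG, LOG_TYPE and LOG_MESSAGE (your names will no doubt differ), the refreshing SQL would be along the lines of:

-- count the ERROR lines loaded from Output.log
SELECT COUNT(*)
FROM   EPMA_BATCH_LOG
WHERE  LOG_TYPE = 'ERROR'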


These are the steps in the package up to now. The first step uses the File delete tool to delete the output log; this is because the EPMA batch client appends to the log and only the information from the last run is required.
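The file delete step uses the OdiFileDelete tool, so the generated call would be something along these lines (setting NOFILE_ERROR to NO means the step will not fail on the first run when the log does not yet exist):

OdiFileDelete "-FILE=F:\Scripts\Output.log" "-NOFILE_ERROR=NO"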

The next step uses the OS command tool to call the EPMA batch client with the defined script, the output log is then loaded to a table and the refreshing variable stores the number of errors.

The variable holding the number of errors can now be evaluated to see if the value is greater than zero.


If the value is greater than zero the step will complete with a success, otherwise it will produce a failure, so the outcome of the evaluation defines how to proceed.

I have decided that if there has been a failure then I want to store the failure message and then raise an error with the stored message which will cause the package to fail.




Another refreshing variable was created which queries the table holding the log information and stores the error message.
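Using the same made-up table and column names as before, the refreshing query would look something like the following (ROWNUM assumes an Oracle work schema, so adjust for other databases):

-- grab the first ERROR message from the loaded log
SELECT LOG_MESSAGE
FROM   EPMA_BATCH_LOG
WHERE  LOG_TYPE = 'ERROR'
AND    ROWNUM = 1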


A new procedure was created with a step using Jython as the technology.
An error is raised with the message which has just been stored using the refreshing variable.
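A minimal sketch of the Jython step, assuming the message was stored in a variable I will call V_ERROR_MESSAGE (the name is just an example), would be:

# fail the package and surface the message captured from the batch client log
raise Exception("EPMA batch client error: #V_ERROR_MESSAGE")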

These steps are then added to the package; to add extra functionality an additional step could easily be inserted to send out a failure email and attach the output log.


If there are any fatal errors in the log then the package will fail with a message informing of the reason.

You could move straight on to deploying the EPMA application to planning using the same type of logic that I have used up to now in the package, but there is the possibility that there were no fatal errors loading from the interface tables yet normal errors were still generated; if there were errors you might not want to deploy the application just yet.


In this example I changed a member’s property so that it was set to a non-existent Smart List and ran the package again; the package was successful but there is an error in the log.

I don’t want the package to fail and I don’t want the application to be deployed; instead I want an email sent out informing of the error.


Time for another refreshing variable, which will look for the line with the Import details and store the number of errors; I am only storing the first digit as I am only interested in knowing whether it is greater than zero.
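Purely as an illustration (the text to search for and the substring positions depend entirely on how the import summary line is worded in your Output.log), the refreshing SQL could follow a pattern such as:

-- illustrative only: pull the first digit of the error count from the import summary line
SELECT SUBSTR(LOG_MESSAGE, INSTR(LOG_MESSAGE, ':') + 2, 1)
FROM   EPMA_BATCH_LOG
WHERE  LOG_MESSAGE LIKE '%Number of errors%'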

As previously, the variable can be evaluated and then steps added depending on whether the result is true or false.


So now the log will be checked to see if there are any import errors. If there are errors then an email is sent attaching the log for information and the integration ends; if there are no import errors the EPMA application is deployed to planning using an OS command.

I am not going to cover the deployment as it uses the same logic as the import and in the last part of the series I went through how to script it.




I added in the metadata load interfaces and some additional error logging around the deployment from EPMA to planning, which means in the end you have a solution that will take metadata from a source and end up deploying to planning/essbase with relatively little coding, and that can then be easily scheduled or executed from the web using the console.

Obviously you can make it a lot more sophisticated with added functionality and make more use of variables but hopefully you get the idea.

If the package does get complex and you are using ODI 11.1.1.5+ then you could look into whether using load plans would add any benefit.

I am going to leave it there for today; this was going to be the last part but I have had a few queries around using the interface tables which I may cover if I find the time.

5 comments:

  1. ***** (five star) post!

    This really helps! Thanks for sharing :)

  2. John,
    First of all, excellent information you have cooked up for all of us.
    A question: were you able to find the parameter for the Interface LoadID?
    I ended up raising a Service Request with Oracle related to this.
    If you know by this time let us know, otherwise I will comment back on this blog once I get information on the Interface LoadId parameter for Execute Import using the EPMA batch client. I have to load multiple dimensions; for now I have a workaround, which is to manage things so that I only have one dimension's info in the interface tables at a time.

  3. A Great blog, helped me finish my task. Thanks for your detailed info.
    Blog is exceptional.

  4. Hi John,

    Thanks for the great blog. I have completed a full integration just as you did, only to get a shock. What if a user logs in to EPM workspace and decides to delete a member from the shared library? Is there an automated way to reflect this back to ODI so as to ignore this member in the next load to the shared library?

    Cheers,

  5. Hi John,

    Please can you tell me where Hyperion stores the shared library dimension hierarchies, since they don't go to Essbase?

    Cheers,

