Sunday, 27 November 2011

Loading to EPMA planning applications using interface tables – Part 5

In the last part of the series I had pretty much achieved what I set out to do, which was to load planning metadata from a source into interface tables and then push it into EPMA. The last step of moving the metadata from the interface tables to EPMA was a manual process, and in an ideal world you would want this to be automated.

Today I am going to look at automating the process using the EPMA batch client, and then move on to using ODI to take control of the client.

To be honest I am not a big fan of the EPMA batch client; when you compare it to, say, the outline load utility it seems so far behind.

I feel that Oracle really needs to develop an API or improve the web services to control EPMA functionality.

When Oracle first released EPMA there was documentation available on the web services, but this seems to have disappeared even though the product still uses web services behind the scenes.
Enough of my moaning and let’s get on with the batch client.

The client is usually available on the machine hosting EPMA e.g. on 11.1.2.1 <MIDDLEWARE_HOME>\EPMSystem11R1\products\Foundation\BPMA\EPMABatchClient\


It is possible to install the client on a machine of your choice but you will need a number of installation files.
  • Oracle Hyperion Enterprise Performance Management System Installer, Fusion Edition Release 11.1.2.1.0
  • Oracle Hyperion Foundation Services Release 11.1.2.1.0 Part 1 of 7
  • Oracle Hyperion Foundation Services Release 11.1.2.1.0 Part 5 of 7
  • Oracle Hyperion Enterprise Performance Management Architect, Fusion Edition Release 11.1.2.1.0


The documentation for the client is available here

The client is launched by running epma-batch-client.bat, either interactively from the command line or with a script.

 

Using the interactive command line is a good way to make sure you are using all the correct commands and have set all the parameters correctly before putting them into a script.

My intention is to use a script, as the interactive command line takes us back into the manual world again.

There are a number of command line arguments that can be used with the client, and if you call it with -H it will display help on the available arguments.


The arguments I am interested in using are

-C is the filename of a script to be used e.g. -C"F:\Scripts\Import.txt"

-R is the filename to output the results of the script to e.g. -R"F:\Scripts\Import.log"

-U is the username to log into EPMA e.g. -Uadmin

-P is the password to log into EPMA e.g. -Ppassword

That is straightforward enough, so moving on to the script: there are two important variables that need to be set before using any client commands.

set bpmaserverurl=http://<servername>/hyperion-bpma-server;
set workspaceurl=http://<servername>:<port>/workspace;


Basically these system variables need to be set to tell the client the location of workspace and EPMA; if you don’t set them the script will fail with a message informing you they are missing.
It is important to terminate each command with a semicolon.

The next step in the script is to log into EPMA and as I am passing the username and password into the client I only need to use the following

Login;

Once logged in I want to import the metadata from the interface tables and push it into EPMA, which is possible using the “Execute Import” command.

The format of the command is

Execute Import
Parameters(importtype, profilename, filename, waitforcompletion) Values('', '', '', '');


importtype – set the type of import and the available values are
  • FlatFile
  • InterfaceTables

profilename—The name of an existing import profile

filename – If using a flat file as the import type then this is the name of the import file; for interface table imports it can be left blank.

waitforcompletion - If set to true, the Batch Client waits for the job to finish. If set to false, the Batch Client submits the job and continues.

Now this is where I was a bit confused: all through this series I have been using a Load ID with the interface tables, and even when you import manually through EPMA web it requires a Load ID, yet it is nowhere to be seen in the command parameters. Unless I am missing something obvious, how is it possible to set which Load ID to use, and what does the client use as a default?

As I only have one Load ID at present I will just carry on and populate the parameters with known values; the importtype will be InterfaceTables and the profilename I have been using in this series is SAMPLE_APP.

Execute Import Parameters(importtype, profilename, filename, waitforcompletion)Values('InterfaceTables', 'SAMPLE_APP', '', 'true');


So the script to import from interface tables to EPMA is
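Reassembled from the commands covered above, the contents of Import.txt would look something like this (the server names are placeholders for your own environment, and the closing Logout is my own optional housekeeping):

set bpmaserverurl=http://<servername>/hyperion-bpma-server;
set workspaceurl=http://<servername>:<port>/workspace;
Login;
Execute Import Parameters(importtype, profilename, filename, waitforcompletion) Values('InterfaceTables', 'SAMPLE_APP', '', 'true');
Logout;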

From the command line the following can then be run

E:\Oracle\Middleware\EPMSystem11R1\products\Foundation\BPMA\EPMABatchClient\epma-batch-client.bat -C"F:\Scripts\Import.txt" -R"F:\Scripts\Import.log" -Uadmin -Ppassword


 Which produces an output log of


The output log contains exactly what is output to the command window; as you can see, one of the lines shows the script completed with a code of 0, which means it was successful.

There is a list of return codes available here

If I update the script to use an invalid profile name and run the script again, the output would be


When using a batch script to call the EPMA client it is possible to check the return codes and act on them; for instance, on Windows you could use something like

REM IF ERRORLEVEL n is true when the return code is n or higher,
REM so test for the failure code before falling through to success.
IF ERRORLEVEL 407 goto IMPORT_FAILED
IF ERRORLEVEL 0 goto COMPLETED_SUCCESS

:COMPLETED_SUCCESS
do something....
goto :EOF

:IMPORT_FAILED
do something....



The output log also has a link to the job console where the status of the import can be viewed.

Now that the metadata has been successfully loaded into EPMA then it is possible to deploy the application to planning and push the outline changes to essbase.

The commands available are “Execute Deployment” and “Execute Redeployment”.

The deployment command has the following format if being used with planning

Execute Deploy Parameters (ApplicationName, InstanceName, ApplicationServer, HubProject, ClearAll, CheckIntegrity, waitforcompletion, purgeTransactions,  deployOption, datasourceName,CreateOutline,RefreshOutline,CreateSecurityFilters,SharedMembersSecurityFilters,ValidateSecurityFilterLimit,Notes)
Values(….);


ApplicationName—The name of an existing application to deploy to.

InstanceName—Name of the instance to deploy to.

ApplicationServer—Name of the application server to deploy to.

HubProject—Shared Services project to use.

ClearAll— This is for HFM

CheckIntegrity—This is for HFM

PurgeTransactions—Purges the transaction history. A value of True will purge the history, and a value of False will not.

deployOption—Specifies the type of deployment that you want to perform, available values are
  • AppView—Deploy the application only. (This setting is the default.)
  • Rules—Deploy calculation rules.
  • All—Deploy calculation rules and the application.
WaitForCompletion—If set to True, the Batch Client waits for the job to finish. If set to False, the Batch Client submits the job and continues.

datasourceName—The data source to be used for the deployment

CreateOutline—Creates the Essbase outline, values : True or False

RefreshOutline—Refreshes the essbase database outlines, values: True or False

CreateSecurityFilters— Create essbase database filters based on access permissions in planning, values: True or False

SharedMembersSecurityFilters—Evaluates access permissions that are set for all instances of the member (base and shared) and applies the least restrictive access permission to them all, values: True or False

ValidateSecurityFilterLimit—Identifies security filters that exceed the Oracle Essbase security filter limit of 64 KB per row, values: True or False

Notes—Optionally stores any comments or notes that you may want to add regarding the deployment.

I find that the easiest way to find out what values to put against the parameters is to initiate a deployment of the application from EPMA.


The deployment window gives you a good indication on the values to populate the command with.

Execute Deploy Parameters(ApplicationName, InstanceName, ApplicationServer, HubProject, ClearAll, CheckIntegrity, waitforcompletion, purgeTransactions,  deployOption, datasourceName,CreateOutline,RefreshOutline,CreateSecurityFilters,SharedMembersSecurityFilters,ValidateSecurityFilterLimit,Notes) Values ('EPMASAMP', 'Default', 'Default', 'Planning', 'false', 'false', 'true', 'true', 'AppView', 'EPMASAMP','false','true','false','false','false','EPMASAMP deployment');
 

 This command can then be added to the existing script and executed.


 The deployment was successful and once again a link is provided to view the deployment status in the job console.


If there are any issues with the deployment then they will be logged.


A return code of 405 is generated, which could be acted upon if the process is being scripted.
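Following the same pattern as the import example earlier, the deployment failure could be trapped in the calling batch script; a minimal sketch (remembering that IF ERRORLEVEL matches the given code or higher, so the highest codes are tested first):

IF ERRORLEVEL 407 goto IMPORT_FAILED
IF ERRORLEVEL 405 goto DEPLOY_FAILED
IF ERRORLEVEL 0 goto COMPLETED_SUCCESS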

In the next part of this series I will look at using ODI to control the running of the batch client and provide relevant error handling steps; this will then be added to the package which was created in the last instalment to provide a fully automated EPMA interface load and deploy process.

Saturday, 5 November 2011

Loading to EPMA planning applications using interface tables - Part 4

A bit later than planned, here is the fourth instalment on using EPMA interface tables. At the end of the last part the import profile had been created and the tables were ready to be populated, so today I will go through the process of using ODI to populate the interface tables.

Now you don’t have to use ODI, as other methods could be used to populate the tables, but I am going to use it because it offers such flexibility around transformations and automation in a controlled environment.

At the start of this mini-series I said I was going to try and approach it from the angle of a classic planning application and convert this into an EPMA world; basically I am going to take the classic planning sample application and transform it to be used in an EPMA model. I think the saying “a picture is worth a thousand words” is going to be very relevant for this part of the series as there are going to be lots of screenshots.

I am using this approach as it is more than likely you will be accustomed to the format required for a classic planning application, and it does need transforming before it can be loaded into EPMA, so this is where ODI comes into its own; it will also provide an insight into the differences in member property naming conventions between classic and EPMA.

First of all I am going to extract the metadata from a classic sample application using the outline load utility; I have covered using the utility to extract dimensions in a previous blog, though it is worth noting this extract functionality is only available from 11.1.2.1.


Using the utility is very straightforward and only requires one command line per dimension; I extracted all the dimensions for completeness even though I am only interested in Account, Entity, Scenario, Version and Segments.
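For reference, an export for a single dimension would look something like the sketch below; the application name and paths are my own examples, and the switches (/E writes the extract file, /D names the dimension, and the utility prompts for the password) should be double-checked against the outline load utility documentation for your release:

OutlineLoad /A:PLANSAMP /U:admin /D:Account /E:F:\Scripts\account.csv /L:F:\Scripts\outlineload.log /X:F:\Scripts\outlineload.exc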


The files produced are in CSV format and can be opened in a text editor or Excel; if you do open a file in Excel, watch out for member formulas as they do cause formatting issues.
The extract produced by the utility seems to add a leading space to every item after the first column, which means that in the ODI integration all the columns will need to be trimmed.

Now that all the source files are in place we can look to ODI to transform and load the metadata to the interface tables.

I will go through the step-by-step process of achieving this in ODI, though there is going to be an assumption that you are familiar with the core principles of ODI.
I will be using Oracle as the technology for the interface tables but the process will be similar if you are going to use something like SQL server.

A brief description of the process: define the physical/logical information in the topology manager, create the models, design the interfaces to load from the flat files to the EPMA interface tables, and then bring it all together with a package.


First the physical information needs to be defined for the flat files and the target EPMA tables.
A Physical Schema was added to the file technology.


The directory where all the source flat files are held is manually entered in the schema boxes.


The physical schema always needs to be mapped to a context and a logical schema, the context I am using is called DEMO and I give the logical schema a name of CLASSIC_FILE_EXTRACTS.


A new Data Server using the Oracle technology is created, and the username/password to connect to the interface tables is entered.

 The JDBC connection information to the interface tables is added.


From the dropdowns the schema where the interface tables are held is selected; to make it nice and simple the schema is called INTERFACE.


Once again the DEMO context is selected and a logical schema name of EPMA_INTERFACE is applied.
This is all that needs to be done in the topology so I can move on to the Designer.


A new model is created for the flat files.


 The Model is associated with the file technology and assigned the logical schema that was created in the topology navigator.


A Datastore is manually created for each of the flat files, as it is not possible to reverse all the files in a Model in one step; the following steps are for the file holding the account metadata.


If the file is accessible by ODI then it is possible to browse to it; if it is not, the filename has to be entered manually.


The format of the file is defined by setting the heading lines to a value of 1, field separator of comma and a single quote text delimiter.


If the file can be accessed by ODI then all the columns can be reversed; if it cannot, it is easier to temporarily place the file in a location accessible to ODI and then reverse it, instead of having to manually create all the column information.
I manually updated the lengths to 80 for the member/alias columns and increased the length of the Formula column.


The process is repeated for the other flat files that are going to be used to load metadata to the EPMA tables.


A new model is created to hold the information for each of the EPMA tables.


Oracle technology is selected and assigned to the logical schema that was created earlier in the topology navigator.


In the reverse-engineering section the target EPMA tables that are going to be used in the integrations are selected, these tables were created in part two of this series.


 The tables and column definitions are automatically reversed.

Interfaces can now be built as the source and target Datastores have been defined.


I am just going to cover the Account interface as it is probably the most complex of all the dimension builds and gives a good overview.

If the source file did not need any transformation it would just be a case of adding the file to the source mapping area of the interface, adding the EPMA interface table as the target, applying the one-to-one mappings and finishing off by setting the Knowledge Modules and options to be used.

This would have been the case if the classic planning member properties matched those of an EPMA-enabled planning application; I have no idea why they don’t match, as aligning them seems like an obvious decision that would have been made when EPMA was being developed.

It is also worth noting that EPMA is very strict about the naming convention for the member properties and a slight difference such as using lower case instead of upper case for the first character can cause problems.


Above are the completed mappings from source to target, and I will now go through each of the columns to explain the mappings.


In the second part of this series I went through the LOADID; this numerical value defines which records to load, and as it can change I created an ODI variable which can be applied to all the interfaces and set at runtime.
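As a sketch, if the variable were called LOAD_ID (the name is my own choice, not from the series), the LOADID target column mapping would simply be the variable reference:

#LOAD_ID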


All of the columns in the mappings use the TRIM function to remove the leading space which the outline load utility adds to the source by default.

In the PARENT mapping a CASE statement is used to change the root member from Account to #root; this is required by EPMA, and if it is not used the root member will fail to load, causing all of its descendants to fail due to a missing parent.
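As a sketch, the PARENT mapping would look something like this (the ACCOUNT alias and column names are from my own source Datastore):

CASE WHEN TRIM(ACCOUNT.PARENT) = 'Account' THEN '#root' ELSE TRIM(ACCOUNT.PARENT) END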


CHILD, ALIAS and DESCRIPTION are a simple straight mapping.


The ISPRIMARY column requires a value of 1 to define the member as a primary member or 0 to define it as a shared member.

The datastorage column of the source file holds the information on whether the member is shared or not, via the property value of shared.
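A sketch of the ISPRIMARY mapping along those lines (again the alias and column names are my own):

CASE WHEN LOWER(TRIM(ACCOUNT.DATASTORAGE)) = 'shared' THEN 0 ELSE 1 END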

The SORTORDER column I have left unmapped for the time being because the rows in the source file are in the correct order.


As the DATASTORAGE property will apply to each of the interfaces I created an ODI function to handle the transformation.


The classic properties for DataStorage are on the left and the EPMA ones are on the right and as you can see there is a slight variation.


The function takes a single input DataStorage value and converts it from classic to EPMA.
This function can then simply be applied to the DataStorage target mapping in each of the dimension load interfaces.
If you are new to ODI functions then it may be worth having a look at the article I wrote about them in the past.
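For anyone after a starting point, a minimal sketch of an Oracle implementation for a function like this is below; the classic-to-EPMA value pairs shown are illustrative only and should be verified against the property values in your own EPMA dimension library:

CASE LOWER(TRIM($(DataStorage)))
WHEN 'dynamic calc' THEN 'DynamicCalc'
WHEN 'dynamic calc and store' THEN 'DynamicCalcAndStore'
WHEN 'never share' THEN 'NeverShare'
WHEN 'label only' THEN 'LabelOnly'
WHEN 'shared' THEN 'Shared'
ELSE 'StoreData'
END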

 

I would usually create a function for converting the Data Type as well, since it is used in all dimensions, but this time I was just a little bit lazy.

The main differences between classic and EPMA were that the initial character needed capitalising, for which I used the INITCAP function, and that non-currency needed converting to NonCurrency.

The biggest difference between the versions is enumeration in classic and SmartList in EPMA; I do agree that SmartList is a much more user-friendly naming convention, and enumeration stems from the earlier days of planning.
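As a sketch, the whole Data Type mapping could be handled with one expression along these lines (the alias and column names are from my own layout; anything INITCAP handles correctly falls through to the ELSE):

CASE
WHEN TRIM(ACCOUNT.DATATYPE) = 'non-currency' THEN 'NonCurrency'
WHEN TRIM(ACCOUNT.DATATYPE) = 'enumeration' THEN 'SmartList'
ELSE INITCAP(TRIM(ACCOUNT.DATATYPE))
END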


There are a number of member properties in classic that are exported as true/false, and in the EPMA interface world the table definition is set up as numeric, so it requires a 1 for true and a 0 for false.


Instead of having to keep repeating mappings for the different columns that require this logic, I once again created an ODI function to handle them.
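A minimal sketch of what the Oracle implementation of such a function could look like, assuming a single input parameter (the parameter name is my own):

CASE WHEN LOWER(TRIM($(BoolValue))) = 'true' THEN 1 ELSE 0 END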



Like most of the mappings, there are a number of different ways to transform the data and get the same result; for the formula I removed the quotes from the beginning and end of the syntax and then replaced the double quotes with single quotes.
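One possible Oracle expression for that (the inner TRIM strips the leading space, TRIM(BOTH ...) removes the outer double quotes, and REPLACE swaps any remaining double quotes for singles; the column name is from my own layout):

REPLACE(TRIM(BOTH '"' FROM TRIM(ACCOUNT.FORMULA)), '"', '''')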



SMARTLIST, UDA and PLAN1AGGREGATION were straight mappings where the source was trimmed.

MEMBERVALIDFORPLAN1 was a conversion from true/false to 1/0 so I reused the CONVERT_BOOLEAN function again.

As there is only one plan type in the application, the other plan type columns did not require a mapping; which target columns need to be mapped depends on how many plan types there are in an application.



The differences between classic and EPMA naming conventions are the weighted averages and capitalisation; the mapping could also have been achieved using a CASE statement instead of the combination of functions I used.



Skip Value is a strange one: in classic it is zeros and in EPMA it is Zeroes; both are correct spellings, but it is just typical that they would differ.



The main difference is non-expense in classic and NonExpense in EPMA, but a simple CASE statement takes care of that.
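For example (expense passes through INITCAP untouched, so only non-expense needs special handling; the alias and column names are my own):

CASE WHEN TRIM(ACCOUNT.VARIANCEREPORTING) = 'non-expense' THEN 'NonExpense' ELSE INITCAP(TRIM(ACCOUNT.VARIANCEREPORTING)) END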



Very similar between classic and EPMA; it is mainly just the first character that needs to be a capital in EPMA, and once again a CASE statement could have been used instead of DECODE.




The only difference is that the first character needs to be capitalised in EPMA.


In classic the source plan type is the actual name of the plan type, but in EPMA it follows the format of Plan1, Plan2, Plan3, PlanCX, PlanWF.
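A sketch of that mapping, assuming the classic application has a single plan type called Consol (swap in your own plan type names and positions):

CASE WHEN TRIM(ACCOUNT.SOURCEPLANTYPE) = 'Consol' THEN 'Plan1' ELSE TRIM(ACCOUNT.SOURCEPLANTYPE) END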


As the number of records to be loaded is small, I find it perfectly fine to use “LKM File to SQL” instead of a more heavyweight KM like “LKM File to Oracle (SQLLDR)”.

The KM selected to load from the staging area to the interface table was “IKM SQL Control Append”, with no need for flow control, and I selected the option to truncate the table; if there were metadata in the table against a different load ID, truncate wouldn’t have been used and the records matching the current load ID would need to be deleted first.

That completes the Account dimension load from a file to the account interface table, and the next step is to build interfaces for the other dimensions. I am not going to go through them as the logic is pretty much the same, though I will show screenshots of each of the interfaces for completeness.

Entity



Scenario


Segments


Version

Now to put it all together a package can be used to step through each of the interfaces.



The first step sets the load ID to a value of 1; this could actually be done at run time, and instead of setting the variable it could just be declared.


All the interface tables have been successfully populated, but that is not quite the end, as the metadata still has to be imported into EPMA.


For now I am going to manually import the dimensions, and this is done from the EPMA dimension library.


From the Type drop-down list “Interface Tables” is selected, along with the profile which was created in the last part of this series.

The interface Load ID is also required and set to 1 as that is the number that has been used throughout this series.

 

So there we go: all the dimensions are now populated from the interface tables; if there are any errors they will be logged in the job console.

The application would then need to be deployed to push down the information to planning and essbase.

In reality you wouldn’t want to manually run the import so in the next part of the series I will look at automating this process.