Monday, 20 November 2017

FDMEE/Data Management – common questions part 1

There have been a few recurring questions raised recently around FDMEE/Data Management, so I thought I would collate some of them and put a few posts together to address them. This way it saves me having to repeat the same response, as I can instead refer to these posts.

The first question is:

Is it possible to have EPM cloud to cloud or hybrid integrations where the target application is the same name as one that has already been registered?

I have written about EPM cloud to cloud integrations and on-premise hybrid functionality in the past; the functionality works well until you try to add an application with the same name as one that is already registered.


This has definitely caused problems: if you wanted to move data from, say, a test to a production cloud instance, where typically the application name would be the same, it wasn't possible, which rendered the cloud to cloud functionality useless.

It looks like Oracle finally recognised this design flaw, and from the 17.10 EPM Cloud release it is now possible to register a duplicate application name. Unfortunately this has not made it down to on-premise yet, so we can only hope it arrives in the next patch set, whenever that may be.

The duplicate target application name functionality in the cloud works against remote cloud connections or a local target application.

You may ask why you would want to add a duplicate local application, so I am going to go through an example of why you might want to do this, which also addresses another question that I have seen raised quite often.

Is it possible to have the period and year in the rows of source data and not have to drive the data load by the period in the POV or start/end period?

I have already written about the functionality to load data when the period and year are contained in a source file, the problem with this functionality is that it is still driven by the period in the POV when importing from the workbench or by the start/end period when running a data load rule.

The way around this is to change the period/year dimensions to a generic class and then define a data table column name for them in the target application dimension details. The problem is that this would affect all the integrations, and you couldn't have a mixture of loads driven by the POV and loads driven by the period information in a source file. The ability to add a duplicate application allows you to have both methods operate in tandem.

So let me go through a simple example which uses the duplicate target application functionality with the ability to drive a data load from the period information contained in a source file.

First, I am going to add a new local target application, the concept would be the same if adding a cloud target application.


Before the 17.10 release you would select the application type and then application name.


From 17.10 there is an option to define a prefix.


The prefix can be up to 8 characters long.


The application is added with the prefix.

In the original target application, the period dimension is assigned as a period dimension class and the year as a year dimension class, this means any of the existing integrations are driven by the POV.


The existing integrations will be unaffected by the duplicate application.

For the duplicate application, the Period dimension is set as a generic dimension class with the data table column assigned to UD4; the Year dimension is also set as generic with UD5 as the data table column name.


Now a period mapping is created where the period name does not relate to a period or year and can be used in the POV.

At first, I created it under the new duplicate target application.


After testing I found that the period was not picked up in the POV and the values were read from the original target application; I also found errors were generated when using period mappings for the duplicate application.

I removed the period mapping from the duplicate application and added one to the original application.


The target period month and target year are not important, but they are required for the mapping to validate.

The source csv file I am going to load contains both the periods and years I want the data to be loaded to.
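To illustrate the layout, a file of this type might look something like the following (the member names and values here are made up for illustration, not taken from the actual file):

```
Account,Period,Year,Amount
P_110,Jan,2017,1000
P_110,Feb,2017,1250
P_120,Jan,2018,500
```

The point is that each row carries its own period and year, so the load is not tied to a single POV period.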


On to the import format, the duplicate application name is selected as the target, the mappings include the period and year dimensions because they were defined as generic in the target application dimension details.




There is nothing to note in the location definition except that the import format is selected.


The POV is set to the new location and the period to “No Per/Yr” which was created earlier.


A new data load rule is created and the uploaded source csv file selected; the remaining properties were left as default, meaning this rule uses the numeric data only load method.


You will notice that the data load mappings also include the period and year dimensions, usually these dimensions would be mapped by the period mapping but as we are using a different method these are going to be controlled by the data load mappings.


For this simple example, all mappings except for the year dimension were created as like-for-like.


The source file contains the year in the style of "YYYY", so in this example "2017", and it needs to be mapped to the target planning application format of "FYYY". For this I used a FORMAT mapping type which prefixes the target with "FY", ignores the first two characters in the source and includes the third and fourth characters.
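The logic the FORMAT mapping performs can be sketched in a couple of lines of Python (the mask syntax itself belongs to FDMEE; this is just an illustration of the transformation):

```python
def map_year(source_year):
    # mimic the FORMAT mapping: prefix "FY", drop the first two
    # characters of the source year, keep the third and fourth
    return "FY" + source_year[2:4]

print(map_year("2017"))  # FY17
print(map_year("2018"))  # FY18
```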

Now to load the source data through the workbench.


The source data has been successfully imported and mapped, you can see that the year values have been transformed into the correct format.

The export is then executed to load the data from data management into the target application.


To confirm the data has been loaded correctly an ad hoc retrieve was performed in Smart View.


So now I am able to drive the data load from the period/year values in a source file and by using a duplicate application this allows the existing data loads to carry on being driven by the POV with integrations against the original target application.

I am going to leave it there for this part as I don’t want to overload with too much information, in the next part I will go through some common misunderstandings with the all data type load method and look at mapping options for the all data type.


Tuesday, 31 October 2017

FDMEE - building Essbase dimensions - Part 2

In the last part I covered a possible solution for building Essbase dimensions through FDMEE. It involved creating a custom application to allow metadata from a file to be loaded to FDMEE, which was then mapped and exported to a file; finally a jython event script was called and the dimension was built using an Essbase load rule with help from the Java API.

In this post, I am going to carry on where I left off and go through an example using the same concept but this time the source will be a relational database and the building of the Essbase dimension will be through a SQL load rule, this means there will be no file created in the export stage as the dimension will be built directly from the FDMEE repository.

If you have not read the first part then I recommend doing so as I am going to go straight into the solution without repeating what was previously covered.

The idea is to load metadata from a relational source using the universal data adaptor, which is available in FDMEE from 11.1.2.4.100. I am not going to go into any detail on setting up the adaptor as all the steps have been covered before by Francisco in a great three-part blog.

The source comprises a view which queries two tables; the tables are in parent/child format and hold member properties for a Measures and a Market dimension. Yes, the Essbase target is going to be the one and only Sample Basic.


I have included two dimensions (it could be more) just to demonstrate the possibility of loading multiple dimensions at once using an Essbase incremental dimension build.

The view does not contain all member properties as it is only there to provide an example and not an exhaustive list. The properties are already in the correct format for Essbase, but it doesn't have to be that way, as the mapping stage in FDMEE could easily handle any transformations that are required.

The view is included as part of the setup of the source adaptor in FDMEE.


Once the table definition has been imported the columns from the view are populated in the adaptor configuration.


The classification has been left blank for all columns as it is not required for this solution.

Now the source has been defined the target needs creating which is done once again using a custom application.


Just like the example in the previous part, one of the properties needs to be assigned a target class of Account for the solution to work; it is not important which one, and the remaining ones can be set as generic.

On to the import format where the source is defined as the UDA and the target the custom application.


The source columns from the view are then mapped to the target application.


You will notice I have the formula mapped to "Description 1"; this is because the UDx column sizes in the FDMEE database tables are set as 80 characters and the description column is larger in size, so I have used that. The amount column has not been mapped as it is not required and is going to be ignored.

There is really nothing to note about the location as it is just the default values with the import format selected.


There are a few things to point out with the data load rule. First, "Zero Balances" has been set to "Include"; this is because the source does not have any data values, so if it was set to "Exclude" then none of the metadata would be loaded to FDMEE.


In the target options, the property that defines whether to export to file has been set to no, this is because there is no requirement for a file to be created as the dimensions are going to be built by directly querying an existing FDMEE view using Essbase SQL load rules.


In the custom options tab, there are a couple of values that have been defined, one for the target Essbase application/database name and the name of a substitution variable which will hold the FDMEE load id, this will become clearer later.


The majority of the data load mappings are set as like-for-like mappings, as in this example there is no need for much transformation. Because some of the member properties are null, I added a SQL mapping to convert them to a blank space; this is basically to make the validation stage successful and for visual aesthetics in the workbench, and the spaces will be removed in the Essbase SQL load rules.
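As an illustration of the null handling, the expression in a SQL-type data load mapping could be something along these lines (the UD2 column name is just an example; use whichever data table column holds the property in your setup):

```sql
CASE WHEN UD2 IS NULL THEN ' ' ELSE UD2 END
```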


On to running the rule from the workbench.


With the help of the UDA and database view, the members and properties for both the Market and Measures dimensions have successfully been loaded to FDMEE.

Now that the members and properties have been mapped the next step is to build the Essbase dimensions using SQL load rules.

Before creating the load rule you need the SQL statement to retrieve the members and properties, luckily there are default views in the FDMEE database which make this easy.

There is a view called “AIF_HS_BALANCES” which returns mapped data for the latest FDMEE process ID, I would have used this view but it does not include the description column which I need because I am storing the member formula in it. There is a more detailed view available named “AIF_HS_BALANCES_DETAIL_V” which does include the column.

I created two load rules, one for each dimension using the parent/child build method.

The SQL in the rules is a simple select from the “AIF_HS_BALANCES_DETAIL_V” view, an example in the measures rule is:


The trim function is to remove the blank spaces that were created as part of the mapping in FDMEE, the load ID (process ID) is read from an Essbase substitution variable.
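As a rough sketch, and assuming the standard column layout of the view (the column aliases, the load id variable name and the dimension filter here are illustrative only, not copied from the actual rule), the SQL in the Measures rule could look something like:

```sql
SELECT TRIM(ACCOUNT) AS PARENT,
       TRIM(UD1)     AS CHILD,
       TRIM(UD2)     AS ALIAS,
       TRIM(UD3)     AS STORAGE,
       TRIM(UD4)     AS CONSOLIDATION
FROM   AIF_HS_BALANCES_DETAIL_V
WHERE  LOADID = &LOADID
-- plus a predicate to restrict the rows to the Measures dimension
```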


The value of this variable will be set in a jython event script that will also run the Essbase load rules.

The finished version of the load rule has all the columns returned from the SQL query mapped to the relevant dimension build properties.


The rule is created with a naming convention of the first four characters of the dimension name suffixed with "SQL"; this is so the rule name can be derived when building the dimension.
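The naming convention is easy to reproduce later in the event script; a minimal sketch:

```python
def rule_name(dimension):
    # first four characters of the dimension name, suffixed with "SQL"
    return dimension[:4] + "SQL"

print(rule_name("Measures"))  # MeasSQL
print(rule_name("Market"))    # MarkSQL
```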

On to the final piece of the solution and that is the jython event script which will handle building the dimensions.

In the process FDMEE log there is reference to jython event scripts that are called at different stages throughout the export process.

For example:

INFO  [AIF]: Executing the following script: <application_root_folder>/data/scripts/event/AftCheck.py

As stated in the previous blog post the scripts are not there by default, so you may get a warning saying the script does not exist; if that is the case it is just a matter of creating the script in the event script directory and it will be executed the next time the process is run.

I am going to use the above event script to carry out the dimension builds using the Essbase Java API.

As usual I am not going to go through every line of code, as this is just an example of what can be achieved and it is really up to you to write the code to fit your requirement.

The basic idea of the script is to update the Essbase substitution variable with the current load ID, this is so the SQL query in the load rule will retrieve the correct dataset.

An incremental dimension build will be run for each dimension that has been loaded to FDMEE, an incremental build is being used for efficiency so that all the dimensions can be built before restructuring the database.

The script does not contain any hardcoded passwords as the connection to Essbase is made using a generated SSO token and the credentials for the SQL load rule to connect to the FDMEE database are retrieved using the FDM API.

So let us break down the script into manageable chunks.


The above section imports the required Java classes; the majority of these are for the Essbase Java API, and the classes are accessible by default from FDMEE.

There is a SQL query defined against the FDMEE view which I mentioned earlier, this query will return each dimension to run a build for based on the current load ID.


The rest of the above section should be self-explanatory as comments are included.


The next section executes the query to return each dimension and stores the results in a list.

The custom options in the FDMEE load rule are retrieved and stored, the first option holds the Essbase target application/database and the second the name of the substitution variable which stores the current load ID.

A login to the Essbase server is made and the substitution variable is updated with the current load ID.


The final section starts the incremental dimension build and cycles through each dimension that requires a dim build.

A custom function is called which adds some additional logging to the process steps log.

The Essbase load rule name is generated from the first four characters of the dimension name and suffixed with "SQL".

An incremental dimension build is then carried out by calling the Essbase data load for the current dimension.

If any errors exist these are written to the current process log and the dimension build error log is included as a download option in process details.

A custom function is called to update the process step with a warning or success status.

Once all the dimensions have been built the incremental build is ended meaning the database is restructured.

Before running the export step let us look at the database outline, the Measures and Market dimension are currently empty.


Run the export step.


The custom message indicates the dimension build was successful, process details also confirms each dimension build step was successful.


A check back at the database outline and all looks good.


To demonstrate what happens with build errors I added an invalid record to the source database table and ran the import again.


The record is trying to load a parent to the wrong dimension which should generate an error.

After running the export again, process details shows a warning and the process steps show a warning for the market dimension build.



The dimension build error file can be downloaded from process details under "Output File".


The errors are also contained in the main process log.


If you combine this dimension build method with standard FDMEE data load functionality you have a full solution for loading metadata and data to a target Essbase database.

I am not saying you must go about it in this way and just having a custom jython script to build Essbase dimensions could be perfectly acceptable, I wanted to include as much as I could in this solution to demonstrate what is achievable.

Until next time..

Sunday, 15 October 2017

FDMEE - building Essbase dimensions - Part 1

As you are probably aware, FDMEE is great at processing and loading data but not so good when it comes to metadata; currently the only way to load metadata without customisation is with a supported ERP source system, and even then the functionality is pretty limited.

In the past I wrote about a way to handle Planning metadata through FDMEE using a custom jython script, so I thought it was time to turn to Essbase and look at a potential solution to building dimensions.

In the last post I demonstrated how easy it is in FDMEE to interact with the Essbase Java API using jython, continuing that theme the method I will go through in this post will also use the Java API.

I am going to take a different approach than I did with loading Planning metadata where it was all controlled by a custom script, this time I am going to create a custom application which will allow the metadata to be loaded into FDMEE before loading to an Essbase database.

In summary, the process flow will be to load a text file containing the Essbase dimension information to FDMEE, map the metadata, export to a text file and then build the dimension using an Essbase load rule.

As usual I am going to try and keep it as simple as possible, the aim here is not to provide a complex step by step guide but to plant ideas and then the rest is up to you.

So let us get on with it, I have put together a source comma separated file which is in parent/child format, the idea is to load it to FDMEE, map, export and then perform a dimension build to the existing product dimension in everybody’s favourite Sample Basic database.


In FDMEE a new custom target application is created.


New dimensions are added to the custom application to match the source file. I understand that in this scenario they are not dimension names but dimension build properties; however, you would usually be loading data by dimension to a target application, and as this is a custom application and solution the concept of a dimension can be ignored and thought of more as a property.


The source file is in the format of parent member, child member, alias, data storage and consolidation operator so the dimensions are added to reflect this, if there were additional columns in the source file they could easily be added into the custom application, even if there are properties that are required in the dimension build that are not in the source file they could be generated in FDMEE.

One of the properties needs to be assigned a target class of Account for the solution to work; it is not important which one, and the remaining ones can be set as generic.

It is important to note that when working with a custom application the order of the data that is written to the output file will be defined by the order of “Data Table Column Name”, the order is defined as Account, Entity, UD1, UD2 to UD20, AMOUNT.

So in my example the output file will be in the order of ACCOUNT, UD1, UD2, UD3, UD4 which maps to Parent, Child, Alias, DataStorage, Consolidation.
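A sketch of how a line in the output file is assembled under that ordering (the member values are hypothetical and just follow the Sample Basic product dimension style):

```python
# Fixed output order for a custom application: ACCOUNT, ENTITY, UD1..UD20, AMOUNT;
# only the populated columns are written, so here ACCOUNT then UD1-UD4
COLUMN_ORDER = ["ACCOUNT", "UD1", "UD2", "UD3", "UD4"]

# data table column -> dimension build property it carries in this example:
# Parent, Child, Alias, DataStorage, Consolidation
record = {"ACCOUNT": "200", "UD1": "200-10", "UD2": "Old Fashioned",
          "UD3": "S", "UD4": "+"}

print(",".join(record[c] for c in COLUMN_ORDER))  # 200,200-10,Old Fashioned,S,+
```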

On to the import format, the source is set as a comma delimited file and the target is the custom application that has just been created.


The source columns and column numbers from the file are mapped to the target. You will notice that there is a target amount column which is added by default; I am not interested in this target column and it is not present in the source, but it needs to exist, so I just map the source field to 1 and the value to 1, which will become apparent later.

There is really nothing to say about the location as it is just the default values with the import format selected.


A new data load rule is created, the import format and source file name are selected and I uploaded the source file to the FDMEE inbox.


In the rule target options, the property value has been set to enable the export to a file and the column delimiter set to comma; the export file is required as it will then be used for the dimension build using an Essbase load rule.


In the custom options for the rule I have added some integration options, they basically define the Essbase application, database, dimension and rule name, it will be clearer how they are used later when I go through the jython script.


I have kept the data load mappings extremely simple and in the main they are like for like mappings, though this is where you could get as complex and creative as you like depending how your source file differs from the target dimension build file.


I did add explicit mappings for the data storage member property as the source file contains a more meaningful name than the property values required for an Essbase dimension build.


The Essbase administrator documentation has a table containing all the property codes and the description.

At this point I can run a data load to import the source file, map and then export.


From the workbench, you can see the full import to export process has been successful.

The source to target mappings can be viewed and you will also notice there is an amount column which I fixed to a value of 1 back in the import format.

The output data file name will be generated based on <target_application_name>_<process_id>.dat and will be written to <application_root_folder>\outbox directory.
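The generated name and path can be sketched as follows (the application root folder and target application name are illustrative):

```python
import os

def export_file_path(app_root, target_app, process_id):
    # <target_application_name>_<process_id>.dat written to the outbox directory
    return os.path.join(app_root, "outbox", "%s_%s.dat" % (target_app, process_id))

print(export_file_path("/u02/FDMEE", "ESSMETA", 361))
```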


The output file is ready for a dimension build using an Essbase load rule.


I am not going to go through the process of how to build a load rule in the EAS console but here is the completed version.


As the file has a header record this has been set to be skipped in the load rule, the amount column has been ignored in the field properties of the rule.

The rule is named the same as the integration option value which was defined earlier in the FDMEE load rule.

The dimension could now be built using the rule and file but we are going to get FDMEE to do that using a jython script.

If you look in the FDMEE log for the process that was just executed you will see reference to jython event scripts that are called at different stages throughout the process.

For example, after the export file has been created there will be the following in the log:

INFO  [AIF]: Executing the following script: <application_root_folder>/data/scripts/event/AftExportToDat.py

The scripts are not there by default so you may get a warning saying the script does not exist, if they don’t exist it is just a matter of creating the script and it will be executed next time the process is run.

Please be aware that if event scripts have been enabled and the script exists, it will always be executed, so you need to code it so that only the section of the script you are interested in triggers for this process.
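A guard at the top of the script is enough to achieve this. In a real AftExportToDat.py the fdmContext dictionary is injected by FDMEE at runtime; here it is simulated so the sketch is self-contained, and the custom application name is hypothetical:

```python
# fdmContext is supplied by FDMEE to event scripts at runtime; simulated here
fdmContext = {"TARGETAPPNAME": "ESSMETA", "LOADID": 361}

def should_build(context, metadata_app="ESSMETA"):
    # only trigger the dimension build when the custom metadata app exported
    return context["TARGETAPPNAME"] == metadata_app

if should_build(fdmContext):
    # ... dimension build logic would go here ...
    print("Running dimension build for load %s" % fdmContext["LOADID"])
```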

I am going to use the above event script to carry out the dimension build using the Essbase Java API.

Now, I am not going to go through every single line of the jython script I have written and will only stick to the important sections; the script does contain comments, so hopefully it provides enough information.

In summary, the Essbase classes that are required to perform a dimension build are imported.

The target application name and process ID are stored in variables.

The values from the integration options in the FDMEE load rule are stored using the API method "getRuleDetails"; these are held in "RULE_ATTRx".

The target Essbase application and database name are then generated from the retrieved values.

The full path to the exported text file and dimension build error file are generated.


The next section is where the Essbase Java API comes into play; a login to the Essbase server is made using a single sign-on token so no clear text passwords are stored.

A custom function is called which adds some additional logging to the process logs, which I will show later; it is not actually necessary to do this.

The dimension build is run using the “buildDimension” method passing in the stored rule name, load and error file.

If an error file is generated it is read and the errors are added to the process log.


Now that the jython is in place the export stage of the FDMEE load rule can be run again.


The process details confirm that the export and dimension build were successful, the dimension build file can also be downloaded.


The process steps include the additional custom logging I was referring to earlier.


Opening the outline in the EAS console shows the new members and properties have been successfully created in the product dimension.


Let me demonstrate what happens when dimension build errors occur.


This time I have added an invalid record to the source file which is highlighted above, the full data load process is then executed again.

Instead of a green tick, process details displays a warning icon which was generated using the custom logging function in the jython event script.


The process log contains the full location to the dimension build error file and includes the rejections in the log.


Now we have the option to load data and metadata to a target Essbase database.

You don’t have to use the custom application method, if the source file does not need any kind of mapping or require visibility of what is being loaded through the dimension build, then it could all be done with a single FDMEE custom script which would be practically along the same lines as the code in the event script.

So what if your source is not a file and is a relational database, what if you want to be able to run incremental builds for multiple dimensions, what if you don’t want to create an export file and instead have an Essbase SQL dimension build? Well look out for part 2 where all of these questions will be answered.