Friday, 26 May 2017

FDMEE/Data Management – All data types in Planning File Format – Part 2

Moving swiftly on to the second part, where I am going to look at the “All data types in Planning File Format” load type in Data Management. The functionality is basically the same as what I covered in the last part with the on-premise Outline Load Utility, but it has now been built into Data Management. I am hoping that the examples in the last post will make the setup of the integration in Data Management easier to follow.

Currently the functionality only exists in EPM Cloud, but I would expect it to be pushed down to on-premise FDMEE, possibly when 11.1.2.4.220 is released. I will update this post once it is available.

Once again I will start out with the same employee benefits source data, but this time the file can be kept much simpler as Data Management will handle the rest.


Just like with on-premise, the data load settings need to be applied; these can be accessed through the navigator under Integration.


It is a shame that these settings cannot be dynamically generated or defined in Data Management, instead of having to set them in planning.

On to Data Management and creating the import format; the file type is set to “Multi Column – All Data Type”.


In the import format mappings I have basically fixed the members to load to by entering them into the expression field; this replicates the example in the last part, where the POV was fixed in the file loaded with the OLU.


For the account dimension I could have entered any value in the expression field as it will be mapped using the line item flag in the data load mappings.

The Data dimension will be defined by selecting Add Expression and choosing Driver; I explained this method in detail in a previous blog on loading non-numeric data.


Basically, the driver dimension is selected, which in my example is the Property dimension; the first row is the header row and contains the driver members, and the data is spread across five columns.


The mapping expression window provides examples if you are unsure of the format and the expression field will be updated with the values entered.


The data rule is created in the same way as any other integration.



The difference comes when setting up the target options: the load method this time will be “All data types in Planning File Format”.

There are also properties to define the Data Load and Driver dimensions; these must match what has been set in the Data Load Settings in planning.


It seems a bit excessive having to set the driver dimension in planning, in the import format and in the data load rule; it would be nice if all these settings could be applied in one place in Data Management.

There is only one difference with the data load mappings, which is that the LINEITEM format must be used for the data load dimension.


The target value will need to be manually entered with the data load dimension parent member, but after going through my example with the OLU it should be clearer why it is required.
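In my case that means entering something along the lines of <LINEITEM("Total Benefits")> as the target value, mirroring the flag format that ends up in the file generated for the OLU.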

On to the data load, where the data columns in the file will be converted into rows in the workbench.


In my source file there are five columns and four rows of data so a total of twenty records are displayed in the workbench.

The final step is to export and load the data into the planning application.


All good but a quick check of the data form in planning and something is not right.


Only the equivalent of one row of data has been loaded and the data that has been loaded is not correct.

The process log confirms that the Outline Load Utility is definitely being used to load the data, just like in the earlier example I went through, though in this case only one row has been processed and loaded.

13:10:36 UTC 2017]Outline data store load process finished. 1 data record was read, 1 data record was processed, 1 was accepted for loading (verify actual load with Essbase log files), 0 were rejected.

13:10:36,825 INFO  [AIF]: Number of rows loaded: 1, Number of rows rejected: 0


I checked the file that Data Management had generated before loading with the OLU, and even though the format was correct there was only one record, containing incorrect data, in the file.


The file should have been generated like:


The file is generated by converting rows into columns using an Oracle database pivot query, outputting the driver members and values as XML.

13:10:29,528 DEBUG [AIF]: SELECT * FROM ( SELECT ACCOUNT,UD4,DATA,'"'||ENTITY||','||UD1||','||UD2||','||UD3||','||SCENARIO||','||YEAR||','||PERIOD||'"' "Point-of-View"
                      ,'Plan1' "Data Load Cube Name" FROM AIF_HS_BALANCES WHERE LOADID = 598 )
PIVOT XML( MAX(DATA) FOR (UD4) IN (SELECT UD4 FROM AIF_HS_BALANCES WHERE LOADID = 598) )

I replicated the data load in on-premise FDMEE, ran the same SQL query and only one row was returned.


The query returns the driver members and values as XML which then must be converted into columns when generating the output file.
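To see why duplicates collapse, here is a stripped-down version of the same pivot pattern with made-up table and column names; the pivot implicitly groups by every column left in the inner select, so records sharing the same POV end up in the same group and are aggregated into a single row by MAX(DATA).

SELECT * FROM (
  -- only the POV and the pivoted columns remain, so duplicate POV
  -- records fall into the same group and collapse into one row
  SELECT POV, DRIVER_MEMBER, DATA FROM STAGE_TABLE
)
PIVOT XML( MAX(DATA) FOR (DRIVER_MEMBER) IN (SELECT DRIVER_MEMBER FROM STAGE_TABLE) )
-- adding a column that is unique per record to the inner SELECT changes
-- the implicit grouping and brings back one output row per source record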


At this point I thought it might be a bug, but thanks to Francisco for helping keep my sanity: I was missing a vital link which was not documented. I am sure the documentation will get updated at some point to include the missing information.

If you have records that are against the same POV, then you need a way of making the data unique so that all rows are returned when the SQL query is run; this is achieved by adding a lookup dimension and identifying a driver member that will make the data unique.

If you take the data set I am loading, the values for the driver member “Grade” are unique, so this can be defined as a lookup dimension.

To do this you first add a new lookup dimension to the target application.


The lookup dimension name must start with “LineItemKey”, and depending on the data being loaded you may need multiple lookup dimensions to make the records unique.

Next in the import format mappings the dimension should be mapped to a column containing the driver member.


The “Grade” member is in the first column in my source file so I map the lookup dimension to that.

After adding a like-for-like data load mapping for the lookup dimension, the full load process can be run again.


The workbench now includes the lookup dimension, which is mapped to the driver member “Grade”.

The SQL statement to generate the file now includes the lookup dimension, which was defined as column UD5 in the target application dimension details.

17:16:36,836 DEBUG [AIF]: SELECT * FROM ( SELECT ACCOUNT,UD4,DATA,'"'||ENTITY||','||UD1||','||UD2||','||UD3||','||SCENARIO||','||YEAR||','||PERIOD||'"' "Point-of-View"
                      ,'Plan1' "Data Load Cube Name" ,UD5 FROM AIF_HS_BALANCES WHERE LOADID = 634 )
PIVOT XML( MAX(DATA) FOR (UD4) IN (SELECT UD4 FROM AIF_HS_BALANCES WHERE LOADID = 634) )

17:16:36,980 INFO  [AIF]: Data file creation complete


Once again I replicated this in on-premise FDMEE, and the query correctly returns four records.


Even though the query results include the lookup dimension, it will be excluded when the output file is created.


This time the process log shows that four records have been loaded using the OLU.

17:16:49 UTC 2017]Outline data store load process finished. 4 data records were read, 4 data records were processed, 4 were accepted for loading (verify actual load with Essbase log files), 0 were rejected.

17:16:49,266 INFO  [AIF]: Number of rows loaded: 4, Number of rows rejected: 0


The planning form also confirms the data has been successfully loaded and is correct.


Now that I have the integration working I can test out the rest of the functionality. I am going to load a new set of data where data already exists for the unique identifier driver members.


The unique identifier members are “Grade” and “Benefit Type”; data already exists under “Total Benefits” for “Grade 1” and “Health Insurance”, so the data being loaded should replace the existing data.


The data has been overwritten, as the value for Active has been changed from “Yes” to “No”.

Now let us load a new set of data where there is no matching data for the unique identifiers.


Before the load there was no data for “Grade 3” so the data should be loaded to the next available child member of “Total Benefits” where no data exists for the given POV.


The data has been loaded against the next available member, which is “Benefit 5”, as no data previously existed for the given POV.

Next, I will test what happens when loading a data set with no matching driver member identifiers, now that all child members of the data load dimension parent are populated.


The export fails and the process log contains the same error as shown when testing the OLU as in the last post.

13:21:07 UTC 2017]com.hyperion.planning.HspRuntimeException: There is no uniquely identifying child member available for this member defined in Data Load Dimension Parent. Add more child members if needed.

13:21:07 UTC 2017]Outline data store load process finished. 1 data record was read, 1 data record was processed, 0 were accepted for loading (verify actual load with Essbase log files), 1 was rejected.


As the log suggests, for the export to succeed additional members would need to be added under the data load dimension parent.

Since adding the lookup dimension all the data values for the “Grade” member have been unique, so there have been no problems. If I try to load a new set of data where the values are no longer unique, you can probably imagine what is going to happen.


The above data set contains “Grade 1” twice, so the lookup dimension is no longer unique, and even though the load is successful we are back to where we were earlier, with one record of incorrect data being loaded.


This means another lookup dimension is required to make the data unique again so I added a new lookup dimension, mapped it to the “Benefit Type” column in the import format, created a new data load mapping for the new dimension and ran the process again.


In the workbench, there are now two lookup dimensions present which should make the data unique when creating the export file.


Much better, the data loaded to planning is as expected.

On the whole, the functionality in Data Management acts in the same way as the on-premise Outline Load Utility. I do feel the setup process could be made slicker, and you really need to understand the data: if you don’t define the lookup dimensions to handle the uniqueness correctly, you could end up with invalid data being loaded to planning.

Sunday, 21 May 2017

FDMEE/Data Management – All data types in Planning File Format – Part 1

Recently a new load method was added to Data Management in EPM Cloud. There was no mention of it in the announcements and new features monthly updates document, so I thought I would put together a post to look at the functionality.


The new load method is called “All data types in Planning File Format”, which may be new to Data Management, but the core functionality has been available in the Outline Load Utility since version 11.1.2.0 of on-premise planning.

The cloud documentation provides the following information:

“You can include line item detail using a LINEITEM flag in the data load file to perform incremental data loads for a child of the data load dimension based on unique driver dimension identifiers to a Oracle Hyperion Planning application. This load method specifies that data should be overwritten if a row with the specified unique identifiers already exists on the form. If the row does not exist, data is entered as long as enough child members exist under the data load dimension parent member.”

I must admit that in the past when I first read the same information in the planning documentation it wasn't clear to me how the functionality worked.

It looks like the above statement in the cloud documentation has been copied from on-premise and is a little misleading, as in Data Management you don’t have to include the flag in the source file; it can be handled by data load mappings.

Before jumping into the cloud I thought it was worth covering an example with the on-premise Outline Load Utility because behind the scenes Data Management will be using the OLU.

As usual I am going to try and keep it as simple as possible and in my example I am going to load the following set of employee benefits data.


Using the LINEITEM flag method with the OLU it is possible to load data to child members of a defined parent without having to include each member in the file; so if, say, you need to load data to placeholder members, this method should make it much simpler.

You can also define unique identifiers for the data, so in the above example I am going to set the identifiers as Grade and Benefit Type. This means that if there is data in the source file which matches data in the planning application against both identifiers, the existing data will be overwritten; if not, the data will be loaded against the next available child member where no data exists for the given point of view.

It should hopefully become clearer after going through the example.

I have the following placeholder members in the Account dimension where the data will be loaded to; the Account dimension will be set as the data load dimension, and the member “Total Benefits” will be set as the parent in the LINEITEM flag.


The data in the source file will be loaded against the following matching members in the Property dimension, these will be defined as the driver members.


The members are a combination of Smart List, Date and numeric data types.

I created a form to display the data after it has been loaded.


Before creating the source file, there are data load settings that need to be defined within Data Load Administration in the planning application.


The Data Load Dimension is set to Account, and the parent member where the data will be loaded is set to “Total Benefits”.

The Driver Dimension is set as Property and the members that match the source data are defined as Benefit Type, Grade, Start Date, Active and Value.

The Unique Identifiers in the property dimension are defined as Benefit Type and Grade.

Now on to creating the source file. If you have ever used the OLU to load data, you will know that the source file needs to include the data load dimension member, which in this case will be the line item flag, along with the driver members, the cube name and the point of view containing the remaining members to load the data to.

The format for the line item flag is:

<LINEITEM(“Data Load Dimension Parent Member”)>

So based on the data set that was shown earlier, the source file would look something like this (the driver values below are illustrative, but the layout, flag and POV match the example):
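Account,Data Load Cube Name,Point-of-View,Grade,Benefit Type,Start Date,Active,Value
<LINEITEM("Total Benefits")>,Plan1,"Jan,No Year,Forecast,Working,110,P_000",Grade 1,Health Insurance,01-01-2017,Yes,1000
<LINEITEM("Total Benefits")>,Plan1,"Jan,No Year,Forecast,Working,110,P_000",Grade 1,Car Allowance,01-01-2017,Yes,18000
<LINEITEM("Total Benefits")>,Plan1,"Jan,No Year,Forecast,Working,110,P_000",Grade 2,Health Insurance,01-02-2017,Yes,1200
<LINEITEM("Total Benefits")>,Plan1,"Jan,No Year,Forecast,Working,110,P_000",Grade 2,Car Allowance,01-02-2017,No,18000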


You may ask why the line item flag needs to be on every record when it could just be included in the parameters when calling the OLU; this would make sense if loading data to children of only one member, but as it is possible to load to multiple parent members, it needs to be included in the source file.

The final step is to load the data file using the OLU and the parameters are the same as loading any type of data file.


The parameter definitions are available in the documentation but in summary:

/A: = Application name
/U: = Planning application administrator username
/D: = Data load dimension
/M: = Generate data load fields from header record in file.
/I: = Source file
/L: = Log file
/X: = Error file

You could also include the -f: parameter to set the location of an encrypted password file to remove the requirement of entering the password manually at runtime.
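Putting that together, the call to the utility would look something like this (the application name, user and file locations are illustrative):

OutlineLoad.cmd /A:PLANAPP /U:admin /M /I:c:\files\benefits.csv /D:Account /L:c:\files\benefits.log /X:c:\files\benefits.exc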

After running the script the output log should confirm the status of the data load.

Planning Outline data store load process finished. 4 data records were read, 4 data records were processed, 4 were accepted for loading (verify actual load with Essbase log files), 0 were rejected.

In my example four records were successfully loaded which is what I was hoping for.

Opening the form I created earlier confirms the data has been loaded correctly.


As no data previously existed for the POV, the data was loaded to the first four children of “Total Benefits”, and the unique identifier members did not come into play in this case.

Let us load a record of data for the same POV and with matching unique identifiers; the unique identifier has been defined as a combination of the Grade and Benefit Type members.


As matching data values already exist for “Grade 1” and “Health Insurance” under “Total Benefits”, the data should be updated instead of being loaded to the next available child member.


The data has been updated where the identifier data values match; in this case the Active member data has changed from Yes to No.

Now let us load a new record of data where data values don’t match for the identifier members.


In the above example there are currently no matching data values for “Grade 3” and “Health Insurance”, so the data should be loaded to the next available child member of “Total Benefits” where no data exists for that POV.


The data has been loaded against the next available member, which is “Benefit 5”, as no data previously existed for the given POV.

So what happens when you try to load data and there are no available members left?


All five child members of “Total Benefits” have data against the above POV and as there is no matching data for the unique identifier combination the load fails with the following messages.

There is no uniquely identifying child member available for this member defined in Data Load Dimension Parent. Add more child members if needed.: 
,Plan1,"Jan,No Year,Forecast,Working,110,P_000",Grade 3,Car Allowance,01-05-2017,Yes,18000

Outline data store load process finished. 1 data record was read, 1 data record was processed, 0 were accepted for loading (verify actual load with Essbase log files), 1 was rejected.


At least the log states exactly what the issue is and how to resolve it.

I am going to leave it there for this post and in the next part I will look at how the same functionality has been built into FDMEE/Data Management and go through similar examples.

Sunday, 30 April 2017

FDMEE – diving into the Essbase and Planning security mystery – Update

I wanted to provide an update to the last post I wrote on the mystery around security with on-premise FDMEE and Data Management in the cloud. In that post I went through how data is loaded when the load method is set to “All data types with security” in Data Management. I was interested in understanding the limitations and performance of this method, so I thought it was worth putting a post together on the subject.

If you have not read the previous posts I recommend you do so, as I will be assuming you understand the background to this topic. The posts are available at:

FDMEE – diving into the Essbase and Planning security mystery – Part 1
FDMEE – diving into the Essbase and Planning security mystery – Part 2

Just to recap: if you select the load method “All data types with security” as a non-administrator and are loading to a target BSO cube, then the REST API will come into play to load the data. This is currently only available in EPM Cloud; on-premise FDMEE will still load using the Outline Load Utility as an administrator.

On a side note, it is now possible to set the load method in the data load rule instead of just at the target application level.


You will also notice that another new load method has been snuck in, “All data types in Planning File Format”; I didn’t see this mentioned in the announcements and new features document, but I am going to cover it in a future post.

At the moment there is a slight bug with setting the load methods in the data rule.

When you create a rule the default option is “Numeric Data Only”; say you change this to “All data types with security”.


Now if you want to change it back to “Numeric Data Only” it is not possible because it is not in the list of values.


If you manually type “Numeric Data Only” and try to save the rule, you will be hit with an error message.


I am sure it will be fixed at some point in the future, probably without anyone being informed that it has been.

Anyway, back to the main topic of this post: if you load data as a non-administrator with the “All data types with security” method, the REST resource being called is “importdataslice”.

The documentation provides the following details on the REST resource:

“Can be used to import data given a JSON data grid with a point of view, columns, and one or more data rows. Data will be imported only for cells that the user has read-write access to. Imports data of types Text, Date and Smart List along with numeric data. Returns JSON with details on the number of cells that were accepted, the number of cells that were rejected, and the first 100 cells that were rejected.”

The URL format for the REST resource is:

https://<cloudinstance>/HyperionPlanning/rest/v3/applications/<app>/plantypes/<cube>/importdataslice

As explained in the previous post the user’s security will be honoured with this method of loading data.

When exporting data from Data Management to the target planning application cube, a file is created containing JSON with a grid based on the data being loaded; this file is then read and forms the body of the POST to the “importdataslice” REST resource.

Say I have a cube which has seven dimensions and I am loading the following data set.


Data Management will create the following JSON which contains a grid based on the above data.


The scenario member will be in the column and the rows will contain members for the remaining dimensions.
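If you have not come across the grid format before, the body of the POST follows the documented data grid structure and will be along these lines; the member names and the way the grid is split between POV, columns and rows here are purely illustrative:

{
  "aggregateEssbaseData": false,
  "dataGrid": {
    "pov": [],
    "columns": [["Forecast"]],
    "rows": [
      { "headers": ["Jan", "FY17", "Working", "110", "P_000", "Acc1"], "data": ["100"] },
      { "headers": ["Jan", "FY17", "Working", "110", "P_000", "Acc2"], "data": ["200"] }
    ]
  }
}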

In planning terms this will be the equivalent of the following form:


So what about restrictions and performance? Well, this is the cloud, so there are restrictions, and if you have ever built a large form you will know it has performance implications.

In the cloud, if you build a form which has more than 500,000 cells, you will receive an error message when you open it.


The threshold seems to be the total number of cells in the data grid and not just data cells; as the REST resource is built on top of a data grid, the same threshold should also apply to Data Management.

Now 500,000 cells may sound like a lot but with Data Management you will be used to loading data in rows and not thinking about total cell counts.

The total possible number of rows of data will also depend on how many dimensions there are in the cube you are loading data to.

I calculated that with the cube I am loading data to it should be theoretically possible to load to 71,426 rows before hitting the threshold limit.
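Presumably each row of the generated grid accounts for seven cells, six row header members plus the data cell, so 500,000 / 7 is roughly 71,428; allow a handful of cells for the grid headers and the maximum drops to 71,426 rows, which tallies with the log below, where 71,427 rows produce a grid of 500,003 cells.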

Just to make sure I added an extra row of data to see what the output from Data Management would be.


The export process failed and checking the log confirmed the reason behind it.

11:38:02,593 DEBUG [AIF]: Overrode info.loadMethod for the non-admin user: REST
11:38:02,688 DEBUG [AIF]: EssbaseService.performLoadRest - START

11:38:02,728 DEBUG [AIF]: requestUrl: http://localhost:9000/HyperionPlanning/rest/v3/applications/REF/plantypes/Plan1/importdataslice
11:56:41,651 ERROR [AIF]: The rest service request has failed: 400 Bad Request - {"detail":"Unable to load the data entry form as the number of data entry cells exceeded the threshold.\n\nCriteria: Number of cells\nError Threshold: 500000\nWarning Threshold: 250000\nCurrent Value: 500003","status":400,"message":"com.hyperion.planning.governor.HspGovernorThresholdException: Unable to load the data entry form as the number of data entry cells exceeded the threshold.\n\nCriteria: Number of cells\nError Threshold: 500000\nWarning Threshold: 250000\nCurrent Value: 500003","localizedMessage":"com.hyperion.planning.governor.HspGovernorThresholdException: Unable to load the data entry form as the number of data entry cells exceeded the threshold.\n\nCriteria: Number of cells\nError Threshold: 500000\nWarning Threshold: 250000\nCurrent Value: 500003"}

11:56:41,655 INFO  [AIF]: EssbaseService.loadData - END (false)

As expected, the threshold limit error appears in the JSON response from the REST resource. What is concerning is that the error was not generated straight away: if you open a planning form the threshold error message appears immediately, yet with the REST resource it took 18 minutes before erroring out. That also gives you an idea of the performance implications of using the REST resource, as taking 18 minutes to reject 71,427 rows of data is certainly not performant. Just to be clear, no data is loaded if the threshold limit is hit.

Now that I know the limit is the same as with data forms I can reduce the number of records.


This time the number of records should equate to just less than the 500,000 cell limit.


It failed again, so time to look in the process logs.

12:40:25,072 DEBUG [AIF]: EssbaseService.performLoadRest – START
12:40:25,103 DEBUG [AIF]: requestUrl: http://localhost:9000/HyperionPlanning/rest/v3/applications/REF/plantypes/Plan1/importdataslice

12:58:48,362 ERROR [AIF]: The rest service request has failed: 400 Bad Request - {"detail":"The form RestDataGrid_1493383281849 is very large and could take considerable time to load. Do you wish to continue?","status":400,"message":"com.hyperion.planning.FormWarningException: The form RestDataGrid_1493383281849 is very large and could take considerable time to load. Do you wish to continue?","localizedMessage":"com.hyperion.planning.FormWarningException: The form RestDataGrid_1493383281849 is very large and could take considerable time to load. Do you wish to continue?"}

This time the error seems a little ridiculous, because it looks like a generic warning related to opening large forms and has little relevance to the data load I am trying to perform; if I open a data form in planning which is just below the threshold, the same message is not generated.

I tried reducing the number of rows by 10,000, and then by another 10,000, but the same error was still being generated. Then I remembered seeing the error message when opening large forms in planning before version 11.1.2.2.

I went back and tried opening a large form in 11.1.2.1, and this confirmed it is the same message.


So it looks like the REST resource is using the same data grid functionality that existed in older versions of planning.

In the versions of planning where this message was generated it was possible to set a threshold in the display options of the application.


If you look at the display options in the cloud, or in on-premise versions from 11.1.2.2 onwards, the setting is not available.


I had a look around for the setting in the UI, but unless it is hidden somewhere I have missed, I couldn’t find where to set it.

Before looking any further I wanted to be sure that it was the same setting as in 11.1.2.1, where the default value was 5000.

I thought the setting meant the total number of cells in the grid, but from testing it looks like it is the number of data entry cells; I tested by loading 5,000 records from Data Management.


This time the export was successful.


The process log shows that the REST resource was successful and confirms the number of rows of data that were loaded.

14:50:22,865 DEBUG [AIF]: EssbaseService.performLoadRest - START
14:50:22,869 DEBUG [AIF]: requestUrl: http://localhost:9000/HyperionPlanning/rest/v3/applications/REF/plantypes/Plan1/importdataslice
14:51:07,384 INFO  [AIF]: Number of rows loaded: 5000, Number of rows rejected: 0

I tested again with 5001 records and the load failed.

2017-04-28 15:01:07,458 DEBUG [AIF]: requestUrl: http://localhost:9000/HyperionPlanning/rest/v3/applications/REF/plantypes/Plan1/importdataslice
2017-04-28 15:01:47,192 ERROR [AIF]: The rest service request has failed: 400 Bad Request - {"detail":"The form RestDataGrid_1493391668666 is very large and could take considerable time to load. Do you wish to continue?

So it looks like it is picking up the setting of 5000 that used to exist in planning in the past.

I then went to see if I could find the setting, and ended up exporting the application settings using Migration in the planning UI.


This exports an XML file named “Application Setting.xml” and in that file there is a reference to a form warning setting with a value of 5000.


This looked promising, so I updated the value to one higher than the number of records that would be loaded through Data Management.


I imported the application settings back into the application using Migration.

Now to try the data load again, with a total number of records just below the main threshold.


Finally, I am able to successfully load the data.


The process log confirms the total number of rows loaded.

INFO  [AIF]: Number of rows loaded: 71426, Number of rows rejected: 0

Now, what about performance? Let us compare the timings of the different load methods with the same number of records as above.

First, setting the load method to “Numeric Data Only”, which means a data load file is produced and then loaded using an Essbase data load rule.

20:38:45,064 INFO  [AIF]: Loading data into cube using data file...
20:38:45,936 INFO  [AIF]: The data has been loaded by the rule file.

The data load time was just under a second.

Now for “All data types with security” with an administrator, which means the Outline Load Utility will be the method for loading.

20:42:40 Successfully located and opened input file.
20:42:49 Outline data store load process finished. 71426 data records were read, 71427 data records were processed, 71427 were accepted for loading (verify actual load with Essbase log files), 0 were rejected.

This data load was also fast, taking around nine seconds from opening the input file to finishing the load.

Time for the REST method with a non-administrator.

20:54:46,385 DEBUG [AIF]: EssbaseService.performLoadRest - START
21:14:43,146 INFO  [AIF]: Number of rows loaded: 71426, Number of rows rejected: 0

A data load time of just under 20 minutes.

So the REST method takes a massive amount of time compared to the other two. As this is EPM Cloud it is not possible to see the system resources being used by this method, but I can imagine it is heavy compared to the other two.

If you are thinking of using this method to load data then consider the limitations and the performance impact; it is probably only advisable for small data sets.