Sunday, 21 May 2017

FDMEE/Data Management – All data types in Planning File Format – Part 1

Recently a new load method was added to Data Management in EPM Cloud. There was no mention of it in the announcements and new features monthly updates document, so I thought I would put together a post to look at the functionality.


The new load method is called “All data types in Planning File Format”, which may be new for Data Management, but the core functionality has been available in the Outline Load Utility since the 11.1.2.0 release of on-premise planning.

The cloud documentation provides the following information:

“You can include line item detail using a LINEITEM flag in the data load file to perform incremental data loads for a child of the data load dimension based on unique driver dimension identifiers to a Oracle Hyperion Planning application. This load method specifies that data should be overwritten if a row with the specified unique identifiers already exists on the form. If the row does not exist, data is entered as long as enough child members exist under the data load dimension parent member.”

I must admit that when I first read the same information in the planning documentation it wasn't clear to me how the functionality worked.

It looks like the above statement in the cloud documentation has been copied from on-premise and is a little misleading, as in Data Management you don’t have to include the flag in the source file because it can be handled by data load mappings.

Before jumping into the cloud I thought it was worth covering an example with the on-premise Outline Load Utility because behind the scenes Data Management will be using the OLU.

As usual I am going to try and keep it as simple as possible and in my example I am going to load the following set of employee benefits data.


Using the LINEITEM flag method with the OLU it is possible to load data to child members of a defined parent without having to include each member in the file, so if, say, you need to load data to placeholder members, this method should make things much simpler.

You can also define unique identifiers for the data. In the above example I am going to set the identifiers as Grade and Benefit Type, which means that if a record in the source file matches data in the planning application against both identifiers, the existing data will be overwritten; if not, the record will be loaded against the next available child member where no data exists for the given point of view.

It should hopefully become clearer after going through the example.

I have the following placeholder members in the Account dimension where the data will be loaded to. The Account dimension will be set as the data load dimension, and the member “Total Benefits” will be set as the parent in the LINEITEM flag.


The data in the source file will be loaded against the following matching members in the Property dimension; these will be defined as the driver members.


The members are a combination of Smart List, Date and numeric data types.

I created a form to display the data after it has been loaded.


Before creating the source file, there are data load settings that need to be defined within Data Load Administration in the planning application.


The Data Load Dimension is set as Account and the parent member where the data will be loaded to is set as “Total Benefits”.

The Driver Dimension is set as Property and the members that match the source data are defined as Benefit Type, Grade, Start Date, Active and Value.

The Unique Identifiers in the property dimension are defined as Benefit Type and Grade.

Now on to creating the source file. If you have ever used the OLU to load data you will know that the source file needs to include the data load dimension member (which in this case will be the line item flag), the driver members, the cube name and the point of view containing the remaining members to load the data to.

The format for the line item flag is:

<LINEITEM(“Data Load Dimension Parent Member”)>

So based on the data set that was shown earlier the source file would look something like:

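In text form, the file would be shaped something like this; the driver values below are illustrative, the column headers follow the usual OLU conventions, and the POV and cube name columns match the record layout that appears in the rejection log later in this post:

Account,Data Load Cube Name,Point-of-View,Grade,Benefit Type,Start Date,Active,Value
<LINEITEM("Total Benefits")>,Plan1,"Jan,No Year,Forecast,Working,110,P_000",Grade 1,Health Insurance,01-01-2017,Yes,15000
<LINEITEM("Total Benefits")>,Plan1,"Jan,No Year,Forecast,Working,110,P_000",Grade 1,Car Allowance,01-01-2017,Yes,9000
<LINEITEM("Total Benefits")>,Plan1,"Jan,No Year,Forecast,Working,110,P_000",Grade 2,Health Insurance,01-02-2017,Yes,17500
<LINEITEM("Total Benefits")>,Plan1,"Jan,No Year,Forecast,Working,110,P_000",Grade 2,Pension,01-02-2017,Yes,5000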

You may ask why the line item flag needs to be on every record when it could just be included in the parameters when calling the OLU. That would make sense if you were only loading data to children of one member, but it is possible to load to children of multiple parent members in the same file, so the flag needs to be included in the source file.

The final step is to load the data file using the OLU and the parameters are the same as loading any type of data file.


The parameter definitions are available in the documentation but in summary:

/A: = Application name
/U: = Planning application administrator username
/D: = Data load dimension
/M: = Generate data load fields from header record in file.
/I: = Source file
/L: = Log file
/X: = Error file

You could also include the -f: parameter to set the location of an encrypted password file to remove the requirement of entering the password manually at runtime.
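Putting the parameters together, a call to the utility would look something along these lines (the application name, user and file names are purely examples):

OutlineLoad.cmd /A:PLANAPP /U:admin /D:Account /M /I:benefits.csv /L:outlineload.log /X:outlineload.exc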

After running the script the output log should confirm the status of the data load.

Planning Outline data store load process finished. 4 data records were read, 4 data records were processed, 4 were accepted for loading (verify actual load with Essbase log files), 0 were rejected.

In my example four records were successfully loaded which is what I was hoping for.

Opening the form I created earlier confirms the data has been loaded correctly.


As no data previously existed for the POV, the data was loaded to the first four children of “Total Benefits”, so the unique identifiers did not come into play in this case.

Let us load a record of data for the same POV with matching unique identifiers; remember, the unique identifier has been defined as the combination of the Grade and Benefit Type members.


As matching data values already exist for “Grade 1” and “Health Insurance” under “Total Benefits”, the data should be updated instead of being loaded to the next available child member.


The data has been updated where the identifier data values match and in this case the Active member data has changed from Yes to No.

Now let us load a new record of data where data values don’t match for the identifier members.


In the above example there are currently no matching data values of “Grade 3” and “Health Insurance”, so the data should be loaded to the next available child member of “Total Benefits” where no data exists for that POV.


The data has been loaded against the next available member, which is “Benefit 5”, as no data previously existed for the given POV.

So what happens when you try to load data and there are no available members left?


All five child members of “Total Benefits” have data against the above POV and as there is no matching data for the unique identifier combination the load fails with the following messages.

There is no uniquely identifying child member available for this member defined in Data Load Dimension Parent. Add more child members if needed.: 
,Plan1,"Jan,No Year,Forecast,Working,110,P_000",Grade 3,Car Allowance,01-05-2017,Yes,18000

Outline data store load process finished. 1 data record was read, 1 data record was processed, 0 were accepted for loading (verify actual load with Essbase log files), 1 was rejected.


At least the log states exactly what the issue is and how to resolve it.

I am going to leave it there for this post and in the next part I will look at how the same functionality has been built into FDMEE/Data Management and go through similar examples.

Sunday, 30 April 2017

FDMEE – diving into the Essbase and Planning security mystery – Update

I wanted to provide an update to the last post I wrote on the mystery around security with on-premise FDMEE and Data Management in the cloud. In that post I went through how data is loaded when the load method is set to “All data types with security” in Data Management. I was interested in understanding the limitations and performance when using this method, so I thought it was worth putting a post together on the subject.

If you have not read the previous posts I recommend you do so, as I will be assuming you understand the background to this topic. The posts are available at:

FDMEE – diving into the Essbase and Planning security mystery – Part 1
FDMEE – diving into the Essbase and Planning security mystery – Part 2

Just to recap: if you select the load method “All data types with security” with a non-administrator and are loading to a target BSO cube, then the REST API will come into play to load the data. This is currently only available in EPM Cloud; on-premise FDMEE will still load using the Outline Load Utility as an administrator.

On a side note, it is now possible to set the load method in the data load rule instead of just at the target application level.


You will also notice that another new method has been snuck in, “All data types in Planning File Format”. I didn’t see this mentioned in the announcements and new features document, but I am going to cover it in a future post.

At the moment there is a slight bug with setting the load methods in the data rule.

When you create a rule the default option is “Numeric Data Only”. Say you change this to “All data types with security”.


Now if you want to change it back to “Numeric Data Only” it is not possible because it is not in the list of values.


If you manually type “Numeric Data Only” and try to save the rule, you will be hit with an error message.


I am sure it will be fixed at some point in the future, probably without us being informed that it has been.

Anyway, back to the main topic of this post. If you load data as a non-administrator with the “All data types with security” method, the REST resource being called is “importdataslice”.

The documentation provides the following details on the REST resource:

“Can be used to import data given a JSON data grid with a point of view, columns, and one or more data rows. Data will be imported only for cells that the user has read-write access to. Imports data of types Text, Date and Smart List along with numeric data. Returns JSON with details on the number of cells that were accepted, the number of cells that were rejected, and the first 100 cells that were rejected.”

The URL format for the REST resource is:

https://<cloudinstance>/HyperionPlanning/rest/v3/applications/<app>/plantypes/<cube>/importdataslice

As explained in the previous post the user’s security will be honoured with this method of loading data.

When exporting data from Data Management to the target planning application cube, a file is created containing JSON with a grid based on the data being loaded. This file is then read and forms the body of the POST to the “importdataslice” REST resource.

Say I have a cube which has seven dimensions and I am loading the following data set.


Data Management will create the following JSON which contains a grid based on the above data.


The scenario member will be in the column and the rows will contain members for the remaining dimensions.
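I can’t reproduce the file itself here, but based on the documented grid format for “importdataslice” the JSON will be shaped something like the following. The member names are purely illustrative for a seven dimension cube, and I am assuming the POV section is left empty since all the non-scenario members sit on the rows:

{
  "pov": [],
  "columns": [["Actual"]],
  "rows": [
    {"headers": ["1110", "110", "No Version", "P_000", "FY17", "Jan"], "data": ["100"]},
    {"headers": ["1120", "110", "No Version", "P_000", "FY17", "Jan"], "data": ["200"]}
  ]
}

With six row header members plus one data cell, each row of data consumes seven cells of the grid, which matters for the thresholds discussed below.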

In terms of planning this will be the equivalent of the following form:


So what about restrictions and performance? Well, this is the cloud so there are restrictions, and if you have ever built a large form you will know it has performance implications.

In the cloud, if you try to open a form which has more than 500,000 cells you will receive an error message.


The threshold seems to be based on the total number of cells in the data grid and not just data cells, and as the REST resource is built on top of a data grid the same threshold should also apply to Data Management.

Now 500,000 cells may sound like a lot but with Data Management you will be used to loading data in rows and not thinking about total cell counts.

The total possible number of rows of data will also depend on how many dimensions there are in the cube you are loading data to.

As each row of the generated grid for my seven dimension cube consumes seven cells (six row header members plus one data cell, with the scenario member in the column), 500,000 divided by seven gives roughly 71,428 rows; after allowing for the handful of header cells in the grid, I calculated that it should be theoretically possible to load 71,426 rows before hitting the threshold limit.

Just to make sure I added an extra row of data to see what the output from Data Management would be.


The export process failed and checking the log confirmed the reason behind it.

11:38:02,593 DEBUG [AIF]: Overrode info.loadMethod for the non-admin user: REST
11:38:02,688 DEBUG [AIF]: EssbaseService.performLoadRest - START

11:38:02,728 DEBUG [AIF]: requestUrl: http://localhost:9000/HyperionPlanning/rest/v3/applications/REF/plantypes/Plan1/importdataslice
11:56:41,651 ERROR [AIF]: The rest service request has failed: 400 Bad Request - {"detail":"Unable to load the data entry form as the number of data entry cells exceeded the threshold.\n\nCriteria: Number of cells\nError Threshold: 500000\nWarning Threshold: 250000\nCurrent Value: 500003","status":400,"message":"com.hyperion.planning.governor.HspGovernorThresholdException: Unable to load the data entry form as the number of data entry cells exceeded the threshold.\n\nCriteria: Number of cells\nError Threshold: 500000\nWarning Threshold: 250000\nCurrent Value: 500003","localizedMessage":"com.hyperion.planning.governor.HspGovernorThresholdException: Unable to load the data entry form as the number of data entry cells exceeded the threshold.\n\nCriteria: Number of cells\nError Threshold: 500000\nWarning Threshold: 250000\nCurrent Value: 500003"}

11:56:41,655 INFO  [AIF]: EssbaseService.loadData - END (false)

As expected, the threshold limit error appears in the JSON response from the REST resource. What is concerning is that the error was not generated straight away: if you open a planning form that breaches the threshold the error message is generated immediately, but with the REST resource it took 18 minutes before erroring out. You also get an idea of the performance implications of using the REST resource, as taking 18 minutes to fail to load 71,427 rows of data is certainly not performant. Just to be clear, no data at all is loaded if the threshold limit is hit.

Now that I know the limit is the same as with data forms I can reduce the number of records.


This time the number of records should equate to just less than the 500,000 cell limit.


Failed again, time to look in the process logs.

12:40:25,072 DEBUG [AIF]: EssbaseService.performLoadRest – START
12:40:25,103 DEBUG [AIF]: requestUrl: http://localhost:9000/HyperionPlanning/rest/v3/applications/REF/plantypes/Plan1/importdataslice

12:58:48,362 ERROR [AIF]: The rest service request has failed: 400 Bad Request - {"detail":"The form RestDataGrid_1493383281849 is very large and could take considerable time to load. Do you wish to continue?","status":400,"message":"com.hyperion.planning.FormWarningException: The form RestDataGrid_1493383281849 is very large and could take considerable time to load. Do you wish to continue?","localizedMessage":"com.hyperion.planning.FormWarningException: The form RestDataGrid_1493383281849 is very large and could take considerable time to load. Do you wish to continue?"}

This time the error seems a little ridiculous, because it looks like a generic error related to opening large forms and has little relevance to the data load I am trying to perform. If I open a data form in planning which is just below the threshold, the same message is not generated.

I tried reducing the number of rows by 10,000 and then another 10,000, but the same error was still being generated. Then I remembered seeing the error message when opening large forms in planning before version 11.1.2.2.

I went back to trying to open a large form in 11.1.2.1 and this confirmed it is the same message.


So it looks like the REST resource is using the same data grid functionality that existed in older versions of planning.

In the versions of planning where this message was generated it was possible to set a threshold in the display options of the application.


If you look at the display options in the cloud or on-premise versions from 11.1.2.2 then the setting is not available.


I had a look around for the setting in the UI but unless I have missed it hidden somewhere I couldn’t find where to set it.

Before looking any further I wanted to be sure that it was the same setting as in 11.1.2.1, where the default value was 5000.

I thought the setting meant total cells in the grid, but from testing it looks like it is the number of data entry cells, so I tested by loading 5000 records from Data Management.


This time the export was successful.


The process log shows that the REST resource was successful and confirms the number of rows of data that were loaded.

14:50:22,865 DEBUG [AIF]: EssbaseService.performLoadRest - START
14:50:22,869 DEBUG [AIF]: requestUrl: http://localhost:9000/HyperionPlanning/rest/v3/applications/REF/plantypes/Plan1/importdataslice
14:51:07,384 INFO  [AIF]: Number of rows loaded: 5000, Number of rows rejected: 0

I tested again with 5001 records and the load failed.

2017-04-28 15:01:07,458 DEBUG [AIF]: requestUrl: http://localhost:9000/HyperionPlanning/rest/v3/applications/REF/plantypes/Plan1/importdataslice
2017-04-28 15:01:47,192 ERROR [AIF]: The rest service request has failed: 400 Bad Request - {"detail":"The form RestDataGrid_1493391668666 is very large and could take considerable time to load. Do you wish to continue?

So it looks like it is picking up the setting of 5000 that used to be around in planning in the past.

I then went to see if I could find the setting and ended up exporting the application settings using migration in the planning UI.


This exports an XML file named “Application Setting.xml” and in that file there is a reference to a form warning setting with a value of 5000.


This looks promising so I updated the value to one that would be higher than the number of records that would be loaded through Data Management.


I imported the application settings back into the application using migration.

Now to try the data load again with a total number of records just less than the main threshold.


Finally, I am able to successfully load the data.


The process log confirms the total number of rows loaded.

INFO  [AIF]: Number of rows loaded: 71426, Number of rows rejected: 0

So what about performance? Let us compare the timings for the different load methods with the same number of records as above.

Setting the load method to “Numeric Data Only” means a data load file is produced and then loaded using an Essbase data load rule.

20:38:45,064 INFO  [AIF]: Loading data into cube using data file...
20:38:45,936 INFO  [AIF]: The data has been loaded by the rule file.

The data load time was just under a second.

Now for “All data types with security” with an administrator, which means the Outline Load Utility will be the method for loading.

20:42:40 Successfully located and opened input file.
20:42:49 Outline data store load process finished. 71426 data records were read, 71427 data records were processed, 71427 were accepted for loading (verify actual load with Essbase log files), 0 were rejected.

Going by the log timestamps this data load took around nine seconds, still quick.

Time for the REST method with a non-administrator.

20:54:46,385 DEBUG [AIF]: EssbaseService.performLoadRest - START
21:14:43,146 INFO  [AIF]: Number of rows loaded: 71426, Number of rows rejected: 0

A data load time of just under 20 minutes.

So using the REST method takes a massive amount of time compared to the other two methods. As this is EPM Cloud it is not possible to see the system resources being used by this method, but I can imagine it is heavy compared to the other two.

If you are thinking of using this method to load data then consider the limitations and performance impact it has; it is probably only advisable for small data sets.


Sunday, 16 April 2017

EPM Cloud - masking data

One of the new features in release 17.04 is the ability to mask data using the EPM Automate utility, which also means it should be available through the REST API. The release documentation provides the following information:

“A new version of the EPM Automate Utility is available with this update. This version includes the maskData command, which masks all application data to ensure data privacy. You use this command only on test instances to hide sensitive data from users; for example, application developers, who should not access your business data.”

While the documentation says to only use it on test instances, it should be possible to run the command on a production instance, though in reality you probably wouldn’t want to do that anyway.

The command is available for FCCS, PBCS, ePBCS and TRCS.

It is worth pointing out before using the command that it will update all data and make it meaningless, so make sure you have a backup such as the maintenance snapshot, and make sure you run it against the correct instance, as there is no going back.

The EPM Automate command syntax is:

epmautomate maskdata -f

-f is optional and suppresses the prompt to confirm whether to run the command; you would only really use the parameter if you are going to be automating the process of masking data.
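If you were automating it, the sequence of commands would look something along these lines (the URL, credentials and identity domain are placeholders for your own instance):

epmautomate login serviceadmin password https://testinstance-mydomain.pbcs.us2.oraclecloud.com mydomain
epmautomate maskdata -f
epmautomate logout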

So let’s give it a test drive and run the command.


The output from the command gives us a little more insight into what is going on behind the scenes; as this is Software as a Service, we rarely get to understand the full story of what is happening.

For each database in the application the process flow is:
  • Export all data in masked format
  • Clear all data from cube
  • Load masked data back into the cube
This applies to all cubes within the application including ASO. For BSO it is a full export, so depending on the size of the databases the command could take a while to complete. I will go into more detail on how I believe the masking of data is being achieved shortly.

The following form shows an example of the data before running the mask data command.


After running the command the data has been masked and is now totally meaningless.


At the start of this post I said that the functionality should be available through REST. If you are not aware, the EPM Automate utility is basically built on top of REST, so if the functionality is in the utility then it should be possible to access it through REST.

The URL format for the REST resource is:

https://<cloudinstance>/interop/rest/v1/services/maskdata

I can replicate this with a REST client using a POST method.


The above response includes a URL which can then be accessed to check the status of the data masking.


A status of -1 means the process is running and a status of 0 indicates the process has completed; any other value means the process has failed.

For automation purposes this can be easily scripted in your preferred language. I have put together an example PowerShell script that calls the mask data REST resource and then keeps checking the status until the process has completed.

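The script is built along the following lines; treat it as a minimal sketch, as the instance URL and credentials are placeholders and I am assuming the status URL sits in the links collection of the response, so adjust it to match what your instance actually returns.

# placeholders - replace with your own instance and credentials
$instance = "https://testinstance-mydomain.pbcs.us2.oraclecloud.com"
$user = "mydomain.serviceadmin"
$password = "password"

# basic authentication header
$auth = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("$($user):$($password)"))
$headers = @{"Authorization" = "Basic $auth"}

# call the mask data REST resource
$response = Invoke-RestMethod -Uri "$instance/interop/rest/v1/services/maskdata" -Method Post -Headers $headers

# assumption: the job status URL is returned in the links collection of the response
$statusUrl = ($response.links | Select-Object -First 1).href

# keep checking the status until it is no longer -1 (running)
do {
    Start-Sleep -Seconds 15
    $status = Invoke-RestMethod -Uri $statusUrl -Method Get -Headers $headers
} while ($status.status -eq -1)

# 0 = completed, anything else = failed
if ($status.status -eq 0) { Write-Host "Data masking completed" }
else { Write-Host "Data masking failed with status: $($status.status)" }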

So how is the masking of data achieved? Well, I know some people believe that if you put the word cloud into any sentence then some kind of magic occurs, but unfortunately I can’t always accept that. The clue was there when running the EPM Automate “maskdata” command:

“This command will export the data in masked format”

In the on-premise world a new feature was introduced in 11.1.2.4 MaxL, and the following is taken from the Essbase new features readme.

“The MaxL export data statement includes grammar you can use to make exported data anonymous, wherein real data is replaced with generated values. This removes the risk of sensitive data disclosure, and can be used in case a model needs to be provided to technical support for reproduction of certain issues.”

The Essbase tech ref provides additional information on how the data is masked; for BSO:

“Export data in anonymized format. Anonymization removes the risk of sensitive data disclosure, and can be used in case sample data needs to be provided for technical support. Essbase replaces real data values with incremental values beginning with 0, increasing by 1 for each value in the block.”

If I take the Sample Basic BSO database for simplicity I can demonstrate what is happening in the cloud using MaxL.

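The statement I ran was along the lines of the following; level0 or input can be used in place of a full export, and the file name is just an example.

export database Sample.Basic anonymous data to data_file 'anonexport.txt';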

The above example is a full data export using the anonymous syntax and the output shows how the cells in each block have been replaced with incremental values.


I know the basic database usually has scenario as a dense dimension but I updated it to sparse for this example.

Each block in the database will have the same incremental values, including upper level blocks, which means the aggregated values for stored upper level members will be incorrect.

For on-premise if you wanted the data to be aggregated correctly you could run the anonymous export for level 0, clear, load and then aggregate. For the cloud, you don’t have that control so you could run the mask data command, clear upper level data and then aggregate with a business rule.
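As a rough sketch of the on-premise approach in MaxL (the file name and calc string are illustrative, and your aggregation logic may well differ):

export database Sample.Basic level0 anonymous data to data_file 'anonlev0.txt';
alter database Sample.Basic reset data;
import database Sample.Basic data from data_file 'anonlev0.txt' on error abort;
execute calculation 'CALC ALL;' on Sample.Basic;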

A spreadsheet retrieve highlights the difference before and after masking the data.


Moving on to ASO, which uses a slightly different approach to masking the data; the documentation provides further details on this:

“Export data in anonymized format. Anonymization removes the risk of sensitive data disclosure, and can be used in case sample data needs to be provided for technical support. Essbase replaces real data values with 1, for each value in the block.”

I am not sure I agree with the statement about blocks and would prefer it to say input level cells. Also, the way the data is anonymized is quite primitive, but at least it gives a true reflection of where data existed.

Anyway, let us take another example of masking the data by using the anonymous syntax against the ASO sample application.

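Again the statement would be along these lines, with the file name just an example:

export database ASOsamp.Sample anonymous data to data_file 'asoanon.txt';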

The exported data contains all the level 0 data values replaced with a 1.


Another spreadsheet retrieve shows the difference before and after anonymizing the data.



I am going to leave it there, but hopefully you now understand the concept of how the data mask functionality works. Even though it is a cloud command, the process can be replicated on-premise with a greater level of flexibility.

Thursday, 30 March 2017

FDMEE – diving into the Essbase and Planning security mystery – Part 2

In the last part I took an in-depth look at the current situation with on-premise FDMEE and user based security when the target is either Essbase or Planning. I pretty much concluded that a user’s security is not honoured and a global admin user must be defined, which basically means no security restrictions on the data that is being loaded.

In this part I am going to go through the same process but for EPM Cloud to see how it differs; the cloud products I will be concentrating on are PBCS (including the enterprise edition) and FCCS.

Let us dive right in and look at loading data to a target planning application. At present there are two load methods available, and we will start with “Numeric Data Only”.


This method is the equivalent of the on-premise load method “Numeric Data only – File”, but in the cloud the difference is that it can only be defined at application level; for on-premise it can be set at load rule level, which makes more sense. I am not sure why it has not been implemented like that in the cloud, but I am sure it will change, probably without any warning.

As always I am keeping things nice and simple and will be loading two rows of data.


I will start with a user that has been provisioned with the role of “Service Administrator” which is the same as a planning application administrator.


As expected the process ran through smoothly and the process log provides details on how the data is being loaded.

INFO  [AIF]: cloudServiceType: Planning, Resolved user name for application access: epm_default_cloud_admin
DEBUG [AIF]: Resolved essbase rule file name for loading: AIF0029
DEBUG [AIF]: Fetching rule file from essbase server for data loading: AIF0029
DEBUG [AIF]: Locked rule file: AIF0029
INFO  [AIF]: Loading data into cube using data file...
INFO  [AIF]: The data has been loaded by the rule file.
DEBUG [AIF]: Unlocked rule file: AIF0029

The process is the same as with on-premise where a data file and load rule are created then the data is loaded using the load rule.

The difference between cloud and on-premise is that the user loading the data is the default cloud admin and not the user running the export. The ‘epm_default_cloud_admin’ is a system generated account used to carry out admin type duties, and there is no way to change the behaviour of this account.

To be sure I provisioned a user with the power user role and ran the export again.


The process log confirms that the user running the export is being overridden with the default cloud admin account.

INFO  [AIF]: cloudServiceType: Planning, Resolved user name for application access: epm_default_cloud_admin
DEBUG [AIF]: Resolved essbase rule file name for loading: AIF0029
INFO  [AIF]: The data has been loaded by the rule file.

In the cloud, there is no option to set a global user, so when loading data using the "Numeric Data Only" method the user will be ignored and the admin account will take control of the process. This also means that it is not possible to honour the user’s data level security using this method.

So, in some respects there is a similarity between cloud and on-premise when using this method: for it to function correctly it requires an admin type role, and there is no control on restricting security at data level.

Let us switch back to a service administrator user and set the load method to “All data types with security”.


You would usually set this method if you are planning on loading non-numeric data, which I previously wrote a post about; when this method was first introduced it was called “HPL” and was subsequently changed.

The method name includes a reference to security, so maybe that is an indication that it operates differently.

The export was successfully run again.



The process log contains important details to what is happening behind the scenes.

INFO  [AIF]: cloudServiceType: Planning, Resolved user name for application access: John.Goodwin@mtl.com
DEBUG [AIF]: Overrode info.loadMethod for the admin user: OLU
Outline data store load process finished. 2 data records were read, 3 data records were processed, 3 were accepted for loading (verify actual load with Essbase log files), 0 were rejected.

This time the default cloud admin has not overridden the user running the export, and the method used to load the data is the Outline Load Utility (OLU). As this user has the service administrator role, security restrictions on the data are not a concern, so the OLU can be used. In the last part I showed what happens if the user is not an admin and tries to load data using the OLU.

Now let’s look at what happens with a user that has a power user role assigned, which I believe is the minimum role level requirement for Data Management access. Please note I have not assigned any access permissions for the user in the planning application yet, as I just want to show what happens.

The export is run and it fails.


This time in the process log there are noticeable differences to the way the data is being loaded.

INFO  [AIF]: cloudServiceType: Planning, Resolved user name for application access: l.howlett@mtl.com
DEBUG [AIF]: Overrode info.loadMethod for the non-admin user: REST
DEBUG [AIF]: requestUrl: http://localhost:9000/HyperionPlanning/rest/v3/applications/Vision/plantypes/Plan1/importdataslice
ERROR [AIF]: The rest service request has failed: 400 Bad Request - {"status":400,"message":"java.lang.NullPointerException","localizedMessage":"java.lang.NullPointerException"}

There is no overriding of the user performing the export and as the user is not an admin the method for loading data is with the REST API.

If you are not aware, there are REST resources available for importing, exporting and deleting data by building up a data grid, which I will go into in more detail shortly.

The REST call failed, and even though the error message is not clear, the reason behind this is that no access permissions have been defined.

Now I am going to add access permissions but not for all dimensions and run the export again.



The export failed again, which was to be expected, and now the error message that is produced is something you should be familiar with if you have worked with planning.

INFO  [AIF]: cloudServiceType: Planning, Resolved user name for application access: l.howlett@mtl.com 
DEBUG [AIF]: Overrode info.loadMethod for the non-admin user: REST
DEBUG [AIF]: requestUrl: http://localhost:9000/HyperionPlanning/rest/v3/applications/Vision/plantypes/Plan1/importdataslice
ERROR [AIF]: The rest service request has failed: 400 Bad Request - {"status":400,"detail":"You are trying to open the form, but cannot because all of the required dimensions are not present. Possible causes may be that you do not have access to at least one member of a required dimension, or the member selection resulted in no members present. Contact your administrator."}

The error message is the same one you receive when a user opens a form and does not have all the required access permissions to the members in the form, so it looks like the REST resource sits on top of existing functionality built into forms.

I updated the access permissions so the user has write access to all the members contained in the data load.


This time the export was successful and the process log confirms the number of rows loaded.

INFO  [AIF]: cloudServiceType: Planning, Resolved user name for application access: l.howlett@mtl.com
DEBUG [AIF]: Overrode info.loadMethod for the non-admin user: REST
DEBUG [AIF]: requestUrl: http://localhost:9000/HyperionPlanning/rest/v3/applications/Vision/plantypes/Plan1/importdataslice
INFO  [AIF]: Number of rows loaded: 2, Number of rows rejected: 0

Opening a form with the same user shows that the data was correctly loaded.


So what is happening behind the scenes to load the data using the REST API? Well, when an export is initiated a file is created which contains JSON with a grid based on the data being loaded.


The JSON in the file is then posted to the "importdataslice" REST resource, and the details of the user running the export are passed in so that security is honoured.

The URL format for the REST resource is:
https://<cloudinstance>/HyperionPlanning/rest/v3/applications/<app>/plantypes/<cube>/importdataslice

An example of the JSON that is being generated for the two rows of data is:

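Based on the documented grid format, the body will be shaped something like this; the data values are illustrative, I have taken the member names from the rejected cell lists shown later in the post, and I am assuming all the non-scenario members sit on the rows with an empty POV:

{
  "pov": [],
  "columns": [["Actual"]],
  "rows": [
    {"headers": ["1520", "110", "No Version", "BaseData", "P_000", "FY17", "Mar"], "data": ["250"]},
    {"headers": ["2210", "110", "No Version", "BaseData", "P_000", "FY17", "Mar"], "data": ["500"]}
  ]
}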

The response that is returned is in JSON format and contains information about the number of cells that were accepted and rejected.


The JSON grid that is being generated in my example would look like the following in terms of a planning form.


To demonstrate what happens with rejected cells I updated the JSON to include an account member “4110” which the user does not have access to.


The response shows that one cell was rejected and contains the row of the grid that was rejected; it does not provide the actual member that was rejected though.


I will take another example in Data Management and load four rows of data. The user does not have access to entity member “111”, so in theory two rows of data should be rejected.


I was expecting the export process to be successful but contain warnings; however, it looks like if any invalid data is encountered the process status is shown as a failure.



Investigating the process logs shows that two rows were loaded and two were rejected, which is what I would expect. The list of rejected rows is written to the log and is generated from the REST response, as in my previous example.

INFO  [AIF]: cloudServiceType: Planning, Resolved user name for application access: l.howlett@mtl.com
DEBUG [AIF]: Overrode info.loadMethod for the non-admin user: REST
DEBUG [AIF]: requestUrl: http://localhost:9000/HyperionPlanning/rest/v3/applications/Vision/plantypes/Plan1/importdataslice
INFO  [AIF]: Number of rows loaded: 2, Number of rows rejected: 2
INFO  [AIF]: List of rejected cells: ["[Actual, 1520, 111, No Version, BaseData, P_000, FY17, Mar]","[Actual, 2210, 111, No Version, BaseData, P_000, FY17, Mar]"]

I was interested to know whether valid intersections would restrict the data being loaded; as the REST functionality looks to be built on top of form data grids, the same logic should apply.

I updated the user’s access permissions so they could write to entity members “110” and “111”, and then restricted permissions to entity “111” using a valid intersection.


The export failed.



The rows containing entity member “111” were rejected so valid intersections are honoured when loading data using the REST method.

INFO  [AIF]: Number of rows loaded: 2, Number of rows rejected: 2
INFO  [AIF]: List of rejected cells: ["[Actual, 2210, 111, No Version, BaseData, P_000, FY17, Mar]","[Actual, 1520, 111, No Version, BaseData, P_000, FY17, Mar]"]

My previous examples have all been using PBCS, which also applies to E-PBCS, so how about FCCS? With FCCS the load type can be set at rule level.


There are only two options available which are Data and Journal.


I am not going to bother covering the journal option, as setting it will generate a journal in FCCS, which does not relate to what this post is about.

I loaded the following data set with a user that has the Service Administrator role applied.


The process log confirms that when using the “Data” load type in FCCS it acts in the same way as “All data types with security” in PBCS: if the user has the Service Administrator role it will load the data using the Outline Load Utility.

INFO  [AIF]: cloudServiceType: FCCS, Resolved user name for application access: John.Goodwin@mtl.com
DEBUG [AIF]: Overrode info.loadMethod for the admin user: OLU
INFO  [AIF]: Number of rows loaded: 1, Number of rows rejected: 0

If I switch to a power user with the correct access permissions for the data that is being loaded, then the export is successful.


INFO  [AIF]: cloudServiceType: FCCS, Resolved user name for application access: l.howlett@mtl.com
DEBUG [AIF]: Overrode info.loadMethod for the non-admin user: REST
DEBUG [AIF]: requestUrl: http://localhost:9000/HyperionPlanning/rest/v3/applications/FCCS/plantypes/Consol/importdataslice
INFO  [AIF]: Number of rows loaded: 1, Number of rows rejected: 0

The method used to load the data is the REST API; this is basically the same as PBCS, which means the user’s access permissions will be honoured.

The only concern I have about using the REST method would be the performance implications of loading a large amount of data, as it will be the equivalent of creating a huge form; I have not yet had the chance to test whether it does impact performance.

That covers when the target is Planning, so how about when an ASO Essbase cube is the target?

The load methods are the same for Essbase as they are for Planning and currently can only be set in the target application options. I am going to start with “Numeric Data Only”.


The following data set is loaded by a Service Administrator.


When using the numeric data method the concept is the same as with Planning: the default cloud admin overrides the user, a data file and load rule are created, and the data is loaded with the rule.

INFO  [AIF]: cloudServiceType: Planning, Resolved user name for application access: epm_default_cloud_admin
DEBUG [AIF]: Resolved essbase rule file name for loading: AIF0031
DEBUG [AIF]: Fetching rule file from essbase server for data loading: AIF0031
DEBUG [AIF]: Locked rule file: AIF0031
INFO  [AIF]: Getting load buffer for ASO data load...
INFO  [AIF]: Initializing load buffer [1]
INFO  [AIF]: Successfully initialized the load buffer
INFO  [AIF]: Loading data into cube using data file...
INFO  [AIF]: The load buffer [1] has been closed.
INFO  [AIF]: The data has been loaded by the rule file.
DEBUG [AIF]: Unlocked rule file: AIF0031

Now to switch over to a power user and repeat the process.


Once again the process is the same as when Planning is the target; the default admin overrides the user and loads the data.

INFO  [AIF]: cloudServiceType: Planning, Resolved user name for application access: epm_default_cloud_admin
DEBUG [AIF]: Resolved essbase rule file name for loading: AIF0031
DEBUG [AIF]: Fetching rule file from essbase server for data loading: AIF0031
DEBUG [AIF]: Locked rule file: AIF0031
INFO  [AIF]: Getting load buffer for ASO data load...
INFO  [AIF]: Initializing load buffer [1]
INFO  [AIF]: Successfully initialized the load buffer
INFO  [AIF]: Loading data into cube using data file...
INFO  [AIF]: The load buffer [1] has been closed.
INFO  [AIF]: The data has been loaded by the rule file.
DEBUG [AIF]: Unlocked rule file: AIF0031

How about setting the load method to “All data types with security”?


I will try with the Service Administrator user as there shouldn’t be any problems.


Spoke too soon; the process failed, so time to look at the logs.

INFO  [AIF]: cloudServiceType: Planning, Resolved user name for application access: epm_default_cloud_admin
ERROR [AIF]: Essbase ASO application is not created from any Planning application, please use other data load methods.

Hold on, that error does not make sense; this is PBCS, so the ASO cube must have been created from the Planning application, there is no other way.

I tried with the power user and received the same error message, so I decided to test whether I could load data to the ASO cube using the REST API directly; it should be possible because I can create a form and enter data against the ASO cube.


The response confirms that it is possible to use the REST resource to load data to an ASO cube.


At the moment I am not sure why it is possible to select the “All data types with security” method if it doesn’t work; maybe I am missing something, or it is a bug, or a feature that will be implemented at a later stage. If I find out any further information I will update this post.

That is all I am going to cover on the different security type behaviour between admin and non-admin users for on-premise, hybrid and cloud. It certainly can be confusing with all the possible scenarios but hopefully I have cleared it up over the two posts.