Friday, 26 May 2017

FDMEE/Data Management – All data types with auto-increment of line item – Part 2

Moving swiftly on to the second part, where I am going to look at the “all data types in planning file format” load type in Data Management. The functionality is basically the same as what I covered in the last part with the on-premise Outline Load Utility, but it has now been built into Data Management. I am hoping that the examples in the last post will make it easier to follow when setting up the integration in Data Management.

Currently the functionality only exists in EPM Cloud, but I would expect it to be pushed down to on-premise FDMEE, possibly when 11.1.2.4.220 is released. I will update this post once it is available.

Once again I will start out with the same employee benefits source data, but this time the file can be kept much simpler as Data Management will handle the rest.
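As a rough sketch of the shape of the file (not the exact data from the post), it is just a header row of driver members followed by the data values; “Grade”, “Benefit Type” and “Active” come up later in this post, and the remaining two columns are placeholders:

Grade,Benefit Type,Active,<Driver 4>,<Driver 5>
Grade 1,Health Insurance,Yes,...,...
<three further rows, each with a different Grade>

Note there are no POV columns in the file, as these will be fixed in the import format.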


Just like with on-premise, the data load settings need to be applied; these can be accessed through the navigator under Integration.
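For this example the settings work out along these lines; the field names are from memory, so treat this as a sketch rather than an exact copy of the screen:

Data Load Dimension:         Account
Driver Dimension:            Property
Data Load Dimension Parent:  Total Benefits
Driver Dimension Members:    Grade, Benefit Type, Active, ...  (Grade and Benefit Type acting as the unique identifiers)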


It is a shame that these settings cannot be dynamically generated or defined in Data Management instead of having to be set in planning.

On to Data Management and creating the import format; the file type is set to “Multi Column – All Data Type”.


In the import format mappings I have basically fixed the members to load to by entering them into the expression field; this replicates the example in the last part using the OLU, where the POV was fixed in the file.


For the account dimension I could have entered any value in the expression field as it will be mapped using the line item flag in the data load mappings.

The Data dimension is defined by selecting add expression and choosing Driver; I explained this method in detail in a previous blog on loading non-numeric data.


Basically the driver dimension is selected, which in my example is the Property dimension; the first row is the header row and contains the driver members, and the data spans five columns.


The mapping expression window provides examples if you are unsure of the format, and the expression field will be updated with the values entered.
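For reference, with the settings described above (driver dimension Property, header row 1, data across five columns) the expression ends up looking something like the line below; the exact column numbers are my assumption, as they depend on where the data columns sit in the file:

Driver=Property;HeaderRow=1;Column=2,6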


The data rule is created in the same way as any other integration.



The difference comes when setting up the target options: the load method this time will be “All data types in Planning File Format”.

There are also properties to define the Data Load and Driver dimensions; these must match what has been set in the Data Load Settings in planning.
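In this example that means the two properties simply mirror the planning settings, along the lines of:

Data Load Dimension:  Account
Driver Dimension:     Property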


It seems a bit excessive having to set the driver dimension in planning, in the import format and in the data load rule; it would be nice if all these settings could be applied in one place in Data Management.

There is only one difference with the data load mappings: for the data load dimension, the LINEITEM format must be used.


The target value needs to be manually entered with the data load dimension parent member, but after going through my example with the OLU it should be clearer why it is required.
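With “Total Benefits” as the data load dimension parent in this example, the like mapping for the Account dimension would have a target value along the lines of:

<LINEITEM("Total Benefits")>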

On to the data load, where the data columns in the file will be converted into rows in the workbench.


In my source file there are five columns and four rows of data, so a total of twenty records are displayed in the workbench.

The final step is to export and load the data into the planning application.


All good, but a quick check of the data form in planning shows something is not right.


Only the equivalent of one row of data has been loaded, and the data that has been loaded is not correct.

The process log confirms that the Outline Load Utility is definitely being used to load the data, just like in the earlier example I went through, though in this case only one row has been processed and loaded.

13:10:36 UTC 2017]Outline data store load process finished. 1 data record was read, 1 data record was processed, 1 was accepted for loading (verify actual load with Essbase log files), 0 were rejected.

13:10:36,825 INFO  [AIF]: Number of rows loaded: 1, Number of rows rejected: 0


I checked the file that Data Management had generated before loading with the OLU, and even though the format was correct there was only one record of incorrect data in the file.


The file should have been generated like:
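Piecing it together from the pivot query in the log below, the expected layout is one record per source row: the Account column carrying the LINEITEM flag from the mapping, a column per driver member, then the Point-of-View string and the data load cube name. A sketch with placeholder values:

Account,Grade,Benefit Type,Active,<Driver 4>,<Driver 5>,Point-of-View,Data Load Cube Name
<LINEITEM("Total Benefits")>,Grade 1,Health Insurance,Yes,...,...,"<Entity>,<UD1>,<UD2>,<UD3>,<Scenario>,<Year>,<Period>",Plan1
<LINEITEM("Total Benefits")>,Grade 2,...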


The file is generated by converting rows into columns using the Oracle database PIVOT XML query, with the driver members and values output as XML.

13:10:29,528 DEBUG [AIF]: SELECT * FROM ( SELECT ACCOUNT,UD4,DATA,'"'||ENTITY||','||UD1||','||UD2||','||UD3||','||SCENARIO||','||YEAR||','||PERIOD||'"' "Point-of-View"
                      ,'Plan1' "Data Load Cube Name" FROM AIF_HS_BALANCES WHERE LOADID = 598 )
PIVOT XML( MAX(DATA) FOR (UD4) IN (SELECT UD4 FROM AIF_HS_BALANCES WHERE LOADID = 598) )

I replicated the data load in on-premise FDMEE, ran the same SQL query and only one row was returned.


The query returns the driver members and values as XML, which must then be converted into columns when generating the output file.
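If you have not come across PIVOT XML before, each pivoted row comes back as a single XMLTYPE column containing a PivotSet; roughly like this, with illustrative values:

<PivotSet>
  <item><column name = "UD4">Grade</column><column name = "MAX(DATA)">Grade 1</column></item>
  <item><column name = "UD4">Benefit Type</column><column name = "MAX(DATA)">Health Insurance</column></item>
  ...
</PivotSet>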


At this point I thought it might be a bug, but thanks to Francisco for helping keep my sanity: I was missing a vital link which was not documented. I am sure the documentation will be updated at some point to include the missing information.

If you have records against the same POV then you need a way of making the data unique, so that when the SQL query is run all rows are returned; this is achieved by adding a lookup dimension and identifying a driver member that will make the data unique.
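The reason this works is that an Oracle PIVOT implicitly groups by every column that is not being pivoted, so when the account, POV and cube values are identical across rows, MAX(DATA) collapses them into a single record. A simplified sketch of the difference the lookup column makes, with illustrative table and column names:

-- Every source row shares the same grouping key (pov), so the four
-- rows collapse into one pivoted record:
SELECT * FROM (SELECT pov, driver_member, data_value FROM stg_balances)
PIVOT XML (MAX(data_value) FOR driver_member IN (SELECT DISTINCT driver_member FROM stg_balances));

-- Including a column that is unique per source row (the LineItemKey
-- lookup dimension) gives each row its own group, so all rows survive:
SELECT * FROM (SELECT pov, lineitem_key, driver_member, data_value FROM stg_balances)
PIVOT XML (MAX(data_value) FOR driver_member IN (SELECT DISTINCT driver_member FROM stg_balances));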

If you take the data set I am loading, the driver member “Grade” values are unique, so this can be defined as a lookup dimension.

To do this you first add a new lookup dimension to the target application.


The lookup dimension name must start with “LineItemKey”, and depending on the data being loaded you may need multiple lookup dimensions to make the records unique.

Next in the import format mappings the dimension should be mapped to a column containing the driver member.


The “Grade” member is in the first column in my source file so I map the lookup dimension to that.

After adding a like-for-like data load mapping for the lookup dimension, the full load process can be run again.


The workbench now includes the lookup dimension, which is mapped to the driver member Grade.

The SQL statement to generate the file now includes the lookup dimension, which was defined as column UD5 in the target application dimension details.

17:16:36,836 DEBUG [AIF]: SELECT * FROM ( SELECT ACCOUNT,UD4,DATA,'"'||ENTITY||','||UD1||','||UD2||','||UD3||','||SCENARIO||','||YEAR||','||PERIOD||'"' "Point-of-View"
                      ,'Plan1' "Data Load Cube Name" ,UD5 FROM AIF_HS_BALANCES WHERE LOADID = 634 )
PIVOT XML( MAX(DATA) FOR (UD4) IN (SELECT UD4 FROM AIF_HS_BALANCES WHERE LOADID = 634) )

17:16:36,980 INFO  [AIF]: Data file creation complete


Once again I replicated this in on-premise, and the query correctly returns four records.


Even though the query results include the lookup dimension, it will be excluded when the output file is created.


This time the process log shows that four records have been loaded using the OLU.

17:16:49 UTC 2017]Outline data store load process finished. 4 data records were read, 4 data records were processed, 4 were accepted for loading (verify actual load with Essbase log files), 0 were rejected.

17:16:49,266 INFO  [AIF]: Number of rows loaded: 4, Number of rows rejected: 0


The planning form also confirms the data has been successfully loaded and is correct.


Now that I have the integration working I can test out the rest of the functionality. I am going to load a new set of data, but where data already exists for the unique identifier driver members.


The unique identifier members are “Grade” and “Benefit Type”; data already exists under “Total Benefits” for “Grade 1” and “Health Insurance”, so the data being loaded should replace the existing data.
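To illustrate with a placeholder row (only the identifiers and the Active value are taken from this example), the incoming record matches the existing one on both unique identifiers, so it updates that record rather than occupying a new child member:

Grade 1,Health Insurance,No,...,...    <- same Grade and Benefit Type, Active changes to No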


The data has been overwritten, as the value for Active has been changed from “Yes” to “No”.

Now let us load a new set of data where there is no matching data for the unique identifiers.


Before the load there was no data for “Grade 3”, so the data should be loaded to the next available child member of “Total Benefits” where no data exists for the given POV.


The data has been loaded against the next available member, which is “Benefit 5”, as no data previously existed for the given POV.

Next, let us test what happens when loading a data set with no matching driver member identifiers, now that all the child members of the data load dimension parent are already populated.


The export fails and the process log contains the same error as shown when testing the OLU as in the last post.

13:21:07 UTC 2017]com.hyperion.planning.HspRuntimeException: There is no uniquely identifying child member available for this member defined in Data Load Dimension Parent. Add more child members if needed.

13:21:07 UTC 2017]Outline data store load process finished. 1 data record was read, 1 data record was processed, 0 were accepted for loading (verify actual load with Essbase log files), 1 was rejected.


As the log suggests, for the export to succeed additional members would need to be added under the data load dimension parent.

Since adding the lookup dimension all the data values have been unique for the “Grade” member, so there have been no problems. If I try and load a new set of data where the values are no longer unique, you can probably imagine what is going to happen.


The above data set contains “Grade 1” twice, so the lookup dimension is no longer unique, and even though the load is successful we are back to where we were earlier, with one record of incorrect data being loaded.


This means another lookup dimension is required to make the data unique again so I added a new lookup dimension, mapped it to the “Benefit Type” column in the import format, created a new data load mapping for the new dimension and ran the process again.


In the workbench there are now two lookup dimensions present, which should make the data unique when creating the export file.


Much better, the data loaded to planning is as expected.

On the whole, the functionality in Data Management acts in the same way as the on-premise Outline Load Utility. I do feel the setup process could be made slicker, and you really need to understand your data: if you don’t define the lookup dimensions to handle the uniqueness correctly, you could end up with invalid data being loaded to planning.

1 comment:

  1. G'day John,

    Yep - you've definitely answered my question in the second post!

    We'll have a play with it - in some ways it's a much cleaner format for certain data sets, but I agree completely that it feels much more complicated with all the steps.

    Biggest concern is that you wouldn't necessarily want to put it in as it currently stands - because they are likely to fix it all in a monthly patchset.

    Thanks, as always, for spending the time to write it up!

    Cheers
    Pete

