Saturday, 2 December 2017

EPM Cloud – Data Management – Loading metadata is here

Well, it is certainly a piece of functionality that has been on the roadmap for a long time, but it has finally arrived in the 17.11 EPM Cloud release.

I must admit I was quite surprised that the functionality has been added, as the Enterprise Data Management Cloud Service offering will be released in the very near future. EDMCS will include application adaptors that allow metadata integration with EPM Cloud, so I thought Oracle would try to push everyone down that route for metadata, but clearly not.

First of all, loading metadata is now available for any of the EPM Cloud services that are built on the planning and budgeting platform, which means it is supported for the following:
  • Oracle Planning and Budgeting Cloud Service
  • Oracle Enterprise Planning and Budgeting Cloud Service
  • Oracle Financial Consolidation and Closing Cloud Service
  • Oracle Tax Reporting Cloud Service 
The only supported source in this release is a flat file, and loading metadata is available for standard, custom and Smart List dimensions.

The format for the metadata is based on the same properties as importing metadata directly into a target EPM Cloud application, so there should not be much of a learning curve. Behind the scenes the outline load utility (OLU) loads the metadata, which should be familiar if you have an on-premise Planning background.
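To give an idea of what that means, the source file follows the same layout as an outline load file, with a column for the member, its parent and each property you want to set. The snippet below is purely an illustrative example I have put together (the member names and property values are made up), and the exact columns will depend on which properties you include:

Account,Parent,Alias: Default,Data Storage,Account Type
Total Revenue,Gross Profit,Revenue,Dynamic Calc,Revenue
410000,Total Revenue,Product Revenue,Store,Revenue
420000,Total Revenue,Services Revenue,Store,Revenue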

To test out the functionality I am going to go through the process of loading a couple of standard dimensions, a custom one and a Smart List.

To ensure I am testing with valid metadata, I took exports of the Account, Entity, Product and Smart List dimensions from the Vision sample application and edited the exported CSV files to include a selection of the most important properties.


I confirmed the files were valid by loading the CSV files back into the planning application, clearing all the members first. After testing successfully, I cleared down the dimensions so they were empty again.

In theory, because I am using the same source files, I shouldn’t encounter any problems loading them through Data Management; I say in theory because reality can be totally different.

In Data Management, if you attempt to add a new application you will see there is a new option available called “Dimensions”.


After selecting “Dimensions” a new window will be displayed where a registered application can be selected and a prefix entered.


The prefix is optional but will probably be a good option if you plan on carrying out multiple dimension builds with different properties enabled.

Six dimension type target applications will be automatically created, these are Account, Entity, Scenario, Version, Custom and Smart List.


The format for the name of the dimension application will be:

“Prefix” + “Application Name – ” + “Dimension Type”

So, taking the format literally, with a prefix of “MD”, an application named Vision and the Account dimension type, the name would come out as “MDVision – Account”.

In the dimension details, where you would be used to seeing dimension names, there is a list of dimension properties.


This is the same concept as in one of my previous blogs where I created a custom application to build Essbase dimensions.

By default, the properties that are enabled are the dimension member, Parent and Alias, so in this case Account, Parent and Alias.

The properties that are selected will be available in the import format, so you should enable only the ones you are going to use. It is possible to create additional dimension applications using the prefix option, so you can have metadata builds with different properties enabled.

If you enable a property and save, the data table column name will be populated.


The column names don’t really make that much sense for metadata because they are the same ones that are used for data loading. They are just the names of the columns in the Data Management relational database tables and not something you should get too concerned about; the main thing is that once a property is enabled it is mapped to a table column name.

Each dimension type will have a different set of properties, so for example the Entity dimension will have a property called “Base Currency” because it is only applicable to Entities.

If a property is not in the list and is required then it is possible to manually add one.


The source file I am going to use to load the account metadata is:


The properties were enabled to match that of the file.


In the dimension application options, there are properties available to define the dimension name and to refresh the database; these settings can also be applied at load rule level. The dimension name only really needs to be set if you are using a custom dimension or want to override the dimension that the metadata will be loaded to.


On to the import format: the target dimension was selected and the file type set as “Delimited – All Data Type”, which is a requirement for loading metadata, probably so that the load is switched over to the OLU.


The properties that were enabled are made available as targets in the import format mappings.


I mapped each column of the source file to the relevant target property. The data target can be ignored as it is not required; it appears as standard when using an all data type.

The location is the same concept as with data loading and the import format was selected.


I created a new category called “Metadata” just to make the POV more meaningful, an existing one could have been used.


I didn’t opt for creating a new period mapping and stuck with an existing mapping.


I could have created a period mapping just for loading metadata, but I thought that by going down the route of loading by period the data will be stored and not overwritten each period, which could be useful if you want to view the metadata that was loaded against a given period.

Each of the dimension properties will be available to apply mappings to.


The mappings operate in the same way as with data loading, so it should not be anything new to you.

For this first example, I stuck with like for like mappings as the source file is in the correct format.


When creating a data load rule against a standard dimension you will see that the plan type is automatically set as “REGULAR”.


I uploaded and selected the source file.

In the rule target options, there are the same properties that are available at target application dimension level.


The dimension name can be left blank unless it needs to be overridden.

There is also the option to refresh the database, meaning that after the metadata is loaded to the target application it will be pushed down to Essbase; you would usually set this to Yes if you plan to load data after the metadata load.

If you plan on loading multiple dimensions it is probably best to set the refresh option to Yes only for the last metadata load; otherwise you will be refreshing after each load, which is definitely not efficient.

Now that everything is in place it is time to import the metadata into the workbench.


The mapping validation failed, and here lies the first problem: there are rows of metadata where the source will be null, which should be perfectly acceptable. In the above example the validation failed due to there being no member alias.

For metadata, you would want to keep the target mapping as null where the source is null, but as Data Management has been primarily designed to load data, a target mapping value is usually required and validation fails if one doesn’t exist.

Whether this is the correct way or not, I created a SQL mapping to map null values to a space; a space should be ignored when loading to the target application.
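Just to illustrate the idea, and not taken from my application (I am assuming purely for illustration that the Alias property ended up in the UD1 data table column), the SQL mapping boils down to a CASE expression along these lines:

CASE WHEN UD1 IS NULL THEN ' ' ELSE UD1 END

Any property column where null source values are expected can be handled the same way.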


After adding this mapping, the validation issue was fixed.


The workbench will have columns for source/target data which can be ignored as they only apply to loading all data type data.


On to the export which will load the metadata to the target application.


Next problem, the export failed.

If any rejections occur in the load to the target application the process will fail, although the valid records will still be loaded.


The process log contained the following information.

Possible circular reference detected, aborting sort and continuing with Load.  76 possible circular reference records found.
com.hyperion.planning.InvalidMemberException: The member 0001 does not exist for the specified cube or you do not have access to it.
com.hyperion.planning.InvalidMemberException: The member 0002 does not exist for the specified cube or you do not have access to it.
……… and so on
Outline data store load process finished. 266 data records were read, 266 data records were processed, 194 were accepted for loading (verify actual load with Essbase log files), 72 were rejected.


So for some reason a circular reference was found; this stopped the sorting of the records, which subsequently meant records were rejected because their parent members did not exist yet.

I had a look at the file that Data Management produced and tried to load against the target application; the file was the same as my original source file except for the order of the records.

I tested loading the file that Data Management produced directly into the planning application using the import functionality and it failed with the same type of errors; I then loaded my original source file and it imported perfectly.

It looks like the issue is down to a combination of the order of the members in the file that is being created by DM and that there are shared members involved.

The account dimension that was built looks like:


If I load the source file (not the one DM produces) directly to planning then it looks like:


Not only is there a problem with members being rejected, the order of the members is different.

Basically what happens is Data Management executes the following SQL to produce the file that is then loaded to the application.

DEBUG [AIF]: SELECT ACCOUNT,ENTITY,ICP,UD1,UD2,UD3,UD4,UD5,UD6,UD7,UD8
 FROM AIF_HS_BALANCES
 WHERE LOADID = 31
 ORDER BY 2,1

The order is defined by parent then member, in ascending order, so it looks like it doesn’t matter what order your source file is in, the target order will not match it, and there doesn’t currently appear to be any way to change this. This may not be an issue but it is something to be aware of.

Update: this issue should now be resolved, as the data is ordered by the DATAKEY column, so the records retain the order in which they were inserted:

DEBUG [AIF]: SELECT ACCOUNT,ENTITY,ICP,UD1,UD2,UD3,UD4,UD5,UD6,UD7,UD8
 FROM AIF_HS_BALANCES
 WHERE LOADID = 31
ORDER BY DATAKEY

If you plan on loading formulas then you will probably hit issues if the length is over 80 characters; when the source file is loaded into the repository, be prepared for the following type of failure and error message:

ERROR [AIF]: EPMFDM-140274:Message - ORA-12899: value too large for column "DRMQHJQKBLPQ"."TDATASEG_T"."UD2" (actual: 1173, maximum: 80)

Using the import metadata feature in the planning application there is no problem loading large formulas.

Also, if the formula is split over multiple lines in the source file:


If it is loaded directly through the import metadata feature in the planning application the formula will load successfully; if the same source file is loaded through Data Management the formula will be split over multiple records and be completely messed up.


My assumption is that to be able to load formulas the character length will need to be under 80 and the formula will need to be on a single row.

Moving on to the entity dimension, I followed the same process and besides the null source value mapping issue the metadata loaded successfully.


This produced the following dimension hierarchy in the planning application.


If I load the source file directly to planning then again you see there is a difference in the ordering.


Because of the way the metadata was ordered, it also caused an issue with shared members occurring before their base members.


Now for a custom dimension, which in my example is Product; I started out with the following source file:


In my source file, I have some level 0 members which have no parent, and the aim is to map these to a parent named “P_Unmapped”.

Once again, the process for setting up the integration is the same. There is a slight difference with the target plan type name in the data load rule; the name will be automatically set as “CUSTOM”.


As this is a custom dimension and there could be multiple custom dimensions, the target dimension name needs to be defined in the load rule target options.


I added a SQL mapping to map the null parent members and assign them to “P_Unmapped”.
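Again this is only a sketch, and the column name is an assumption for illustration (here I am assuming the Parent property is held in the ENTITY data table column); the CASE expression follows the same pattern as the null alias mapping:

CASE WHEN ENTITY IS NULL THEN 'P_Unmapped' ELSE ENTITY END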


The null parent members were correctly mapped after importing the source file.


The export was successful and the hierarchy was built as expected.


So how about Smart Lists? Well, the concept is the same, and I started out with the following source file, which contained the definition for one Smart List.


The file could have contained multiple Smart List definitions; I just went for one for demonstration purposes.

The data load rule will have a default target plan type set as “SMARTLIST”.


There is no need to set the dimension name in the target options of the rule.

Besides having to map the null source properties there were no other differences to mention.


The Smart List was created in the planning application with valid entries.


If you want the display order to be by ID, it might be worth adding the entry ID property to the target dimension settings and including it in the source file; this way the order will be the way you want it.

Finally, I want to briefly cover deleting members. If you look at the properties available in the dimension details of a target dimension application, you will see there is a property called “Operation”.


This property controls whether the record being loaded performs an insert/update or a delete; the possible values for this property are:
  • update
  • delete level 0
  • delete idescendants
  • delete descendants
Usually you would not pay attention to the operation property, as the default when it is not specified is to update; if you want to delete members you can use one of the other values.

The aim of this next example is to use Data Management to delete the descendants of “P_Unmapped” in the product dimension.


For some reason, the custom dimension does not include the operation property by default so I added a new property.

So that I wouldn’t affect my original product metadata load, I added a new dimension application with a prefix and enabled only the “Parent” and “Operation” properties.


The source file contains the member and the operation to delete descendants.
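Purely as an illustration of the layout (the actual columns depend on the properties you enable and map in the import format), the file only needs to be along these lines:

Product,Operation
P_Unmapped,delete descendants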


The remaining setup steps are the same, so there is no need to go through them again; the full process was successful.


In the target application, the descendants of “P_Unmapped” were deleted.


In terms of automation the same applies as with loading data: the rules can be scheduled in Data Management, and if you want multiple rules to be run they can be added to a batch.

Uploading the source file and running the rule or batch can be carried out using EPM Automate; alternatively, the REST API can achieve the same results.
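As a very rough sketch of what that could look like with EPM Automate (all of the names, periods and folder below are made up for illustration, and the import/export mode parameters for rundatarule should be checked against the documentation for your release):

epmautomate login admin password https://instance-pbcs.oraclecloud.com identitydomain
epmautomate uploadfile Account_Metadata.csv inbox/AccountMetadata
epmautomate rundatarule MetadataAccountRule Dec-17 Dec-17 REPLACE STORE_DATA Account_Metadata.csv
epmautomate logout

The same steps can be driven through the Data Management REST API if you prefer scripting over the command line.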

Well, that concludes my first look into loading metadata through Data Management. I know I have experienced some issues with the functionality, but that could be because it is so new and I have not had the time to investigate; if anything changes or is factually incorrect I will update this post.

10 comments:

  1. Great work John!

    Pity about the minor bugs - the ordering one in particular. Slightly surprising they didn't just use the standard planning load flag rather than attempting to reorder in the sql.

    Most exciting option is the ability to put basic sql logic into the metadata loads - saves needing to do it against a text flat file!

    Cheers,
    Pete

  2. Nice dive into the new feature. I really think Oracle should leave this in our hands before it is released :-)
    Great work

  3. Hi Francisco,


    I have the same problem. When I execute the integration the order of the members is altered and the problem with the shared members is produced, is there any possible solution to this?

    Thanks,

    Solange

  4. Hi John,

    Awesome post, what happens if the members to be deleted exist in application artefacts ?

    Thanks

  5. Thanks John.

    Do you know if you can use this functionality to load metadata to PCMCS?

    Thanks!

  6. I don't believe that is currently possible.

  7. I guess you could use this to load metadata in generation / level "slices", the only way I can think of to get around the ordering issue.

    Nice post John, as ever really useful!

  8. GOOD pOST, YOU Are the real hero of the HYPERION

  9. Any way we can add members in Attribute dimension using this method?

  10. It is not currently possible to load attribute dimensions, apparently planned for a future release.

