Sunday, 28 February 2010

ODI Series - Loading HFM metadata

In the previous blog I went through getting up and running with the HFM ODI adaptor; today I am going to start looking at the different load methods available through the four remaining Knowledge Modules.

Through the use of the Knowledge Modules you will be able to do the following:

  • Load metadata and data
  • Extract data
  • Consolidate data
  • Enumerate members of member lists
The Knowledge Modules basically replicate the functionality that is available in the HFM desktop client; if you have ever used the client then you will be comfortable using the ODI adaptors.



The objective today is to load metadata from a flat file into the Account dimension of an HFM application. If you have loaded metadata using any of the other Hyperion adaptors then you will see no difference using the HFM adaptor and the process is straightforward; in fact, the example I am going to go through couldn’t get much simpler.

You will need to make sure you have imported the KM - IKM SQL to Hyperion Financial Management Dimension.



With the KM you are able to load metadata into the Account, Currency, Custom 1-4, Entity and Scenario dimensions.


As I will be loading from a flat file you will also need to have imported the KM – LKM File to SQL.

Currently I have the following hierarchy in my sample HFM application (this hierarchy was loaded from the Simple Demo files that are available in the HFM installation directory).




In my example I want to load the following expense members as children of Account.



If you have a look at the Account datastore in the Designer you will see all the available properties that can be set when loading in metadata.



I am not going to go through what each of the columns means, as this is all available in the HFM documentation; if you have a read here you will find the details for each of the properties of the Account dimension, and the documentation also contains information on the other dimensions.

Now I created a file template with exactly the same column headers as the datastore columns. I find it easier to match the column headers because, with auto mapping enabled in ODI, it will set all your target mappings for you without you having to set them up manually. You don’t have to create a template with all the columns if you don’t want to; most of the time you may never use some of the columns, so it is up to you whether you include them in your file.



The file was populated with all the metadata information.
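To give a feel for the shape of the file, here is a cut-down illustration of the kind of content it holds; the member names are made up and I am only showing a handful of the columns, so treat this as a sketch and make the header row match whatever columns exist in your reversed Account datastore:

  Member,Description,AccountType,IsCalculated,DefaultParent
  AdminExpenses,Administration Expenses,EXPENSE,N,Account
  MarketingExpenses,Marketing Expenses,EXPENSE,N,Account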



Once the file is completed it is on to defining the file model and datastore in the Designer.




Create a new model of File technology and use a logical schema that points to the correct location of where the file will be stored. I am using a logical schema set up in previous blogs, which just points to a directory where most of the flat files I use are stored.




Next a new datastore is inserted under the model; the file is manually selected using the browse functionality, as the file is sitting on the same machine as where ODI is installed.



If you looked at the image of the file I am using you will notice it is a CSV with a header row, so I set the file format to Delimited, the Heading (Number of Lines) to 1 and the field separator to a comma.
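So to spell out the settings on the Files tab of the datastore:

  File Format: Delimited
  Heading (Number of Lines): 1
  Field Separator: , (comma)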




Reversing the file returns and stores all the columns in the datastore against the header columns in the CSV file. After reversing you may need to change the length of the columns; I have just left them set to the defaults for this example.

I usually set flat file column types to String and then, if I need to convert them to a different type, I do it in the mappings of an interface; I do this because I find I have more success and encounter fewer issues when loading from files.
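As a quick sketch of what I mean, a numeric property such as NumDecimalPlaces that comes through from the file as a String can be converted in the target mapping executed on the staging area. The expression below is just an example; the source alias SRC_ACCOUNT is made up and the exact function depends on the staging technology you are using (the SUNOPSIS_MEMORY_ENGINE accepts standard SQL):

  CAST(SRC_ACCOUNT.NumDecimalPlaces AS NUMERIC)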

Right, on to creating the interface to load the metadata from the file into the HFM application.



I am just using the SUNOPSIS_MEMORY_ENGINE as the staging area; this is because the volume of data from the flat file is very small, and I would think that when loading metadata into an HFM application the volume will usually be pretty low.

You will always need to set a staging area, as the target (HFM) cannot be used to transform data because it has no SQL capability.



Drag the file source datastore onto the source area of the diagram.



Drag the HFM Account datastore onto the target of the diagram; as the columns on my source match the target, all the columns are auto-mapped.

The warnings are just down to column length differences between source and target; if your source data is consistent then you don’t need to worry about them, otherwise you could substring the data in the target mapping or change the lengths in the source file datastore.
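For instance, if the Description column in the file could be longer than the target allows, you could trim it in the target mapping with something like the line below; the 40 character limit and the SRC_ACCOUNT alias are purely for illustration, and the exact function again depends on your staging technology:

  SUBSTRING(SRC_ACCOUNT.Description, 1, 40)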



The default settings are kept for the LKM.



The IKM only has a few options:

CLEAR_ALL_METADATA_BEFORE_LOAD - All dimension members and corresponding data, journals, and intercompany transactions in the application database are deleted.

REPLACE_MODE -

No = Merge - If a dimension member exists in the load file and in the application database, then the member in the database is replaced with the member from the load file. If the database has other dimension members that are not referenced in the load file, the members in the database are unchanged.

Yes = Replace - All dimension members in the application database are deleted and the members from the load file are put into the database.


In this example I am just doing a simple load of a few members, so all the default settings are kept. I enabled the log and set the full path and filename.
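So for this example the options on the flow tab ended up along these lines; the log options are LOG_ENABLED and LOG_FILE_NAME on my version of the KM, and the log path is just somewhere I chose, point it wherever suits you:

  CLEAR_ALL_METADATA_BEFORE_LOAD: No
  REPLACE_MODE: No
  LOG_ENABLED: Yes
  LOG_FILE_NAME: C:\ODI\logs\hfm_account_load.log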



After executing the interface, check the Operator to make sure it was successful.



In the log file you will notice another log is generated; this log is the equivalent of what the HFM client produces, a validation of the source data.



If the interface fails then this additional log is the place you should check to find out why the validation failed. I have no idea how the numeric code in the log filename relates to anything; it doesn’t look like a timestamp, so maybe it is just there to keep the filename unique.



In the HFM application you should now be able to select the new members that have been loaded. So there we have it, a really simple example of loading metadata into HFM.

As I said in the first HFM blog I am going to keep it to the basics just to get people up and running with the different functionality available with the HFM adaptor.

Sunday, 14 February 2010

ODI Series - Will the real ODI-HFM blog please stand up.

I have always been meaning to write up a few blogs on using ODI with the HFM adaptors but I have never got round to it. I have seen more and more questions coming up lately about problems configuring and using the adaptor, so I thought it would be a good time to go through the basics of what can be done.

Yes, I have already seen a one-off blog about ODI/HFM, but to be honest I was not impressed with the quality and depth of it, so I thought it was time to put my version together; I will let you decide which one to use.

I would first like to make it clear I am not an HFM consultant, so apologies if I use any incorrect terminology or don’t go about some of the methods in the correct way; no doubt I will be contacted as usual if I make any slip-ups.



I am not setting out to go into too much depth, just enough to get people up and running and to gain a little knowledge of using the adaptors; my end game really is to be able to move metadata and data between Essbase, Planning and HFM, but we will see how it pans out.



In this first blog I will go through installing the HFM client, configuration in ODI and some pitfalls that may be encountered, hopefully finishing up with a successful reverse of an FM application.



I have written a number of articles in the past using the Hyperion adaptors and I am not going to replicate that detail here, so if you don’t understand something you may have to read back through my earlier blogs.



I will be using a two-machine architecture, one machine hosting HFM and EPM Foundation (Workspace and Shared Services). I have created an FM application named SAMPLE that is based on the “Simple Demo” information that can be found in “Hyperion\products\FinancialManagement\Sample Apps\Simple Demo\” of the HFM installation.

The second machine will be hosting ODI; both machines are Windows-based, as that is a prerequisite for HFM and also for the HFM client that has to be installed on the ODI machine.

The EPM version installed is 11.1.1.3.

OK, let’s start with the installation of the HFM client.
The files you will need (based on 11.1.1.3) are:

Oracle Hyperion Enterprise Performance Management System Installer, Fusion Edition Release 11.1.1.3.0 Foundation
Hyperion Enterprise Performance Management System Foundation Services Release 11.1.1.3.0 Part 1
Oracle Hyperion Financial Management, Fusion Edition Release 11.1.1.3.0


Extract all the files to the same location and execute “InstallTool.cmd” to launch the installer.



Select “Choose components individually”



Uncheck all, expand Financial Management and select “Financial Management Client”



It should not take too long to install, and after installation there is no need to run the configurator.



After installing the client I wanted to make sure I could connect to my HFM instance.

I opened up the FM configuration utility from the Start menu > Oracle EPM System > Financial Management > Client Configuration.



I added the server name of my HFM instance (HFMV11), enabled DCOM and that was all that was required to be done in the utility.

Next I opened up the FM desktop client to make a connection to my SAMPLE FM application.



When logging in, make sure the Domain box is cleared and don’t enter the HFM server name, as then it will not use Shared Services security and when you try to open the application it will generate a funky error.



I connected to the cluster and the SAMPLE application appeared in the list of applications; I clicked Open and no errors were received, which is enough for me to believe the client is working successfully, so on to ODI.

Open the Topology Manager

This is where we will define the connection to the HFM server, define the application to use and associate it with a logical schema and context.



Enter a name for the connection, the cluster name and an account that has been provisioned with an FM admin role in Shared Services.
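As a rough guide the data server definition ends up looking something like this; the connection name is anything you like and the rest comes from your own environment:

  Name: HFM_SAMPLE
  Cluster (Data Server): HFMV11
  User / Password: an account provisioned with an FM admin role in Shared Services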

Do not click Test, as this is only for JDBC technologies; HFM uses an API, so clicking it will just generate a driver error. I have lost count of the number of times people have clicked the Test button and thought there was a problem.




Manually enter the HFM application name into the Catalog and Work Catalog boxes; make sure you enter exactly the same name in both boxes, as I have seen problems with integrations where this has not been done.
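So with my SAMPLE application the physical schema looks like:

  Catalog: SAMPLE
  Work Catalog: SAMPLE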



Select the Context you want to use and associate a logical schema with it; you will need to manually type the name of the logical schema you want to use.

If you don’t understand Logical Schemas and Contexts then I suggest you have a quick read here.

That is all you need to do in the Topology manager for now so on to the Designer.

I have created a new project for the purpose of this HFM blog.



Once the project has been created, the HFM knowledge modules will need to be imported. There are currently five HFM knowledge modules:


IKM SQL to Hyperion Financial Management Data
IKM SQL to Hyperion Financial Management Dimension
LKM Hyperion Financial Management Data to SQL
LKM Hyperion Financial Management Members to SQL
RKM Hyperion Financial Management

All the XML-based KMs are situated in OraHome_1\oracledi\impexp.

I am only going to be using the RKM today to reverse engineer the FM application into an ODI Model.

Right click the project and select Import Knowledge Modules.
Multi-select the KMs I have highlighted above and import.



In the Models tab create a new model; the model is basically going to be an ODI interpretation of the structure of the HFM application, broken down into manageable datastores.



Give the model a name, select “Hyperion Financial Management” as the technology and select the Logical Schema you have just defined; this will create the link to the physical connection information for the FM application.



In the Reverse tab select Customized, then select the context and logical agent you want to use to handle the reversing; you can choose the Local agent if you have not created another agent, and if you want to understand how to create agents look back at one of my earlier blogs.

You will need to select the RKM that was imported earlier. I have left the other options at their defaults for now and will go into them in more detail at a later stage.

Click Reverse and go into the Operator. This is where you may encounter some issues.



You may receive an error “Error occurred while loading driver”; this is due to the agent not being able to see the ODI HFM driver (HFMDriver.dll / HFMDriver64.dll) in the \oracledi\drivers directory.

The sort of agent you are using determines what you need to do to resolve this issue.


If you are using the “Local Agent” you will need to add the driver location to the Windows Path environment variable.
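For example, the entry appended to the Path variable would look something like the line below; the C:\ root is an assumption on my part, so use whatever your ODI home actually is:

  C:\OraHome_1\oracledi\drivers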



Once you have added the location, make sure you restart all the ODI components so they pick up the driver, or you will continue to receive the error message.

If you are using an agent created as a Windows service you will have to update a configuration file; when the Windows service agent starts up it retrieves some of its parameter information from OraHome_1\oracledi\tools\wrapper\conf\snpsagent.conf



Edit the file and add the following line below “wrapper.java.library.path.1=…”

wrapper.java.library.path.1=./drivers



This means the DLLs in the drivers directory will be picked up.

Restart the agent once the changes have been made.



Reverse again.



If you are using EPM version 11.1.1.3 or above like I am then you will encounter the error:

“Error occurred in driver while connecting to Financial Management application [xxxxx] on [xxxxx] using user-name [xxxxx]...”

Don’t panic it is a known bug.


8725017: IKM SQL to HFM - Is not compatible with Oracle's Hyperion Financial Management 11.1.1.3.0. NOTE: Oracle’s Hyperion Financial Management – Fusion Edition Release 11.1.1.3 is the only release supported for this patch.

There are currently a couple of solutions to fix the problem; you could download and install the following patch from “My Oracle Support”:


Patch 8785892 - ORACLE DATA INTEGRATOR 10.1.3.5.2_01 ONE-OFF PATCH

Or you could install one of the following later patches, as they also address another issue with using the HFM adaptor with ODI (9201073 - cannot load metadata that includes shared members into HFM 11.1.1.3):

Patch 9327111: ORACLE DATA INTEGRATOR 10.1.3.5.6 CUMULATIVE PATCH
Patch 9285774: ORACLE DATA INTEGRATOR 10.1.3.5.5_01 ONE-OFF PATCH

I opted for patch 9285774 and it is really simple to install: stop all ODI components and the agent, extract the three files in the patch to \oracledi\drivers, overwriting the existing files, and start up the ODI components again.



Reverse once again and this time it should be successful.



Check the model and make sure all the Datastores have been created.



That concludes the first part and should get you up and running with the adaptor; in the next blog I will start looking in more detail at using the available FM KMs.