Sunday 30 November 2008

ODI Series – Loading data into essbase

Today I am going to look at loading data into essbase using ODI. If you have followed previous blogs, such as the one on loading metadata, then it is pretty much along the same lines except it uses a different knowledge module (IKM SQL to Hyperion Essbase (DATA)).

I am also going to go into more detail on how the API loads data into essbase and what to be aware of.

To start with you need your source, and this time I am going to use a pre-populated database table; the process is nearly identical if you are going to use a flat file as a source, you just need a different source DataStore.



As always the next step is to create a reversed DataStore of the db table.



Once you have selected the table and then clicked reverse it should create the new DataStore.



The target is going to be the essbase database Sample.Basic; the DataStore was created in a previous blog.

The interface can now be created.



In the diagram tab, drag the SQL table DataStore to the source and the Essbase data DataStore to the target; as my source column names are the same as the target’s, everything is automapped.



In the flow section make sure you have the IKM set to “IKM SQL to Hyperion Essbase (DATA)”.

Understanding some of the options is important when using this IKM.

CLEAR_DATABASE :- This will execute one of the following calculation script commands for block storage cubes (CLEARBLOCK ALL, CLEARBLOCK UPPER, CLEARBLOCK NONINPUT); for ASO cubes you can only select ALL and it will clear all data.

MAXIMUM_ERRORS_ALLOWED :- If this is set to 0 the load will ignore any errors, such as a member not found, and keep trying to load.

COMMIT_INTERVAL :- This is the number of records of data that are sent to essbase in each chunk. Setting it to a large value can cause performance issues if you have any records that get rejected; I will go into this in more detail shortly.

RULES_FILE :- You don’t have to use a rules file with this IKM, but an important thing to note is that without a rules file the data will always be sent as “overwrite”; you will need to use a rules file if you want to add to existing data.



Executing the interface will then load the data into essbase; any data which couldn’t be loaded will be logged in the error log file.

This is all fine, but recently there was a post about the performance of using ODI to load data into essbase. As I always want to know how it all works, and I am never satisfied until I do, I went about looking behind the scenes of the data loading.

When an interface is run it calls upon a specific knowledge module depending on the integration; the Hyperion KMs then call on Java classes to do all the work, and these classes are contained in the jar files in the oracledi\drivers directory.

The classes also rely on the essbase Java API. When a data load is run a chunk of the source data is grabbed; the amount of data in each chunk depends on the value you have set in the COMMIT_INTERVAL option, the default being 1000 records.

What happens now is the data is streamed in using the API until all the records have been processed and the data is then committed to essbase. This process is repeated until all the chunks of source data have been processed.

The methods used in the Java API are :-

beginDataLoad - which starts the update to the essbase database.

If you look in your essbase application log file you can see when this command has been called.

Received Command [StreamDataload] from user [hypadmin]

sendString - which sends a string of data to the essbase database.

endDataload - commits all the data that has been sent from each sendString method.
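
Pieced together, the calling sequence looks something like this; a minimal sketch in Java using a hypothetical wrapper interface, since the exact method signatures in the essbase Java API vary by version, so treat these as placeholders rather than the real API.

// Hypothetical wrapper modelling the three API calls described above
interface EssbaseStreamLoader {
    void beginDataLoad();         // app log: Received Command [StreamDataload]
    void sendString(String rec);  // streams one record of data to the cube
    void endDataLoad();           // commits everything sent since beginDataLoad
}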

This process works fine unless there are some invalid records, e.g. records with members that don’t exist in the dimension they are being loaded against.

The data will not be rejected until the endDataload method is called, so say you have the commit interval set to 1000: all 1000 records of data will still be sent to essbase, and the whole chunk will be rejected if any of it is invalid.

I updated my source data to send a record of “Florida1” instead of “Florida”; this is what you will see in the essbase app log.

[Sun Jan 11 13:17:18 2025]Local/SAMPLE/Basic/hypadmin/Error(1003014)
Unknown Member [Florida1] in Data Load, [999] Records Completed

[Sun Jan 11 13:17:18 2025]Local/SAMPLE/Basic/hypadmin/Error(1003050)
Data Load Transaction Aborted With Error [1003050]

To compensate for the error in the data, the code then tries to load each record one by one from the start of the chunk until it has processed all the records in the chunk.

So if you have the commit interval as 1000, it will try to load each of those 1000 records one by one until it has processed all of them, and then it goes back into stream loading the next chunk; if it hits another error it will go into single record update again, as the sketch below shows.
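
To put the whole flow into code, this is roughly what happens (my reconstruction of the behaviour using the hypothetical EssbaseStreamLoader interface from the earlier sketch, not the actual KM source):

// Reconstruction of the load loop: stream a chunk, and if the commit fails
// fall back to loading the same chunk record by record.
void loadChunk(EssbaseStreamLoader loader, java.util.List<String> chunk) {
    try {
        loader.beginDataLoad();
        for (String record : chunk) {
            loader.sendString(record);      // stream each record in the chunk
        }
        loader.endDataLoad();               // whole chunk rejected if this fails
    } catch (Exception chunkFailed) {
        // interface log: "Sending data record by record to essbase"
        for (String record : chunk) {
            try {
                loader.beginDataLoad();
                loader.sendString(record);
                loader.endDataLoad();       // commit a single record
            } catch (Exception recordFailed) {
                // invalid record - this is what ends up in the error log file
            }
        }
    }
}

You can see from this why a large COMMIT_INTERVAL hurts so much when there are rejects: one bad record turns a single streamed chunk into thousands of individual begin/send/commit round trips.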

If you look at the log the interface produces you will see it enter single record loading.

2025-11-01 14:44:51,375 DEBUG [DwgCmdExecutionThread]: Error occured in sending record chunk...Cannot end dataload. Analytic Server Error(1003014): Unknown Member [Florida1] in Data Load, [13] Records Completed

2025-11-01 14:44:51,375 DEBUG [DwgCmdExecutionThread]: Sending data record by record to essbase

This adds an extremely large overhead to the processing time and is a really inefficient way of processing data; hopefully one day there will be some updates to the Java code that take these issues into account. Until then it is something to watch for, and you will have to play around with the commit interval to find the fastest processing time, or try to make sure your data is clean.

Hopefully that gives you an insight into what is happening when loading data, and you should also be able to create your own essbase data load interfaces.

Friday 21 November 2008

ODI Series - Planning and multiple UDAs

I thought I would post a quick blog about loading multiple UDAs into planning with ODI. If you have used HAL in the past then you probably already know this, but I have seen the question asked a few times; I am also going to demonstrate how to load UDAs from a multi-column source.



In the example above I have two UDAs that I can use. Say I want to apply them both to one member using ODI; the answer is very simple, in your source all you need to do is separate each UDA with a comma.



The thing to remember if you are using a csv flat file as the source is that the UDA details will be enclosed in quotations, as they contain a comma.
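
For example, a couple of hypothetical rows (the column names are just for illustration):

Account,Parent,UDA
AccMember1,TotalAccounts,"ODI_UDA1,ODI_UDA2"
AccMember2,TotalAccounts,ODI_UDA1

The first row needs the quotes because the UDA value contains a comma; the second row, with a single UDA, does not.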



This means that in your flat file source DataStore you will need to add the text delimiter to the configuration.



In your interface just map the UDA and you are done.



And in planning it should have applied both UDAs to the member properties.



So what if you have a source that has separate fields for each UDA, where the fields for each record will not always be populated? e.g.



Now, because you need to separate each UDA with a comma, you can’t apply the standard logic and just concatenate the fields; if you did you would end up with something like:

Record 1 – UDA = “ODI_UDA1,ODI_UDA2”
Record 2 – UDA = “ODI_UDA1,”
Record 3 – UDA = “,ODI_UDA2”

If you tried to load them into planning records 2 & 3 would fail.

To get around this in ODI you need to create an expression on the target.



Click the “Launch Expression Editor” icon.



The following functions are going to be used

CASEWHEN(<condition>, <truepart>, <falsepart>)

LENGTH(<string>)

CONCAT(<string1>, <string2>)

To try and put into words what we are trying to achieve :-

Case 1 - If the length of the value in UDA1 is greater than zero and the length of the value in UDA2 is greater than zero then concatenate UDA1 with a comma and UDA2.

e.g. ODI_UDA1, ODI_UDA2 will equal “ODI_UDA1,ODI_UDA2”

CASEWHEN(<condition>,

The condition equals LENGTH(ACCOUNT.UDACOL1)>0 AND LENGTH(ACCOUNT.UDACOL2)>0

So now we have CASEWHEN(LENGTH(ACCOUNT.UDACOL1)>0 AND LENGTH(ACCOUNT.UDACOL2)>0,

The <truepart> can be expressed by

CONCAT(CONCAT(ACCOUNT.UDACOL1,','), ACCOUNT.UDACOL2),

The <falsepart> is defined by looking at the next possible combination of UDAs.

Case 2 - If the length of the value in UDA1 is equal to zero and the length of the value in UDA2 is greater than zero then use the value in UDA2.

e.g. “<blank>,ODI_UDA2” will equal “ODI_UDA2”

We will need to use CASEWHEN( again, as there are more possible combinations.

<condition> is LENGTH(ACCOUNT.UDACOL1)=0 AND LENGTH(ACCOUNT.UDACOL2)!=0,

<truepart> equals ACCOUNT.UDACOL2

Once again the <falsepart> is defined by looking at the next possible combination of UDAs

Case 3 - If the length of value in UDA1 is greater than zero and the length of the value in UDA2 is zero then use the value in UDA1.

e.g. “ODI_UDA1,” will equal “ODI_UDA1”

Final time to use CASEWHEN(

<condition> is LENGTH(ACCOUNT.UDACOL2)=0 AND LENGTH(ACCOUNT.UDACOL1)!=0

<truepart> equals ACCOUNT.UDACOL1

<falsepart> will be defined by

Case 4 - Don’t use any value, which can easily be represented by ‘’

If we put it all together this produces
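
Something along these lines, reconstructed from the three cases above using the same column names:

CASEWHEN(LENGTH(ACCOUNT.UDACOL1)>0 AND LENGTH(ACCOUNT.UDACOL2)>0,
  CONCAT(CONCAT(ACCOUNT.UDACOL1,','),ACCOUNT.UDACOL2),
  CASEWHEN(LENGTH(ACCOUNT.UDACOL1)=0 AND LENGTH(ACCOUNT.UDACOL2)!=0,
    ACCOUNT.UDACOL2,
    CASEWHEN(LENGTH(ACCOUNT.UDACOL2)=0 AND LENGTH(ACCOUNT.UDACOL1)!=0,
      ACCOUNT.UDACOL1,
      '')))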



And that’s it. This works quite well in situations where you only have a few UDAs to play with, though if you have quite a lot I would recommend trying to find an alternative solution.

Next time I will be looking at loading data into essbase using ODI.

Sunday 16 November 2008

Applying EPM maintenance release 11.1.1.1 to 11.1.1.0

Well, it is here: the next release of EPM is available from edelivery. I have been wondering why no patches have been released on metalink3 to combat all the bugs in version 11; obviously Oracle decided to wait and release all the fixes together.

Looking through the readmes it looks like they have resolved some of the major issues, especially with the configuration. I am not going to dwell on the fixes as there are so many, and I am sure you will enjoy sifting through them all. I just wish Oracle would put together an easier way of going through all the different releases and patches to find out what gets fixed when and why; maybe they could try putting it in a database, or is that what metalink3 is meant to be?

It looks like the documentation has been updated at :- http://download.oracle.com/docs/cd/E12825_01/index.htm

From the documentation it says you can move directly from 11.1.1.0 to 11.1.1.1 by using the “Apply maintenance release” option, so no time like the present to give it a go.

First of all, I will have to download all the installation files again, which may take a while.

Once all the files are downloaded extract them into the same folder and you should have all the product components inside the assemblies folder.



Make sure all the EPM windows services are not running or you will no doubt encounter problems, then double click InstallTool.cmd.



The Installer should give you the option to “Apply the maintenance release”



I chose to “Choose components by tier” and kept everything selected, as I want to update every product I already have installed.



The install types will all be set to Maintenance, and the next screen will start the maintenance upgrade; this is a good time to go and do something far more interesting for a while than watch a % complete screen.



On your return you should hopefully be greeted with a summary screen with everything completed successfully.

Ok, on to the configuration. The components that need reconfiguring should be highlighted once the configurator is running; this will mainly be re-deploying the web application servers.



From the documentation

“If you are applying the maintenance release, you do not need to configure the database, and
registration with Shared Services is not performed. (Essbase Studio is an exception. For Essbase
Studio, configure the database if you are applying the maintenance release, and select “Connect
to a previously configured database.”)"

You will need to manually tick the “Configure Database” for Studio as it will not be selected.



The web server components should all be populated.



Once again the workspace components should all be pre-populated, picking up the details from your previous install.



After that the configurator will redeploy the application servers and, if selected, configure the essbase studio database.

Once completed it will annoyingly start up many of the windows services and set the startup type to automatic, even if you previously had them set to manual.

After a reboot I had a quick scout around to see if everything was working as expected, all looked pretty good.

If you have planning applications you will need to log into each application and upgrade it to 11.1.1.1



Well, that went really smoothly, and Oracle are definitely moving in the right direction for applying maintenance releases. I know people have had issues installing and configuring 11.1.1.0, but maybe they will have more success with this release.

I just hope it is going to be like this when Oracle start releasing patches for EPM and we don’t have to go through all the hassle we had with previous versions.

Sunday 9 November 2008

ODI Series - Loading SQL Metadata into essbase and its problems

I was going to move on to essbase data loading with ODI but a couple of questions cropped up on the forums this week and I wanted to cover them off first.

The question was around loading essbase metadata from a database repository and some of the rule separator options not working.

Seeing as I went through loading metadata from a file source in the last blog, I am just going to update the interface I used and link it to a SQL DataStore.

First of all I had to create a database table to mirror the flat file structure and populate it with data.



Next I reversed the database table in ODI.



And then updated the interface to use the SQL DataStore instead of the flat file DataStore.



Now, I used the same rules file, where the delimiter was set to comma (,), and in the interface the RULE_SEPARATOR was set to (,).



Running this interface causes no issues, but the question that was brought up on the forum was that when you create an essbase rules file for a SQL load, the delimiter is set to tab.



In the “IKM SQL to Hyperion Essbase (METADATA)” options for the RULE_SEPARATOR it says you can use Comma, Tab, Space and custom characters.



Ok, let’s change the separator to Tab in the IKM and the essbase rules file.



Run the interface and we would expect no problems, but the interface fails; take a look in the outputted log file.



And if you change it to Space and run the interface, once again it fails.
Running it with any other custom character is fine; the problem only arises with Tab and Space.
So it doesn’t work with them, end of story? Well, I don’t like leaving things that way and want to know the reason why.

I should have just left it there, because it took a fair bit of digging into the Java code which ODI runs when the interface is executed.

In part of the Java code the values that have been entered for the RULE_SEPARATOR are converted.



So if you enter comma it is converted to “,”, tab is converted to “\t” and space is converted to “ ”.

All seems fine there but stepping through the code later on there is a validation check on the value in the ruleSeparator variable.



What this means is that the variable is trimmed to remove any leading and trailing whitespace, and the code checks whether the trimmed value equals “”; if you have chosen tab or space then the trim removes the whole value as well, so it equals “” and an error is thrown.

This is obviously a bug then but can it be fixed? Yes, and very easily.

All I needed to do was remove the trim() from the code so it would only fail if the rule separator was left blank.
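
To illustrate with a sketch (my reconstruction of the logic, not the exact source of ODIEssbaseMetaWriter):

// The option value is first converted: comma -> ",", tab -> "\t", space -> " "
// The validation then does something like this:
if (ruleSeparator == null || ruleSeparator.trim().equals(""))
    throw new Exception("Invalid rule separator");   // "\t" and " " trim to "" and fail

// Removing the trim() means only a genuinely empty value is rejected:
if (ruleSeparator == null || ruleSeparator.equals(""))
    throw new Exception("Invalid rule separator");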



I compiled the class (ODIEssbaseMetaWriter.class) and updated the jar file (odihapp_essbase.jar).
Now, running the interface, I can set the separator to Tab or Space without any issues.

I can also answer the question of why you have to use an essbase load rule: if you are using a database table as a source you would expect the Java code to handle the integration of the metadata, but unfortunately it is not very sophisticated.

What happens when you run the interface is that it creates a temporary file which is then used with the load rule to load in the metadata, so it is converting the database records to a text file; if you have a flat file source it will also create a temporary file in the correct format for the load rule.



This is not the most efficient way of getting the data into essbase, and it is also a warning if you have large amounts of metadata to load: for one, it is going to create a large temporary text file in your temp directory, and it is not going to be the quickest integration.

I also noticed something else. I am using version 11.1.1 essbase with the ODI essbase drivers for 9.3.1; I have not seen any 11.1.1 drivers for ODI, so I have no choice. In the IKM there is an option RESTRUCTURE_DATABASE, but when I look at the log after running the interface I see :-



The warning talks about versions prior to 9.3, but I am using 11.1; another hunt around in the Java code explains why this occurs.



Basically, the check only looks at the release number after the point, so any version above 9 with a point release less than 3 will always trigger the warning. Not to worry though, it is only a warning and doesn’t stop the restructuring; it can also be fixed easily in the code.
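
Something along these lines (again a reconstruction with hypothetical variable names, not the actual code):

// The check splits the version string and only compares the point release,
// so 11.1.1 trips the "prior to 9.3" warning just like 9.1 would.
String[] ver = essbaseVersion.split("\\.");   // e.g. {"11","1","1"}
if (Integer.parseInt(ver[1]) < 3) {           // never looks at ver[0], the major version
    log.warn("RESTRUCTURE_DATABASE is not supported for versions prior to 9.3");
}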

One final thing I spotted, and I am not clear why this is happening yet: in my source file/database there is a field for Alias.



If I use EAS and load the metadata in, then all the aliases are loaded into the outline; if I use ODI with the same source and same load rule, the alias for the first member is not loaded, so in this case member 500 never gets its alias loaded. I am not sure if anybody else has encountered this?

Well I am going to leave it there for today as once again it has taken a lot longer than expected to produce.

Sunday 2 November 2008

ODI Series - Loading Essbase Metadata

On one of my previous blogs I went through the steps to extract metadata from essbase, so today I will cover the opposite and load metadata into essbase. Essentially it is the same sort of process, except we are swapping over the source and target and changing the Knowledge Modules we need to use; though I always think you should add a little more information into each blog, so I am going to bring attributes into the equation as well.

If you are not familiar with ODI and have not read my blog on extracting essbase metadata, then I advise you to have a read through it, as I am going to assume you are. I am also going to use some of the same models and configurations I have used in the past.

I am going to be using the Data Server and Physical schema I have set up previously connecting to the Sample.Basic database.



The basic database has five attribute dimensions.



The first thing to do is to create a model to hold the reversed essbase database; if you already have a model you can use the same one and run the reverse again. The important difference this time is that we are going to set an additional option for the RKM.



If you set EXTRACT_ATTRIBUTE_MEMBERS to “Yes”, then when the reverse is initiated it creates extra columns in the dimensions which are associated with attributes, e.g. Product will have columns for Caffeinated, Intro Date, Ounces and Pkg Type.



So now we have our reversed model we need some source metadata to try and load into the essbase outline.



I created a CSV file that will load a new branch into the hierarchy and set an attribute for each new level 0 member; the file was saved into the location where the file technology is pointing (in my case E:\FileStore).
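
The file looked something like this (hypothetical member names and column headings, just to show the shape; the columns match the reversed essbase DataStore, including the attribute columns):

ParentName,Member,Alias,Caffeinated,Ounces,Pkg Type,Intro Date
Product,600,Energy Drinks,,,,
600,600-10,Cola Energy,True,8,Can,03-25-1996
600,600-20,Diet Energy,False,8,Can,03-25-1996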

A DataStore was created using the flat file model; the file format was set to Delimited, heading set to 1 and field separator set to comma.



The DataStore was then reversed to create the column information for the flat file.



So that’s the source and target done, time to create the interface.

The interface was named and the staging area set to “SUNOPSIS MEMORY ENGINE”.
In the diagram area the flat file DataStore was dragged onto the source and the essbase DataStore dragged onto the target; most of the columns were automapped, except for the columns with spaces in them such as “Pkg Type” and “Intro Date”, so these were manually set up.



In the flow area, the LKM was set to “LKM File to SQL” as we are loading from a flat file.

The IKM was set to “IKM SQL to Hyperion Essbase (MetaData)”



There are a few options to be aware of which have not appeared in previous ODI blogs.

RESTRUCTURE_DATABASE has four settings, which are very much the same options you have when you make changes to an outline in EAS; most of the time I would think this is likely to stay as the default: KEEP_ALL_DATA.

PRE_LOAD_MAXL_SCRIPT & POST_LOAD_MAXL_SCRIPT give you the ability to run MaxL scripts before and after executing the interface.
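
As an example, a one-statement MaxL script you could point POST_LOAD_MAXL_SCRIPT at (a hypothetical script, just to show the idea):

/* run the default calc once the metadata has been loaded */
execute calculation default on 'Sample'.'Basic';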

At this point you may think that it is alright to execute the interface; unfortunately, not so fast. The current IKM is not aware of many of the options for loading metadata, such as the dimension build properties “Allow Moves” and “Allow property changes”, to name a few; this means you have to create a load rule, the same as you would if you were loading metadata into essbase directly.

I can understand some people saying “well, what’s the point of using ODI if I need to use a load rule”; well, ODI is all about integrations and not just doing one thing like a metadata load. Saying that, though, I do hope that future IKMs for essbase start to add in more options and give the choice of using a load rule or not.



A load rule was created and the name added to the options of the interface (if the load rule exists in the essbase app directory then only the name is required; if the rule file is elsewhere you need to enter the full path details to it).



No problems executing the interface.



In the essbase outline all the new members were added and the attribute information correctly associated.

Well, that’s metadata loading and extracting now covered off, so next I will be moving on to data; until then, enjoy.