Sunday, 21 December 2008

There are a couple of updates to mention regarding ODI. Firstly, a new patch is available from Metalink3 - 10.1.3.4.7_01; .7 was released and then quickly followed by .7_01 because of an issue adding projects.
The patch addresses two issues with using essbase.
7525003: Moving large amount of data from Essbase to Oracle generates timeout error.
7520405: Issue extracting large or long queries against Essbase KM.
It is always worth moving to the latest patch release, and installing the patches is simple; it just involves copying over a new set of files.
The other update to mention is that ODI 10.1.3.5 has been released. Looking through the release notes there are a few new additions, including a new lightweight designer which allows users to modify the mappings of an interface through a web browser. It is something I am interested in looking at, even though there was a lightweight designer in the previous release and I never got round to looking at it; anyway, I will probably cover it in a future blog.
Interestingly, the release notes point out that the Hyperion KMs (Hyperion Essbase, Hyperion Financial Management, and Hyperion Planning) are certified for Hyperion 11.1.1.
As I have been using ODI with version 11 and hit upon a few problems, this is good news and something I am going to test; it will be interesting to see if they have actually made any major updates to the Java behind the KMs. You can view the release notes here.
Well, as it has been released, I might as well go through upgrading my existing ODI installation and see if I hit upon any issues.
Before you upgrade it is important to back up the existing ODI installation directory; if you have a service running the agent, make sure you stop it before copying the directory. The master and work database repositories will also need backing up.
When you have backed up, extract the software and run \setup\Windows\setup.bat, choose Deinstall Products and remove ODI 10.1.3.4.0.
Once it has been removed just follow the installation screens; installing is the same as in previous versions, and I have already been through the 10.1.3.4 install in a previous blog.
You will now have to restore \oracledi\bin\odiparams.bat and anything with .xml or .layout file types from your backup; most of them may have been left in place after uninstalling.
Next, to upgrade the master repository, from the Start Menu select Programs > Oracle Data Integrator > Repository Management > Master Repository Upgrade; you should be able to choose your existing repository from the login dropdown.
To upgrade your work repository, open the topology manager, select the repositories tab, right click your work repository and select upgrade.
Your current projects will include the old Knowledge Modules; you can continue to use them without any issues or you can choose to import them again. I don't think you will need to replace any of the Hyperion KMs, as none of the descriptions or dates seem to have been updated, so they are probably exactly the same and only the Java drivers have been updated.
If you are using an agent start it up again and that should be the upgrade completed.
I just had a quick look to see if Oracle are correct in saying it is certified for 11.1.1. It is true they have tidied up some of the code, but I still notice there is going to be a problem if you try to use a calc script to extract data. In ODIEssbaseDataReader.class it still checks the version in the following way.
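Something along these lines - this is just a sketch to show the shape of the check, not the actual decompiled source, and the method and variable names are mine:

// Illustrative sketch only - not the decompiled ODIEssbaseDataReader source.
static void checkEssbaseVersion(String essbaseVersion) {
    String[] parts = essbaseVersion.split("\\.");   // e.g. "11.1.1" -> 11, 1, 1
    int major = Integer.parseInt(parts[0]);
    int minor = Integer.parseInt(parts[1]);
    // intended to block releases before 9.3, but the minor check also fires
    // for later major releases, so 11.1 (major 11, minor 1) trips it as well
    if (major > 9 && minor < 3) {
        throw new UnsupportedOperationException(
            "Extraction using calculation script is not supported for essbase versions prior to 9.3");
    }
}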
If the version is greater than 9 and the point release is less than 3 then the exception will be raised, so for 11.1 users it will be raised and the integration halted; not quite 11.1.1 certified then.
I am also assuming that the issues fixed in the .07 patch for the previous version of ODI are included in this new release; I am not sure if there is an easy way of finding out though...
Monday, 15 December 2008
ODI Series - Essbase to Planning
I have been meaning to write this blog for a while now, though I thought it was probably best to wait until I had covered the main areas for essbase/planning.
You may be in a situation where you have an essbase cube and you want to extract part of a dimension and load it into planning. You could use the same source as the essbase cube and use ODI to load it straight into planning, but there may be occasions when it is best to run the extract from essbase to planning in one go.
In this blog I am going to assume you have read previous blogs and are up to speed with extracting and loading; if you want a recap then have a look at (essbase extraction)
and (sql to planning)
The aim of this exercise is to extract part of an account structure in an essbase db and load it into planning.
When you extract metadata from essbase the output format is different from what planning uses, so you can’t just do a straight extract from essbase and push it into planning. There are a number of ways of achieving the goal and I will try to touch on a couple of them.
I am going to extract all descendants of Profit from the Sample.basic database; I updated some of the properties of the members just for this exercise.
In the planning sample application I added a new member called profit, the members extracted from essbase will be loaded as a child of profit.
Make sure you have a reversed essbase database and planning application, I am going to use the ones I have used in previous blogs.
Now I am going to run through the process in stages. In the first stage I am going to extract the metadata from essbase and load it into a temporary table, and I am going to get ODI to create and manage the temporary table.
First I created the interface
In the diagram I dragged the reversed measures dimension onto the source.
To create a temporary target table, you just have to select the target DataStore and enter a name for it.
Once it has been created, to add the columns and mappings to the target just right-click a column in the source and select “add to target”.
Once you have gone through all the source columns and added them you should end up with something like this.
I made a couple of additions to the target mappings. The first was for TwoPassCalc: when you extract from essbase, if the member is set as Two Pass Calc it will return a ‘T’, whereas planning will be expecting a 1, so to get around this I used the replace function to change any ‘T’ to a 1.
The same sort of situation arises with Variance Reporting; the essbase extraction returns ‘E’ if it is set to expense and planning is expecting ‘Expense’, so I used the replace function again.
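As a rough illustration, the two mapping expressions end up something like this (the ESSMETA alias is just a placeholder for whatever your reversed essbase DataStore is called):

REPLACE(ESSMETA.TwoPassCalc, 'T', '1')          -- 'T' from essbase becomes 1 for planning
REPLACE(ESSMETA.VarReporting, 'E', 'Expense')   -- 'E' from essbase becomes 'Expense'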
Which ended up with a target DataStore like :-
In the flow diagram the LKM was set as “LKM Hyperion Essbase METADATA to SQL”
As I am going to extract all the members below profit the options were set to:-
MEMBER_FILTER_CRITERIA = Descendants
MEMBER_FILTER_VALUE = Profit
On the target the IKM used was “IKM SQL Control Append”
The options that were set were :- FLOW_CONTROL – the default is yes but for this integration to work it has to be set to No.
DELETE_ALL – set to Yes so each time the interface is run all the records will be deleted.
CREATE_TARG_TABLE – set to Yes. It only needs to create the temporary target table the first time the interface is run, but it doesn’t matter keeping it set to Yes; it will just output a warning on future runs that the table already exists.
When you create an interface that contains a temporary target table the colour of the interface icon will change from blue to yellow.
Executing this interface will extract the metadata from essbase and load it into a virtual temporary table in ODI.
What we need to do next is load the information from the temporary table into planning. Some of the columns of data are still not in the correct format for planning - DataStorage, TimeBalance & TimeBalanceSkip all extract from essbase as a single letter and planning needs the full name. I could use a function to change the value into the correct format, for example:
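Something like this as a target mapping expression - an illustrative sketch only, where TMP_ACCOUNT stands for whatever you named the temporary table, and the letter codes and planning names other than the ‘N’ = ‘Never Share’ example mentioned below should be checked against your own extract:

CASE TMP_ACCOUNT.DataStorage
  WHEN 'N' THEN 'Never Share'
  WHEN 'O' THEN 'Label Only'
  ELSE TMP_ACCOUNT.DataStorage
END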
This would achieve what I am after, but it becomes messy and difficult to manage if you want to use it in other interfaces, so what I did was create mapping database tables to manage this.
One was created for the Data Storage, mapping the essbase extract value to the planning value.
A table was created for Time Balance mappings and one for Time Balance Skip mappings.
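As a rough sketch, the data storage mapping table might look like this (the table name is just what I would call it; the EssDataStorage and PlanDataStorage columns are the ones used in the join later):

CREATE TABLE DATASTORAGE_MAP (
  EssDataStorage  CHAR(1),        -- letter returned by the essbase extract
  PlanDataStorage VARCHAR(30)     -- value planning expects
);
INSERT INTO DATASTORAGE_MAP VALUES ('N', 'Never Share');
-- one row per data storage code, with similar tables for the
-- Time Balance and Time Balance Skip mappings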
I reversed each of these database tables in ODI so they can be used whenever required in interfaces.
The interface to manage moving the data from the temporary table to planning was created. I dragged the planning accounts DataStore onto the target and the yellow temporary table interface onto the source; any columns that could be were auto-mapped.
Next I mapped all the other columns that have a straight 1:1 mapping like ParentName, MemberName etc.
Ok, I have to use the mapping tables previously created to return the correct value for planning. First I dragged the Data Storage mapping table onto the source, then from the temporary table I highlighted DataStorage and dragged it onto the EssDataStorage column of the mapping table; this creates a join between the two tables on those columns.
Now that the join has been created, in the target I can use the mapped planning column “PlanDataStorage”.
So basically, if the essbase temporary extract table contains ‘N’ for a data storage record, it will map the ‘N’ to the data storage mapping table and return the value in the planning mapping column, which would be ‘Never Share’.
Next I dragged the Time Balance mapping table onto the source and created a join between the two tables and used the planning column for the target mapping.
As Time Balance doesn’t always have a value set, the temporary table can contain blanks; this would cause the join to ignore the blank values and not return those records. To get around this, in the join you set it to bring back all the rows from the temporary table even if there is no match.
This means that if the temporary table does contain blank values the record will still be returned.
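In SQL terms the join ends up behaving roughly like this - illustrative only, as ODI generates the real statement from the join you define, and the TIMEBALANCE_MAP table and column names here just follow the same pattern as the data storage one:

SELECT t.MemberName,
       m.PlanTimeBalance
FROM   TMP_ACCOUNT t
       LEFT OUTER JOIN TIMEBALANCE_MAP m
         ON t.TimeBalance = m.EssTimeBalance
-- rows with a blank TimeBalance still come back, just with no mapped value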
I repeated the same process for the final table “Time Balance Skip”
One final mapping I had to do was for the planning “Account Type”. If “Variance Reporting” is set to “Expense” then the account type will need to be “Expense”, otherwise planning will reject the record as it won’t validate, so I set the mapping to the same value as “VarReporting”, as it will only ever be “Expense” or blank.
In the flow section most of the settings will be standard, all LKMs will be “LKM SQL to SQL” and the IKM will be “IKM SQL to Hyperion Planning”
The only option I had to change was SORT_PARENT_CHILD to Yes; this is because the extract from essbase is not going to be in exactly the correct order for planning, so setting this option makes sure that everything is in the correct parent/child order. Please be aware, though, that if you are updating a lot of members this will slow down the process and you may have to look at alternatives.
To run through the process you need to execute the first interface, which creates and populates the temporary table, and then execute the interface that loads and maps the values in the temporary table into the planning dimension.
The members from essbase and their properties should end up being loaded into planning.
You can automate this process by using a package but I am going to leave that until the next time when I will go into more detail.
Sunday, 7 December 2008
ODI Series - Extracting data from essbase
Today I will be looking at extracting data from essbase and the different methods that can be used. At first I thought, great, the knowledge module mixed with the API will extract directly from essbase, but as with some of the other integration methods it is not as perfect as it could be.
There are three methods available to extract data from essbase: using either a Calc script (9.3 onwards), a Report script or an MDX query. If your essbase db is BSO then you will be able to use all of the methods; if it is ASO then you will only be able to use the report script or MDX query.
I will first tackle the Calc script method; this method is probably my least favourite due to the way it operates. You first have to create a calc script using the DATAEXPORT function that will extract to a text file.
The DATAEXPORT function writes a dense dimension as the columns so you have to decide how you want your data to look, as it will need to be matched against how you reverse the essbase database in ODI. I am using the Sample.Basic db and have decided to use the Scenario as the dense dimension for the columns.
It is important to use :- DataExportDimHeader On;
as ODI will consider the first 2 records to be header information. The extract file will also have to be written to a location which is accessible by ODI; this is not so bad if ODI is on the same server as essbase but becomes more of an issue if they are separate, which is one of the reasons I am not so keen on this method.
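A minimal calc script along these lines would do the job - this is only a sketch, and the member selection, export options and output path are examples you would adjust to match how you plan to reverse the cube:

/* illustrative DATAEXPORT calc script for Sample.Basic */
SET DATAEXPORTOPTIONS
{
 DataExportColHeader "Scenario";   /* Scenario as the column dimension */
 DataExportDimHeader ON;           /* writes the header records ODI expects */
 DataExportLevel "ALL";
};
FIX ("Actual","Budget","Forecast")
 DATAEXPORT "File" "," "E:\FileStore\dataext.txt";
ENDFIX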
Running the calc script produces the following output :-
First a new model needs to be inserted in the ODI designer. I am assuming you have set up the connection to essbase in the topology manager; if you have not then have a read of a previous blog.
In the reverse section, select the context and the logical agent, the KM to use is “RKM Hyperion Essbase..”
To be able to return the correct data columns the KM options have to be updated.
MULTIPLE_DATA_COLUMNS – Set to “Yes” as our exported data has multiple columns.
DATA_COLUMN_DIMENSION – Set to “Scenario” as this is the data dimension in the exported data file.
DATA_COLUMN_MEMBERS – Set to “Actual,Budget,Forecast” as these are the members in the exported data file.
Reversing the cube produces
You will notice the members Actual,Budget & Forecast have been generated in the data DataStore.
The next step is to create a DataStore that will be the target for the exported data; I am going to use a SQL server database table as the target as I couldn’t really see the point of running a calc script to export to a text file and then generating another text file from the export.
I just ran a selective reverse on an existing model to retrieve and store the database table as a DataStore
With the source and target DataStores created the next stage is to create the interface.
As usual the interface was named, the context chosen and the staging area set as Sunopsis Memory Engine.
In the diagram tab the Sample_BasicData DataStore was dragged to the source and the SampleDataExtract DataStore to the target; as all the columns have the same names everything was auto-mapped.
Within the flow area the LKM to use is “LKM Hyperion Essbase DATA to SQL”
The Options are important to get correct.
EXTRACTION_QUERY_TYPE :- There are three possible types: CalcScript, MDXQuery or ReportScript; you will need to manually enter the type if you are not using the default ReportScript.
EXTRACTION_QUERY_FILE :- This is the location of the file that will be run to extract the data from essbase, so there are three possible file types. For the CalcScript you can enter the fully qualified location of the file, or if it resides in the essbase app directory you can just type its name without the file type suffix.
For example :- E:\FileStore\extract.csc or extract
EXT_COL_DELIMITER :- this is the column delimiter that is being used; in my example the calc script is creating a comma delimited file so I just enter ,
EXTRACT_DATA_FILE_IN_CALC_SCRIPT – This only applies if you are using the query type of CalcScript; it is the location of the file created by the calc script, and this file must be accessible by ODI.
The other options are self-explanatory and have been covered previously.
The IKM I used was “IKM SQL to SQL Append” with the option of “DELETE_ALL” so the table is cleared out each time the interface is run.
When I ran the interface for the first time it failed with the error:-
Extraction using calculation script is not supported for essbase versions prior to 9.3
I am using version 11.1 and have encountered this error previously when loading metadata into essbase, there are currently no updated versions of the Java files so even though Oracle are pushing to use ODI it is not fully compatible with 11.1.
I wasn’t going to leave it there, so I tracked down the Java class that generates the error; it is in ODIEssbaseDataReader.class.
I updated and compiled the class so that it does not apply the version logic for versions greater than 9; after putting the compiled class back into the jar package the interface ran through without a problem.
What happens when the interface is executed is that ODI runs the specified calc script, the calc script outputs to a text file, and once the export is complete ODI loads the text file into the database table.
Ok, onto the next type of extraction method Report Script, first step is to create the report script.
Make sure you use the commands {SUPALL}{TABDELIMIT}{ROWREPEAT}{NOIDENTGEN}; these will format the output in a way ODI can interpret.
I am going to use the same interface as used in the Calc Script method so I wrote the report to replicate the same output.
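A report script along these lines replicates that layout - again only a sketch, and the row and column member selections are just examples to adjust to your own outline:

{SUPALL}{TABDELIMIT}{ROWREPEAT}{NOIDENTGEN}
<COLUMN ("Scenario")
"Actual" "Budget" "Forecast"
<ROW ("Market", "Product", "Year", "Measures")
<DIMBOTTOM "Market"
<DIMBOTTOM "Product"
<CHILDREN "Year"
"Sales" "COGS"
!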
In the interface I had to set the options
EXTRACTION_QUERY_TYPE – ReportScript
EXTRACTION_QUERY_FILE – extract or you can put in the fully qualified file path if it is not part of the essbase database.
EXT_COL_DELIMITER - The report is tab delimited so \t is used.
The final method that can be used is an MDX query; I am going to use the same interface, so I once again wrote the query to output the data in the same format.
I am sure the MDX could be written in different ways, but for this exercise it produces the desired results.
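An MDX query along these lines produces that sort of layout - a sketch only, with the member selections as examples and the Scenario members on the columns to mirror the other two methods:

SELECT
 {[Actual], [Budget], [Forecast]} ON COLUMNS,
 NON EMPTY CrossJoin(
   CrossJoin([Market].Levels(0).Members, [Product].Levels(0).Members),
   CrossJoin([Year].Children, {[Sales], [COGS]})
 ) ON ROWS
FROM Sample.Basic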
In the interface I updated the options
EXTRACTION_QUERY_TYPE – MDXQuery
EXTRACTION_QUERY_FILE – This has to be the fully qualified name of the MDX script
EXT_COL_DELIMITER – MDX output is tab delimited by default so \t was used.
So there we have it, a brief look at the different ways you can extract data from essbase; each one has its own merits and drawbacks.
That concludes looking at what can be done with the KMs for essbase and planning, next I am going to move onto the other areas of interest in ODI such as packages and variables.
Sunday, 30 November 2008
ODI Series – Loading data into essbase
Today I am going to look at loading data into essbase using ODI; now if you have followed previous blogs, such as loading metadata, then it is pretty much along the same lines except using a different knowledge module (IKM SQL to Hyperion Essbase (DATA)).
I am also going to go into a more detail on how the API loads data into essbase and what to be aware of.
To start with you need your source, and this time I am going to use a pre-populated database table; the process is nearly identical if you are going to use a flat file as a source, you obviously just need a different source DataStore.
As always the next step is to create a reversed DataStore of the db table.
Once you have selected the table and then clicked reverse it should create the new DataStore.
The target is going to be the essbase database Sample.Basic, the DataStore was created in a previous blog
The interface can now be created.
In the diagram tab, drag the SQL table DataStore to the source and the Essbase data DataStore to the target, as my source column names are the same as the target everything is automapped.
In the flow section make sure you have the IKM set to “IKM SQL to Hyperion Essbase (DATA)”
Understanding some of the options is important when using this IKM.
CLEAR_DATABASE :- This will execute the following calculation script commands for block storage cubes (CLEARBLOCK ALL, CLEARBLOCK UPPER, CLEARBLOCK NONINPUT); for ASO cubes you can only select ALL and it will clear all data.
MAXIMUM_ERRORS_ALLOWED :- If this is set to 0 it will ignore any errors such as a member not found and keep trying to load.
COMMIT_INTERVAL :- This number will be the amount of records of data that are sent to essbase in a chunk. Setting this amount to a large value can cause performance issues if you have any records that get rejected, I will go into this in more detail shortly.
RULES_FILE :- You don’t have to use a rules file with this IKM but an important thing to note is that without a rules file the data will always be sent as “overwrite” and you will need to use a rules file if you want to add to existing data.
Executing the interface will then load the data into essbase, any data which couldn’t be loaded will be logged in the error log file.
This is all fine, but recently there was a post about the performance of using ODI to load data into essbase. As I always want to know how it all works and am never satisfied until I do, I went about looking behind the scenes of the data loading.
When an interface is run it calls upon a specific knowledge module depending on the integration, the hyperion KMs then call on Java classes to do all the work, these classes are contained in the jar files in the oracledi\drivers directory.
The classes also rely on the essbase Java API. When a dataload is run a chunk of the source data is grabbed, the amount of data in each chunk depends on the value you have set in the COMMIT_INTERVAL option, the default being 1000 records.
What happens now is the data is streamed in using the API until all the records have been processed and the data is then committed to essbase. This process is repeated until all the chunks of source data have been processed.
The methods used in the Java API are :-
beginDataLoad - which starts the update to the essbase database.
If you look in your essbase application log file you can see when this command has been called.
Received Command [StreamDataload] from user [hypadmin]
sendString - which sends a string of data to the essbase database.
endDataload - commits all the data that has been sent from each sendString method.
This process works fine unless there are some invalid records e.g. records with members that don’t exist in the dimension it is being loaded against.
The data will not be rejected until the endDataload method is called, so say you have the commit interval set to 1000, 1000 records of data will still be sent to essbase and all of them will be rejected if any of them are invalid.
I updated my source data to send a record of “Florida1” instead of “Florida”, this is what you will see in the essbase app log.
[Sun Jan 11 13:17:18 2025]Local/SAMPLE/Basic/hypadmin/Error(1003014)
Unknown Member [Florida1] in Data Load, [999] Records Completed
[Sun Jan 11 13:17:18 2025]Local/SAMPLE/Basic/hypadmin/Error(1003050)
Data Load Transaction Aborted With Error [1003050]
To compensate for this error in the data, the code then tries to load each record one by one from the start of the chunk until it has processed all the records in the chunk.
So if you have the commit interval as 1000 then it will try to load each of those 1000 records one by one until it has processed the 1000 records, and then it goes back into the stream loading of the next chunk; if it hits another error then it will go into single record update again.
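To picture the behaviour, here is a rough sketch in Java of that chunk-then-retry logic - this is not the real ODI driver code; the stream interface below just stands in for the beginDataload/sendString/endDataload calls mentioned above, and all the other names are mine:

import java.util.List;

// stand-in for the Essbase Java API streaming calls used by the KM
interface EssbaseStream {
    void beginDataload() throws Exception;
    void sendString(String record) throws Exception;
    void endDataload() throws Exception;   // commit point - invalid members surface here
}

class ChunkLoader {
    static void loadChunk(EssbaseStream cube, List<String> chunk) {
        try {
            cube.beginDataload();
            for (String record : chunk) {
                cube.sendString(record);       // stream every record in the chunk
            }
            cube.endDataload();                // whole chunk commits or fails together
        } catch (Exception chunkFailed) {
            // one bad record aborts the whole chunk, so fall back to sending the
            // same records again one at a time, logging the ones that get rejected
            for (String record : chunk) {
                try {
                    cube.beginDataload();
                    cube.sendString(record);
                    cube.endDataload();
                } catch (Exception rejected) {
                    System.err.println("Rejected record: " + record);
                }
            }
        }
    }
}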
If you look at the log the interface produces you will see it enters single record loading.
2025-11-01 14:44:51,375 DEBUG [DwgCmdExecutionThread]: Error occured in sending record chunk...Cannot end dataload. Analytic Server Error(1003014): Unknown Member [Florida1] in Data Load, [13] Records Completed
2025-11-01 14:44:51,375 DEBUG [DwgCmdExecutionThread]: Sending data record by record to essbase
This adds an extremely large overhead to the processing time and it is a really inefficient way of processing data; hopefully one day there will be some updates to the Java code that take these issues into account. Until then it is something to watch for, and you will have to play around with the commit interval to find the fastest processing time or try to make sure your data is clean.
Hopefully that gives you an insight into what is happening when loading data; you should also now be able to create your own essbase dataload interfaces.