Well, it has certainly been a long time in the making, but at last the patch has been released to tackle the Essbase data load issue. The problem manifests itself when a data load hits a rejected record, e.g. an unknown member: the load reverts to loading record by record instead of using the bulk commit size set in the KM options.
If you want to find out more about the bug, have a read of one of my earlier blogs here.
The patch is available on Metalink, and here is all the information you need.
ORACLE DATA INTEGRATOR 10.1.3.5.2_02 ONE-OFF PATCH
Patch id - 8785893
8589752: IKM SQL to Essbase Data - Load in bulk mode instead of row by row processing when an error occurs during load
The patch has to be installed on version 10.1.3.5.0 or greater.
The patch itself is only 59KB; it is just a replacement jar, odihapp_essbase.jar.
To install the patch, shut down any ODI processes such as Designer, Topology Manager, Security Manager, Operator or the Agent.
Rename the existing odihapp_essbase.jar in oracledi\drivers and extract the patched jar from the patch archive to the same location.
Then you can start everything back up.
So does it work? Well, let's have a quick go. First of all, be aware that if you don't use an Essbase load rule the problem will still exist; the fix only works when a load rule is used.
Back to my trusty quick example that loads data from a database table into the Sample Basic Essbase database.
I have put a deliberate error in the Market column to trip up the data load.
Here is an example of the log file before applying the patch.
And here is the log file after patching.
And an error file is produced with the rejected records, so it looks like a success. I didn't have a chance to test a large set of data, but it appears to use the same method as a standard Essbase data load with a rules file, so performance should be the same. Good news at last!!!