Sunday, 15 February 2009

ODI Series – Essbase/Planning – Automating changes in data - Part 1

I said I had finished the ODI series, but by popular demand I thought I would return with another instalment. Today I am going to look at automating the loading of fresh data into Essbase, and in the next blog I will look at the same concept with Planning metadata.

So what does this mean? Well, say you have a data store of some description whose data is automatically incremented. Instead of having to load all the data into the database at a set interval, you can use ODI to capture just the additional data and, with the use of a package, monitor for when the change occurs so that a load is executed.

In ODI terms this process is known as Changed Data Capture (CDC). CDC is performed by journalizing models; journalizing a model consists of setting up the components to capture the changes (inserts, updates and deletes) made to the records of the model's datastores.

ODI has two journalizing modes:

• Simple Journalizing tracks changes in individual datastores in a model.

• Consistent Set Journalizing tracks changes to a group of the model's datastores, taking into account the referential integrity between these datastores.

The journalizing components are :

• Journals: Where changes are recorded. Journals only contain references to the changed
records along with the type of changes (insert/update, delete).

• Capture processes: Journalizing captures the changes in the source datastores, typically by creating triggers on the data tables.

• Subscribers: CDC uses a publish/subscribe model. Subscribers are entities (applications,
integration processes, etc) that use the changes tracked on a datastore or on a consistent set.
They subscribe to a model's CDC to have the changes tracked for them. Changes are
captured only if there is at least one subscriber to the changes. When all subscribers have
consumed the captured changes, these changes are discarded from the journals.

• Journalizing views: Provide access to the changes and the changed data captured. They are
used by the user to view the changes captured, and by integration processes to retrieve the
changed data.
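To make the publish/subscribe behaviour concrete, here is a minimal Python sketch (an illustration of the idea, not ODI internals) of how a journal only keeps a change for as long as at least one subscriber has yet to consume it; the subscriber name is made up:

```python
# Illustrative sketch of CDC journal bookkeeping (not real ODI code).
class Journal:
    def __init__(self):
        self.subscribers = set()
        # pending changes per subscriber: (record_key, flag)
        # where flag is 'I' (insert/update) or 'D' (delete)
        self.pending = {}

    def subscribe(self, name):
        self.subscribers.add(name)
        self.pending[name] = []

    def capture(self, record_key, flag):
        # changes are captured only if there is at least one subscriber
        for sub in self.subscribers:
            self.pending[sub].append((record_key, flag))

    def consume(self, name):
        # a subscriber retrieves its changes; once every subscriber has
        # consumed a change it no longer exists anywhere in the journal
        changes, self.pending[name] = self.pending[name], []
        return changes

journal = Journal()
journal.subscribe("SampleDataSubscriber")
journal.capture(101, "I")   # an insert/update to record 101
journal.capture(102, "D")   # a deletion of record 102
print(journal.consume("SampleDataSubscriber"))  # [(101, 'I'), (102, 'D')]
print(journal.consume("SampleDataSubscriber"))  # [] - already consumed
```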

Ok, as usual I don't like to get too bogged down in lengthy spiel, so I think we will get right into how CDC can be set up and what it is actually doing. I understand this topic may have been covered by others in different forms, but I wanted to write about it in a way that most could understand and relate to the Essbase/Planning world.

To keep everything simple I will be using the Sample.Basic Essbase database as the target; the source will be a SQL Server table. I am also going to assume that you have enough understanding of the concepts of ODI; if you don't then I suggest reading through my previous blogs.

The above table will be used as the source, and any changes to this table will need to be captured. Before you use CDC on a table, make sure it has a primary key, otherwise you won't be able to use it.

First the Model is created that is going to hold the journalized tables.

The table that is going to be monitored for changes is selected and reversed.

The next step will be to choose the Journalizing Knowledge Module, if you are using SQL server you will need to have imported in the following KMs
  • JKM MSSQL Consistent
  • JKM MSSQL Simple
For this example we will be only tracking changes in one table so we will be using the Simple KM.

If you are using Oracle there are a number of KMs available depending on the version.

The next step is to flag the datastore that is going to be journalized, right click the datastore and select “Changed Data Capture” > Add to CDC

If you edit your model again and choose the “Journalized Tables” tab the datastore should be visible.

A subscriber (described earlier) should be added next, to do so right click the datastore > “Changed Data Capture” > Subscriber > Subscribe

The name of the subscriber can be anything that seems relevant.

Once you click OK a session is executed and, as this is the first time a subscriber has been created, a db table (SNP_SUBSCRIBERS) is created and populated with the subscriber information.

Ok, now the journal can be started that will capture the changes. Right click the datastore > “Changed Data Capture” > “Start Journal”, select the subscriber and another session will be started; this will create the required journal tables, views and triggers.

I will try and explain the details of what happens in the session:

Two triggers (DT$<CDC table> & UT$<CDC table>) are created on the table that is going to be monitored, so in my example UT$Sample_Data & DT$Sample_Data; one trigger monitors for any updates or inserts and the other monitors deletions. Once a change occurs, a table named J$<CDC table> (J$Sample_Data) will be updated.

A table named J$<CDC table> is created, as the table I am using in this example is called “Sample_Data” a table named J$Sample_Data is created; this is the table which is updated once the above trigger is fired.

Two views are created, JV$<CDC table> & JV$D<CDC table>, so in my example it created JV$Sample_Data & JV$DSample_Data. These views analyse information from the tables Sample_Data and J$Sample_Data; basically they will display new insert/update or delete records.
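If it helps to picture what the triggers, journal table and views are doing, the following Python sketch mimics the mechanics. Table and column names follow the example, but this is only an illustration of the behaviour, not the T-SQL that ODI actually generates:

```python
# Mimics the objects ODI generates: the monitored table (Sample_Data),
# the journal table (J$Sample_Data) and the journal view (JV$Sample_Data).
sample_data = {}    # primary key -> row, stands in for Sample_Data
j_sample_data = []  # stands in for J$Sample_Data

SUBSCRIBER = "SampleDataSubscriber"  # assumed subscriber name

def insert_row(pk, row):
    # what the UT$ (insert/update) trigger records after a change
    sample_data[pk] = row
    j_sample_data.append({"JRN_SUBSCRIBER": SUBSCRIBER, "JRN_FLAG": "I", "ID": pk})

def delete_row(pk):
    # what the DT$ (delete) trigger records after a delete
    sample_data.pop(pk, None)
    j_sample_data.append({"JRN_SUBSCRIBER": SUBSCRIBER, "JRN_FLAG": "D", "ID": pk})

def jv_sample_data():
    # like the JV$ view: journal entries joined back to the changed data
    return [dict(entry, DATA=sample_data.get(entry["ID"])) for entry in j_sample_data]

insert_row(1, {"MEASURE": "Sales", "VALUE": 100})
for change in jv_sample_data():
    print(change["JRN_FLAG"], change["ID"], change["DATA"])
```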

Once journalizing is active a green icon will be displayed on the datastore.

Inserting a record into the sample_data table

Fires off the trigger and inserts a record into the J$Sample_Data table

The field JRN_SUBSCRIBER is populated with the name of the subscriber which is monitoring the changes, the JRN_FLAG field is populated with 'I' meaning an insert has occurred ('I' indicates insert/update & 'D' indicates delete), and the ID field is populated with the primary key value of the record which has been inserted.

Running the view JV$Sample_Data will return the inserted record and journal information.

Right, back to ODI, you can view the changed data by right clicking the datastore > “Changed Data Capture” > Journal Data

This is performing the same functionality as running the view but in the ODI environment.
Now that journalizing is active, an interface can be created which will take the new records and load them into our Essbase database.

When you drag a journalized datastore onto the source a check box appears, when you check the box the journalizing columns JRN_FLAG, JRN_DATE and JRN_SUBSCRIBER become available.

A filter is automatically created on the JRN_SUBSCRIBER & JRN_DATE columns. Make sure you highlight the filter and update the expression, as by default the subscriber to filter on is named 'SUNOPSIS'; you will need to update this to the name of the subscriber you are using. You can test you have it correct by applying the changes, then right clicking the source datastore and selecting “Data”. The filter on JRN_DATE is commented out by default and can be updated if you require it.

I added an extra filter on the JRN_FLAG column to only return records that equalled 'I', i.e. inserts/updates and not deletes.
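Taken together, the two filters simply end up as predicates on the journal columns. The Python equivalent of what the interface keeps after both filters would be something like this (column names as in the journal table, subscriber name assumed):

```python
# Keep only this subscriber's rows, and only inserts/updates (not deletes),
# mirroring the JRN_SUBSCRIBER and JRN_FLAG filters on the interface.
def filter_journal(rows, subscriber):
    return [r for r in rows
            if r["JRN_SUBSCRIBER"] == subscriber and r["JRN_FLAG"] == "I"]

rows = [
    {"JRN_SUBSCRIBER": "SampleDataSubscriber", "JRN_FLAG": "I", "ID": 1},
    {"JRN_SUBSCRIBER": "SampleDataSubscriber", "JRN_FLAG": "D", "ID": 2},
    {"JRN_SUBSCRIBER": "SUNOPSIS", "JRN_FLAG": "I", "ID": 3},
]
print(filter_journal(rows, "SampleDataSubscriber"))  # only the ID 1 row
```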

The rest of the interface is set up exactly as you would if you were loading data into any Essbase database (this has all been covered in previous blogs).

Executing the interface will load any changed data into Essbase. Due to the nature of simple journalizing, once the interface has completed successfully it deletes all the journal information (the JRN_FLAG is not taken into account), so if you require more than one interface in your integration to use the journal information then consistent set journalizing will be required; this will be covered in the next blog.

This interface could simply be scheduled, though if you want to automate the interface to run when a change in data happens then there are a number of journalizing tools available.

• OdiWaitForData waits for a number of rows in a table or a set of tables.

• OdiWaitForLogData waits for a certain number of modifications to occur on a journalized
table or a list of journalized tables. This tool calls OdiRefreshJournalCount to perform the
count of new changes captured.

• OdiWaitForTable waits for a table to be created and populated with a pre-determined
number of rows.

• OdiRetrieveJournalData retrieves the journalized events for a given table list or CDC set
for a specified journalizing subscriber. Calling this tool is required if using Database-Specific
Processes to load journalizing tables. This tool needs to be used with specific knowledge
modules. See the knowledge module description for more information.

• OdiRefreshJournalCount refreshes the number of rows to consume for a given table list or
CDC set for a specified journalizing subscriber.

For this example I am going to use the OdiWaitForLogData tool, first a package is created and the tool added on to the diagram.

There are a number of parameters that can be set for the tool. I set it up so it would check the journalized table Sample_Data using the subscriber SampleDataDataSubscriber; it will check every minute with an infinite timeout and move on to the next step when five rows of data have changed in the journalized table.

All that was left was to add the interface to load the data and once the data has been loaded start monitoring again.

Now after executing the package, the session waits until five records have been added then the data load is initiated.
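The behaviour of the package is easy to picture as a polling loop. Here is a rough Python sketch of what OdiWaitForLogData is doing; the parameter names and the change-count callable are my own stand-ins for the tool's parameters, not its real API:

```python
import time

def wait_for_log_data(get_change_count, unit_count=5, polling_interval=60, timeout=0):
    """Block until at least unit_count changes are journalized.

    get_change_count: callable returning the current number of unconsumed
    journal rows for the subscriber. timeout=0 means wait forever.
    """
    waited = 0
    while True:
        if get_change_count() >= unit_count:
            return True            # enough changes: move on to the load step
        if timeout and waited >= timeout:
            return False           # gave up waiting
        time.sleep(polling_interval)
        waited += polling_interval

# Simulated journal counts growing over three polls; in the package the
# interface would run once this returns, then the wait would start again.
counts = iter([0, 3, 5])
print(wait_for_log_data(lambda: next(counts), unit_count=5, polling_interval=0))  # True
```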

In the next blog I will look at using CDC when loading metadata into a Planning application. I will look at using separate interfaces depending on whether the changed metadata is a new member or a deletion; this will require the use of consistent set journalizing because, as we have seen today, once one interface has been executed in simple mode all the journal information is deleted.

Sunday, 8 February 2009

Generating emails through business rules.

The idea for this blog originates from a post in the Oracle Planning forum; as usual it starts off with a simple question and ends up on the trail of a custom defined function.

The question was aimed at the business rules web launcher and its emailing functionality: after you have validated a business rule you are given the option to send an email once it has completed.

Once the business rule has completed successfully the email is sent to the address provided.

Notice the "From" address; the question was asked whether this can be changed. I have never really used the web launcher email functionality, so I didn't know the answer, but it was time to find out.
I would have thought that it would pick up the email address from the user's profile within Shared Services and, if that didn't exist, use a default.

The only setting that I could find was the SMTP server name, which is contained in a file located in the eas\server directory.

This property can also be set from the EAS console.
After delving around in the jar files for the HBR launcher web application, I came across Email.class, part of the package com.hyperion.hbr.common, situated in eas\webapps\hbrlauncher\WEB-INF\lib\hbrcommon.jar.

In the class there is a method that sets the address the email is from, but this is obviously not the one called by the web launcher.

I updated the variable in the class to point to another email address, compiled and placed it back in the web application.

Running the rule again from the launcher still produced an email with the same from address, so the class I found was not the one being called. After more hunting I found the same class in another jar, easserverplugin.jar, which resided in the main EAS web application and not in the HBR web launcher; updating this class produced an email with the updated address.

So if changing the address the emails originate from is important, and you are willing to update a little piece of Java, then the solution is there.

This problem got me thinking whether there were other ways of sending emails from business rules. I knew there was a custom defined function (CDF) available for Essbase that gives you the ability to send emails from calc scripts, and seeing as business rules are just glorified calc scripts this seemed like the right direction to head in. I did have a look at the email CDF that can be downloaded from (here), but I wanted to have a go at writing a really simple one to use in conjunction with business rules.

I wanted to be able to use the variable functionality of business rules to set all the properties of sending an email e.g. SMTP server, addresses, subject etc.

To call a CDF in a business rule you use the RUNJAVA command, e.g. RUNJAVA com.hyperion.cdf.emailer “”

The first problem I hit was that when using the RUNJAVA command it looks like you are not able to use business rule variables as the arguments.

If I changed the variable to a hard coded value then it would validate. This was no good for what I wanted to achieve, because all the properties needed to be set by BR variables. Maybe it is possible?

The only way I could see of setting the properties was to create a function and pass them into a Java class method.

With this class I should be able to pass in the email properties and store them ready to be retrieved in the main email class.
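The idea is nothing more than a shared key/value store that the setter function fills and the mail class later reads. The real CDF is a Java class; here is the same idea modelled in Python, with function names of my own choosing:

```python
# A shared property store: the @SetEmailProps CDF writes into it from the
# calc script, and the mail-sending class reads it back later.
_email_props = {}

def set_email_prop(key, value):
    # mirrors @SetEmailProps(emailProp, emailVal)
    _email_props[key] = value
    return 1.0  # return a number so the function can sit inside a calc expression

def get_email_prop(key, default=""):
    return _email_props.get(key, default)

set_email_prop("toAddress", "admin@mycompany.com")   # example values only
set_email_prop("subject", "Business rule complete")
print(get_email_prop("toAddress"))  # admin@mycompany.com
```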

The class was compiled and archived into a jar file, then put into the \java\udf directory under the Essbase home directory.

The function needed to be registered with Essbase, which was done using a MaxL command:

create or replace function '@SetEmailProps' as
spec '@SetEmailProps(emailProp,emailVal)';

So now I can use the custom function @SetEmailProps in a business rule script, passing in two string parameters, one as the key and the other as the value.
The only problem is you can't just use the function anywhere in a script; it has to be treated like a calc command.

Now there are definitely more elegant ways of getting the function to act as a calc command, but I wrote a quick script that fixed down to a small block set and then used an IF command so I could bring the function into play.

The rule validated and ran without errors, so it was time to create a Java class which would retrieve the properties and send the email based on them.

Basically the class collects the email information which has been stored as part of the previous script, it assigns all of the properties to variables and then sends an email based on them. The class was compiled and added to the existing jar.
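The sending side can be sketched in a few lines. The described class is Java; this is a Python equivalent using smtplib and email.message from the standard library. The property keys, server and addresses are made-up examples, and the send_now switch is just so the example can build the message without a live SMTP server:

```python
import smtplib
from email.message import EmailMessage

def build_email(props):
    # assemble the message from the properties stored by the earlier script
    msg = EmailMessage()
    msg["From"] = props["fromAddress"]
    msg["To"] = props["toAddress"]
    msg["Subject"] = props["subject"]
    msg.set_content(props["body"])
    return msg

def send_email(props, send_now=True):
    msg = build_email(props)
    if send_now:
        # connect to the configured SMTP server and hand over the message
        with smtplib.SMTP(props["smtpServer"]) as smtp:
            smtp.send_message(msg)
    return msg

props = {
    "smtpServer": "smtp.mycompany.com",      # assumed server name
    "fromAddress": "hyperion@mycompany.com",
    "toAddress": "admin@mycompany.com",
    "subject": "Rule complete",
    "body": "The business rule has finished.",
}
msg = send_email(props, send_now=False)  # build only; no server needed here
print(msg["Subject"])  # Rule complete
```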

As I want to use variables in the business rule, I decided that I would have the email subject, the email source and the SMTP server as global variables with constant values.

Each of the global variables was set up as a String type and given a default value. As they are Strings, each of the values has to be enclosed in double quotes, otherwise if they are used in a business rule an error will be generated due to an invalid object type.

As it is likely an email will be required to be sent in more than one business rule I decided to place the script in a Macro so it could be called from any business rule and parameters passed in.

When the macro is called three parameters are required to be passed into it, the rule name [parm1], the target email address [parm2] and the body of the email [parm3]. I just chose these as parameters to be passed in to demonstrate what can be achieved; they could easily all be set as global constant variables.

Once all the properties have been set in the script the RUNJAVA command is used to call the SendMail class created earlier to send the email.

In a business rule the macro can be inserted and the three parameters passed into it; I created three local variables of String type to do this.

Once the variables have been created they can be added to the macro parameters.

Executing the rule will display the variables that have been defined as run-time prompts; once again it is up to your circumstances whether you use local variables or even prompts.

The rule can be executed from EAS, the web launcher, or through Planning web, either attached to a form or directly.

The macro can be called from any rule or even another macro and executed through one of the above methods giving you added useful functionality for not much effort.