Well, here it is: the end of a journey through the world of the data integrator. I say the final part, but there will be more blogs in the future covering anything I come across that is useful and needs to be shared.
Today I will be looking at scheduling your integrations using the Operator, and also at how to execute them over the web using the Java invocation API.
Ok, you have your integration up and running, but you want to run it on a scheduled basis; this is how it is done.
The first thing that needs to be done is to create a scenario; the scenario compiles all the elements of your integration ready to be executed. A scenario can be created from an interface, package, procedure or variable. For this example I am going to use a previously created package.
There are a couple of ways of generating a scenario: you can right-click your component and choose “Generate Scenario”, or open the component, go to the scenarios tab and generate it from there.
If it is a package you are scheduling and it contains variables, you will be presented with the option to set their values when the scenario is executed.
If you execute the scenario, the session variable window will open, where you can enter values for the selected variables.
If you change the component, such as a package, in any way, you will need to regenerate the scenario to compile it with the changes; this can easily be done from the right-click menu.
Right-clicking the scenario also gives the option to schedule it.
This will open the scheduling window, which has a multitude of options to suit your requirements.
There is also a variables tab that lets you set the initial values of the variables; once these parameters have been set and applied, they will be sent to the Operator.
Opening the Operator and selecting the Scheduling tab will display information about all the scheduled scenarios.
You can also schedule scenarios directly from the scenarios section in the Operator, and that’s all there is to it; couldn’t be easier.
If you want to execute a scenario from a Java application or over the web, one way of doing this is to use the provided invocation Java package.
Within the ODI installation directory (\OraHome_1\oracledi\tools\web_services) there is a file named odi-public-ws.aar. The .aar extension stands for Apache Axis Archive; Axis is Apache’s Java platform for creating web service applications, which I am not going to go into today.
What I am interested in is inside this file: in the lib directory is the Java archive odi-sdk-invocation.jar.
Once the file has been extracted from the .aar file, you are ready to use it with your Java IDE of choice.
The API documentation is available at \OraHome_1\oracledi\doc\sdk\invocation\index.html
One thing to be aware of is that you need to have an agent created to use the API with the example I am going to show; I went through the agent setup in an earlier blog.
First, create a new instance of the OdiCommandScenario object and set the scenario and context information.
If it is a package with variables associated with it, you can also set them.
You can also set whether the scenario is executed in synchronous mode (waits for the scenario to complete executing) or asynchronous mode (starts the scenario and returns immediately).
Next, use the OdiRepositoryConnection object.
There are a number of JDBC connection methods but these are not needed if using the scheduler agent.
With all the parameters set, the OdiInvocation object can be initialized with the host name and port of the agent.
If the Java code is going to be executed on the same machine as the ODI agent, and the agent is using the default port 20910, then it can be initialized with the no-argument constructor.
The invokeCommand method can then be used, passing in the connection and scenario objects, to execute the scenario.
Executing the scenario returns an OdiInvocationResult object; the isOk method returns true or false depending on whether the request was successful.
It also has a method, getSessionNumber, which returns the session id of the executed scenario.
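The sequence above can be sketched in a few lines of Java. Treat this as a hedged, pseudocode-style sketch rather than working code: it will not compile without odi-sdk-invocation.jar on the classpath, and the setter names, the setSyncMode value and the scenario details (LOADMEASURES, GLOBAL, the variable name) are assumptions for illustration; check the API documentation at the path above for the real signatures.

```java
// Sketch only: requires odi-sdk-invocation.jar; names below are illustrative.
OdiCommandScenario scenario = new OdiCommandScenario();
scenario.setScenName("LOADMEASURES");   // hypothetical scenario name
scenario.setScenVersion("001");
scenario.setContextCode("GLOBAL");
scenario.setSyncMode(1);                // synchronous; asynchronous would return straight away
scenario.addVariable("LogDirectory", "E:\\Logs");  // only needed if the package has variables

// No JDBC details are needed when going through the scheduler agent
OdiRepositoryConnection connection = new OdiRepositoryConnection();

// Host and port of the agent; on the same machine with the default
// port 20910 the no-argument constructor can be used instead
OdiInvocation invocation = new OdiInvocation("odihost", 20910);
OdiInvocationResult result = invocation.invokeCommand(scenario, connection);

if (result.isOk()) {
    System.out.println("Session id: " + result.getSessionNumber());
}
```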
So there we have a really simple example of executing a scenario using the Java API. Now, if we want to run this from the web, it can easily be transferred to a JSP script.
Yes, it’s very basic and only runs a specific scenario then outputs the session id if it was successful, but the script can be extended in many ways, such as using dropdowns to select the scenario.
With a bit of work, here is an example of what is possible combining JSP, Java classes & Ajax :- Have a look Here
Well, that completes the ODI series. I believe I have covered the most important aspects of ODI and integrating it with Planning and Essbase. I know the series has proved quite popular and I am glad it has helped people starting out with ODI.
OK, what next ….
Sunday, 25 January 2009
Saturday, 17 January 2009
Planning V11 Sample application bug
Usually I wouldn’t really highlight a bug, but I am not sure when Oracle will release any patches for V11, and this bug is going to affect people who start learning the new version of Planning with the provided sample application. This bug applies to versions 11.1.1.1.0 & 11.1.1.0.0.
When you create a sample application, it sets the start year to the current year on the machine running Planning.
There is no problem creating the application itself; it is only when you try to initialize the application that the problem occurs.
Not the best start if you are a newcomer to Planning, and the last thing you want to be doing is trying to figure out what went wrong.
The problem relates to the year member FY08.
When initializing, a new member called Actual is created in the Scenario dimension; next it tries to set the start year for Actual to FY08. FY08 does not exist because the application was automatically created starting at 2009, so it fails.
The solution for now is simple: first delete the sample application, then update the clock on the Planning machine and set it to 2008. Restart the Planning service and create the sample application again.
This time the start year is set to 2008, so in Planning it will show as FY08, and when you initialize it should run through without any problems.
The clock can then be returned to the correct setting on the Planning machine. No doubt this will be fixed in a future patch, and I will update this blog when one becomes available; until then I am sure more people are going to encounter this issue. At least there is an easy workaround.
Sunday, 11 January 2009
ODI Series – A few extra tips
Before I move on to the final part of my ODI series, I thought I would jot down a couple of things I have come across while looking at ODI. I had better write them down because they will hopefully help others who come across the same problems, and also jog my terrible memory if I am asked again.
Essbase Loading Multiple UDAs
It is simple to load multiple UDAs with Planning and ODI, but the functionality doesn’t seem to have been built in as standard with the current KMs for Essbase; saying that, it is not difficult to get around it.
As standard, reversing an Essbase cube into a DataStore creates one column for UDAs. It does the same for Planning, but there you can separate each UDA with a comma in your source. I did try this at first with Essbase, but it just created one long UDA string with all the commas in it.
For this integration I am going to be loading multiple UDAs into the product dimension from a flat file.
First of all, I duplicated an existing reversed Essbase DataStore; it is entirely your decision whether you want to do this or not.
Next I expanded the product dimension and inserted a new column and named it UDA2.
Now carry on as you usually would: create your source file including the UDAs.
Reverse the file to create a DataStore
Create the essbase load rule and test with the source file.
Create the interface
Execute.
All done, nice and simple.
This method can be used in other areas, such as ASO cubes and trying to load solve order information. By default, when you reverse a cube it does not distinguish between a BSO and an ASO cube, so you always end up with the same columns. When reversing, the Java code uses a file called EssbaseMetadata.xml, which holds the DataStore definitions; the XML file is located in odihapp_essbase.jar.
This is fine when it comes to loading solve order, but unfortunately if you want to extract solve order information, I don’t think it is possible with the current code.
In the Java code it uses one of the IEssMemberSelection API methods, called executeQuery.
The query does not include solve order, so that removes the ability to ever return the value. Looking at the Essbase Java API documentation, it doesn’t look like you can use the executeQuery method with solve order anyway; you would have to retrieve the value another way with the API. So unless you are willing to rewrite the code, it looks like it won’t be possible until Oracle adds some extra functionality to the KM drivers.
Next, a really simple tip for when you are trying to sum data from a source file into Planning. By default, when loading data into Planning, it loads one record at a time and is set to overwrite, so say you had the following data.
If you use ODI to load the file into Planning, each record will overwrite the last, and you would end up loading only the value of 3422 when you wanted the summed value of 7286.
Now, if you were using a SQL database as your source, you could easily create a view to return the grouped value, but we are using a file; so how is it done without modifying the source?
First, make sure your source DataStore data field is set to a numeric type.
In your interface, enter the expression editor and use the SUM function on your target data column.
Using the SUM function will automatically create a group by query on the source data so the result is summed.
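The effect of that generated GROUP BY can be illustrated with a small standalone Java sketch; the member name and the 3864 record are made up for illustration, but the 3422 and 7286 figures come from the example above:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SumDemo {
    public static void main(String[] args) {
        // Hypothetical source records: the same member appears twice
        String[][] rows = {
            {"Product1", "3864"},
            {"Product1", "3422"}
        };

        // Group by the member column and sum the data column,
        // mirroring the GROUP BY query ODI generates for SUM()
        Map<String, Integer> grouped = new LinkedHashMap<>();
        for (String[] row : rows) {
            grouped.merge(row[0], Integer.parseInt(row[1]), Integer::sum);
        }

        // A plain overwrite load would leave 3422; the grouped load gives 7286
        System.out.println(grouped.get("Product1"));
    }
}
```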
So there we go, a couple of tips; easy and simple if you know how.
Sunday, 4 January 2009
ODI Series - Putting it all together
In one of my last blogs I ended up with a couple of interfaces, which I said could be automated with the use of a package. Today I am going to go over the basics of packages and bring variables into the equation.
A package lets you execute a sequence of steps in an organised diagram, the steps can be :-
Executing an interface
Executing a procedure
Setting, declaring, evaluating or refreshing a variable.
Executing ODI tools; there are a number of tools provided that perform specific tasks at run-time, such as waiting for data to arrive, FTPing information or sending emails. There is a full PDF document describing each tool and its parameters, which can be downloaded from here.
Say you have a situation where you receive files on a daily basis. When the files arrive, the information needs to be loaded into an Essbase cube; once that has completed, an area of metadata has to be extracted from the cube and loaded into Planning, Planning refreshed, the files archived and finally an email sent informing that the process has completed. This can be achieved quite easily using a package, and I am going to go through the steps to automate this scenario.
When you create a new package you start with a blank diagram.
Before I start adding objects to the diagram I am going to create some variables. A variable is an object that stores a single value. This value can be a string, a number or a date.
The value is stored in ODI, and can be updated at run-time.
I am going to create a variable that will store the location of the archive directory, a variable that will store the log directory and a variable that will hold the current date.
Creating a variable is simply done by right-clicking Variables in the Designer and selecting Insert Variable.
Datatypes : Alphanumeric (255 characters), Date, Numeric (Maximum 10 digits) or Text (unlimited length).
Action : This parameter sets how long the value of a variable is kept for:
Non-persistent : The value of the variable is kept in memory for a whole session.
Last value : ODI stores in its repository the latest value held by the variable.
Historize : ODI keeps a history of all the values held by this variable.
I created another variable just like the one above for the log directory.
To create the variable to hold the date, it is exactly the same process, except you need to enter information into the refreshing tab; when you refresh a variable in the package, it will perform a query based on what has been entered and the schema chosen.
In the schema dropdown I selected a schema that I had set up in the Topology Manager to point at SQL Server; I chose this as I want to run a query against SQL Server to return the current date.
The SQL to return the date is :- SELECT CONVERT(VARCHAR(10),GETDATE(),105)
This will return the current date using the function GETDATE() and format it using the CONVERT function.
If you are using Oracle then you can use "SELECT sysdate FROM dual" and use something like TO_CHAR to format it.
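For reference, style 105 in SQL Server’s CONVERT produces a dd-mm-yyyy string; a quick standalone Java check of the same pattern (nothing ODI-specific here, just a sanity check of the format those queries return) looks like this:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class DateDemo {
    public static void main(String[] args) {
        // dd-MM-yyyy mirrors SQL Server's CONVERT(VARCHAR(10), GETDATE(), 105)
        // and an Oracle TO_CHAR(sysdate, 'DD-MM-YYYY')
        String today = new SimpleDateFormat("dd-MM-yyyy").format(new Date());
        System.out.println(today);   // e.g. 25-01-2009
    }
}
```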
You can use the refreshing option in various ways; for example, you could have values stored in a field in a database table and set the variable value by retrieving the information from that field.
When using variables in a package you declare them and then you can use them as an option value in a KM.
So if you open up one of the interfaces that is going to be used in the package, you use "#variable name", as shown below where I have used the LogDirectory variable.
Once the variable has been declared in the package, it will be passed into the interface; please note, though, that this will not work if you just run the interface outside of the package.
Right, back to the package. The first step in our scenario is to wait for a file; to do this, one of the ODI tools is required – OdiFileWait.
All the parameters are explained in full detail in the ODI tools documentation.
The settings that I used:-
Action - Move – this will mean when the file arrives it will be moved.
Filename Mask – any files matching measures*.csv will be checked for.
Target Directory – the file will be moved to this directory.
Target File – the file will be renamed to Measures.csv to align with the naming in the interface.
Overwrite Files – if the file already exists in the target directory it will be overwritten.
Timeout - 0 – This is how long to wait for the file; 0 means wait indefinitely.
Polling Interval – the time in milliseconds between checks for the file.
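OdiFileWait is doing nothing magical: it polls a directory for files matching the mask and, with Action set to Move, moves and renames the first match. A rough standalone Java equivalent of that behaviour, using temporary directories as stand-ins for the real landing and target directories, might look like this:

```java
import java.io.IOException;
import java.nio.file.*;

public class FileWaitDemo {
    public static void main(String[] args) throws IOException, InterruptedException {
        Path incoming = Files.createTempDirectory("incoming");   // stands in for the landing directory
        Path target = Files.createTempDirectory("filestore");    // stands in for E:\FileStore

        // Simulate the file arriving before we start polling
        Files.write(incoming.resolve("measures_20090125.csv"), "Product1,100".getBytes());

        Path moved = null;
        long deadline = System.currentTimeMillis() + 10_000;     // Timeout (0 in ODI = infinite)
        while (moved == null && System.currentTimeMillis() < deadline) {
            // Filename Mask: measures*.csv
            try (DirectoryStream<Path> matches = Files.newDirectoryStream(incoming, "measures*.csv")) {
                for (Path match : matches) {
                    // Action = Move, Target File = Measures.csv, Overwrite Files = yes
                    moved = Files.move(match, target.resolve("Measures.csv"),
                            StandardCopyOption.REPLACE_EXISTING);
                    break;
                }
            }
            if (moved == null) Thread.sleep(500);                // Polling Interval
        }
        System.out.println(moved != null && Files.exists(moved) ? "moved" : "timed out");
    }
}
```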
It is possible to use variables for any of the parameters; it all depends which settings are likely to change and can be controlled more easily with the use of variables.
The next stage is to run the interface that loads the measures into the Essbase cube. As the interface uses the log directory variable, it will need to be declared first; to do this, you drag the variable onto the diagram and set the type to “Declare Variable”.
Creating the flow between the steps is done in the advanced section; it gives the ability to set which step to perform next, depending on whether the current step succeeds or fails.
The interface can be dragged onto the diagram. If the interface fails, I want the process to send a failure email and then stop; this can be achieved by using another tool – OdiSendMail.
If the process fails an email will be sent with the interface log files attached, the variable #LogDirectory will be converted into the correct stored value.
After the Essbase load process has completed, the next step is to add the interfaces that extract the metadata from Essbase and load it into Planning; I am using the interfaces from a previous blog.
Now, if any of the newly added interfaces fail, you could add another email failure object, but this can get messy and unmanageable in the end; one way of streamlining it is the use of variables.
Create a new variable and set it to “Non-persistent”, as the value is only important in the current session. In the refreshing section we can use one of the ODI API functions, <%=odiRef.getPrevStepLog("STEP_NAME")%>; this will retrieve the step name from where the error occurred. You still need to use it as a query, so we end up with :-
If you are using Oracle then the query would be :- SELECT '<%=odiRef.getPrevStepLog("MESSAGE")%>' FROM DUAL
In the email body we can use the variable to highlight where the step failed.
In the diagram at each point of failure the process can be updated.
This means whenever a process fails it will refresh the variable with the step name and place that in the body of the email.
Ok, say all the processes have completed; we want to archive the metadata file with the current date on it.
First we need to use the date variable and refresh it.
The archive directory variable also needs to be dragged onto the diagram, and then another ODI tool called OdiFileMove is required.
This will take the file Measures.csv from the E:\FileStore directory (most of the time I would have used a variable for the file store directory and used it throughout) and move it into the archive directory, renaming it to Measures(current date).csv, e.g. Measures01-01-09.csv.
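The archive step can be mimicked in plain Java too; this sketch uses temporary directories standing in for E:\FileStore and the archive directory, and applies the same dd-MM-yy suffix as the Measures01-01-09.csv example:

```java
import java.io.IOException;
import java.nio.file.*;
import java.text.SimpleDateFormat;
import java.util.Date;

public class ArchiveDemo {
    public static void main(String[] args) throws IOException {
        Path fileStore = Files.createTempDirectory("filestore"); // stands in for E:\FileStore
        Path archive = Files.createTempDirectory("archive");     // stands in for the archive directory
        Files.write(fileStore.resolve("Measures.csv"), "Product1,100".getBytes());

        // Same dd-MM-yy shape as the Measures01-01-09.csv example
        String today = new SimpleDateFormat("dd-MM-yy").format(new Date());
        Path archived = Files.move(fileStore.resolve("Measures.csv"),
                archive.resolve("Measures" + today + ".csv"),
                StandardCopyOption.REPLACE_EXISTING);

        System.out.println(archived.getFileName());
    }
}
```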
Finally you could use the tool OdiZip to compress the file and then send out the success email.
Another useful feature of variables is the ability to evaluate them and execute a step depending on the Boolean result; for example, say you have a daily and a weekly process: if a variable is set to weekly, it would run the weekly process, otherwise the daily process.
You can also use a variable in a DataStore; for example, if you have a file DataStore and use it in an interface to create an output text file, you could use the date variable to always output the file with the current date on it.
As long as you set the variable in the package you will be able to pass it to the DataStore.
There are many possibilities when using variables in your packages, and with all the provided ODI tools you can build up complex packages. I have only just touched the surface today to give you a start with using them, and hopefully it will give you ideas for achieving your own integrations.
Next I will move on to running integrations over the web and putting the invocation API to good use.