I always encourage questions on topics that have not been covered in my blog, and Cameron Lackpour has raised a very good one on the OTN forums.
The question, basically, is: how do you know when there have been any metadata load errors so you can take action? It is true that an error log is produced, but there is no real indication that errors have occurred.
There are a number of different methods to attack this problem and I am going to go through a couple that you can use. You would have thought that just checking for the existence of the error log would be enough, but unfortunately the error log is recreated each time an interface is executed even if no errors are written to it; otherwise the ODI tool "OdiFileWait" could have been used to check for the existence of the file.
One method would be to use something like Jython/Java/OS commands to check the size of the file and, if it is not zero bytes, act on it. It is a perfectly acceptable way of dealing with the problem but not one of my preferred methods.
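If you did want to go down that route, here is a minimal sketch of a Jython step (for example in an ODI procedure) – the log path is an assumption, point it at wherever your knowledge module writes the error log; raising an exception makes the step fail so the package can branch on the KO path:

import os

errLog = r'E:\FileStore\enterr.log'  # assumption: path to the error log written by the KM
if os.path.exists(errLog) and os.path.getsize(errLog) > 0:
    # fail the step so the package can follow the error (KO) path
    raise Exception('Metadata load errors found in ' + errLog)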
The first method I am going to take you through is based on the statistics that are generated in one of the steps of the metadata load knowledge modules.
I did write a blog on how to report statistics correctly, so it is worth having a look through that first to get an understanding of what I am going to be talking about.
Right, I will now assume you know how to report statistics correctly by updating the "Report Statistics" step in the knowledge module.
If you run a metadata load interface that generates load errors and check the "Report Statistics" step in the Operator, you will see the number of records processed and how many of them were errors.
So by looking in the Operator we can see that 1 record was rejected, but we need to be able to act on that error. To do this we need to access the ODI work repository, where the statistical information is stored, and before we can do that we need to set up a connection to the work repository. I will briefly go through setting up the connection, which I am sure you can do for yourself by now.
In the Topology manager create a new Data Server for the technology your ODI work repository is using.
Add the JDBC connection information.
Add the physical schema information
Finally add a logical schema to the context you are using.
With the connection in place we can now access the tables in the work repository, the table that stores the statistical information is “SNP_STEP_LOG”.
As you can see there are the fields NB_ROW, NB_INS and NB_ERR.
These relate to the total number of records, the number of records inserted and the number of error records, so you can see the values for the interface I ran earlier; the whole record is identified by the session number.
The session number for the interface I ran was 909001. So if you think about it, by querying this information you will be able to find out whether, and how many, errors were generated.
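For example, for the run above you could check the figures manually with something like the following (a sketch; it assumes the session number column in SNP_STEP_LOG is SESS_NO):

select NB_ROW, NB_INS, NB_ERR
from SNP_STEP_LOG
where SESS_NO = 909001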
That is all fine, but how are we going to know the session number for the interface being run? Luckily ODI has an API command that will tell us just that: odiRef.getSession("SESS_NO")
Now we need a way of storing the value produced from the query, and once again ODI has just the thing for it – variables, to be specific a refreshing variable. A refreshing variable lets you set the value of a variable by means of a query.
Create a new variable, it can be set to a numeric type.
In the refreshing tab select the logical schema for the work repository that was set up in the topology manager earlier.
The query is added; note you will need to surround the ODI API call with <%= %> for it to compile and execute correctly. You will not be able to validate the query, as the API commands are only resolved at runtime.
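To give an idea of what the refreshing query could look like, here is a sketch (not necessarily the exact query from my screenshot) that again assumes the session number column is SESS_NO; summing NB_ERR just means the variable ends up greater than 0 if any step of the session rejected records:

select SUM(NB_ERR)
from SNP_STEP_LOG
where SESS_NO = <%=odiRef.getSession("SESS_NO")%>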
With this refreshing variable we can now store the number of errors that have been generated, we just need to put this all together in a package and add a check to act on the value of the refreshing variable.
Drag the variable onto the diagram again and set it as an evaluate variable.
This means we can check the current value of the variable and if it is greater than 0 then act on it.
In my example, if the number of errors is greater than 0 then I send out an email advising that there were errors, stating how many, and attaching the error log; if there were 0 errors then I send out a success email. This is just an example – you could add whatever process you want depending on whether there were errors or not.
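As an illustration, the failure branch could be a simple OdiSendMail step along these lines – the mail host, the addresses and the variable name ERR_COUNT are all assumptions for the example, and in this version of ODI the message body is just typed below the parameters:

OdiSendMail "-MAILHOST=mailserver" "-FROM=odi@mycompany.com" "-TO=admin@mycompany.com" "-SUBJECT=Metadata load errors" "-ATTACH=E:\FileStore\enterr.log"
There were #ERR_COUNT rejected records in the metadata load, the error log is attached.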
Well that is the first method covered and it is probably the method I prefer to use.
The second method also uses a refreshing variable, but this time the variable uses a query to read the first line of the error log; if the file has no first line the refresh fails and the package follows one step of the process, and if it does contain an error record it follows a separate step.
First of all you will need a logical schema that points to the directory location of the error logs.
If you have ever looked in the operator at the SQL generated when loading from a text file you will notice a long string of ODI specific parameters for the file that is being loaded.
Basically we are just going to lift the code that is generated and put that into a refreshing variable.
If you want to generate the code for yourself then just create a new data store using file technology
In the Files tab just set it as delimited, you don't need to change any other options.
In the Columns tab manually add a new column of string type and set the length to a high value. Now you can create an interface and use the data store as the source; the target doesn't really matter as we are just after the code that is generated.
Execute the interface; it doesn't matter if it fails, just check the Operator for the "Load data" step.
Now you can just copy the code that is generated.
Create a variable, the data type of Alphanumeric can be chosen.
Set the logical schema to the one that was created earlier that points to the error log directory.
Paste in the code
select C1 C1_C1
from TABLE
/*$$SNPS_START_KEY
SNP$CRDWG_TABLE
SNP$CRTABLE_NAME=ERROR_LOG
SNP$CRLOAD_FILE=E:\FileStore\/enterr.log
SNP$CRFILE_FORMAT=D
SNP$CRFILE_SEP_FIELD=09
SNP$CRFILE_SEP_LINE=0D0A
SNP$CRFILE_FIRST_ROW=0
SNP$CRFILE_ENC_FIELD=
SNP$CRFILE_DEC_SEP=
SNP$CR
SNP$CRDWG_COL
SNP$CRCOL_NAME=C1
SNP$CRTYPE_NAME=STRING
SNP$CRORDER=1
SNP$CRLENGTH=1000
SNP$CRPRECISION=1000
SNP$CR
$$SNPS_END_KEY*/
Now, I updated the code slightly: I removed the C1_C1 as it is not really required, and I also updated the section
$CRLOAD_FILE=E:\FileStore\/enterr.log
to
$CRLOAD_FILE=<%=odiRef.getSchemaName("D")%>\enterr.log
I updated it to use the ODI API command getSchemaName; this converts the current schema details into the full path at runtime, which is good practice because if the directory ever changes you only have to change the location in the Topology Manager.
So the final code is
select C1
from TABLE
/*$$SNPS_START_KEY
SNP$CRDWG_TABLE
SNP$CRTABLE_NAME=ERROR_LOG
SNP$CRLOAD_FILE=<%=odiRef.getSchemaName("D")%>\enterr.log
SNP$CRFILE_FORMAT=D
SNP$CRFILE_SEP_FIELD=09
SNP$CRFILE_SEP_LINE=0D0A
SNP$CRFILE_FIRST_ROW=0
SNP$CRFILE_ENC_FIELD=
SNP$CRFILE_DEC_SEP=
SNP$CR
SNP$CRDWG_COL
SNP$CRCOL_NAME=C1
SNP$CRTYPE_NAME=STRING
SNP$CRORDER=1
SNP$CRLENGTH=1000
SNP$CRPRECISION=1000
SNP$CR
$$SNPS_END_KEY*/
Just to recap, this refreshing variable will try to read in the information from the file; it will either fail if there is no data or succeed if there is data in the file.
Now we can put this into a package like the earlier example
Then just add steps to the success/failure of the refreshing variable.
So there we have it – two examples of error handling when using the Hyperion KMs. Hopefully you have understood what I have tried to explain and found it useful; I know it can get a little confusing at times. If you want it explained further then just get in touch.
Until next time!!!
Sunday, 20 September 2009
ODI Series – Problem with latest Essbase patch release.
Just a quick update: it seems there is an issue with the recent patch release 10.1.3.5.2_02 (the one-off Essbase patch that resolves the painful issue of ODI loading data to Essbase record by record when it encounters a rejected record).
It is certainly true that it does fix the data loading problem but it seems to have spawned a new issue around the error logging.
Before the patch was applied and a rejected record was encountered then the error log would display…
Sorry if the image is small but the log says :-
Year,Measures,Product,Market,Scenario,Data,Error_Reason
'Mar','Sales','100-10','New York1','Actual','720','Cannot end dataload. Analytic Server Error(1003014): Unknown Member [New York1] in Data Load [1] Records Completed'
After the patch has been applied loading the same data produces the following error log.
Year,Measures,Product,Market,Scenario,Data,Error_Reason
'Mar','Sales','100-10','New York1','Actual','720','Loading failed with error [3303]'
You do get the actual record that was rejected but not the member; this is fine if only a couple of records are rejected, but if you have a large number of rejections and need to process them then it becomes a real pain.
This bug has been logged with Oracle (Bug 8912703 - AFTER PATCH 10.1.2.3.5.2_02 REJECTED MEMBER INFORMATION IS NOT COMPLETE) so it will be fixed.
I will keep you informed once I hear any further information on it.
Sunday, 6 September 2009
Using Lifecycle Management (LCM) with ODI
It has been a while since my last blog; lately I have not had much chance to sit down and dedicate the time required to research and write a detailed post, but here goes.
I have been meaning to write a blog on combining the benefits of LCM (Hyperion version 11) and ODI for a while now. The beauty of LCM is that you can automate loading and extracting of objects from most aspects of the EPM suite of products, and this gave me an idea of how to incorporate it with ODI.
Take, for example, the loading of cell text into a Planning application; in the past this has always been a pain, but with LCM you can load cell text easily. Say you have a CSV template with the cell text and you wish to automate loading it into a Planning application – to use LCM you would have to format it into the correct XML file format. I want to go down the route of just supplying the CSV file and not messing around with having to format it into XML; this is where ODI comes into play.
I am going to break this down into sections and provide the usual step by step process, this example is just looking at cell text but as LCM uses the same principle you could easily use the concept for extracting/loading other information.
I am going to use everybody’s favorite sample planning application to add a line of cell text.
We now need to use LCM to export cell text so we can have a look at the output XML file. This is done by logging into Shared Services, expanding Application Groups and then the group you hold the Planning application against (which in my case is called Planning), and selecting the Planning application, which will open the LCM artifact list; expand Relational Data and select "Cell Texts".
Define the migration and set the destination as file and give it a name.
Once this has been executed it will create an xml file on the server, usually in the directory \hyperion\common\import_export\<account migration run against>\<migration name>\....
If you open the xml file you will see the format LCM uses.
Now what I want to do is provide the following csv file.
Then automatically load it into the sample planning application.
ODI is pretty good at handling XML formats and I have touched on using it in the past; it has the ability to reverse engineer an XML file and generate DataStores based on the XML structure, and you can then load information from the file or generate a freshly populated XML file.
The first thing to set up is the connection details to the XML file in the topology manager.
Right click XML and “Insert Data Server”
Within the JDBC section select the “Sunopsis JDBC Driver for XML” driver
In the URL you need to provide the details of the XML file. The XML file output from LCM was named "Cell Texts.xml"; I moved this file and renamed it to remove the space in the filename, as I never like having spaces in file names – it can sometimes cause problems.
The format to point to a file is :- jdbc:snps:xml?f=
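As an illustration, the URL ends up looking something like this (the drive, path and renamed file name here are assumptions for the example):

jdbc:snps:xml?f=E:\FileStore\CellTexts.xml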
You can use the “Test” button to make sure ODI can connect to the file.
By default ODI uses its memory engine to store the relational schema mapping of the XML structure. This is fine in most cases, though if you had a large, complex XML file it could run into problems; it is also possible to store this information in an external database, and I am going to go down the database route as it is easier to show what is going on behind the scenes.
To use an external database you can specify all the parameters in a properties file; an example properties file is in \OraHome_1\oracledi\demo\xml\properties. The properties file will need to exist in a location that ODI can reference, so if you are using a local agent it will be fine in the drivers directory; if you are using an agent service in Windows then you will have to update the wrapper configuration file, which I will go through shortly.
Basically you can use the four parameters I have highlighted, as these are mandatory; you will just need to update the driver/URL to match your environment. The schema relates to the Oracle schema, which is the same as the username, and the password can be encoded by running the following command from the oracledi\drivers directory.
java -cp snpsxmlo.jar com.sunopsis.jdbc.driver.xml.SnpsXmlEncoder password
Descriptions for each of the parameters are available in the documentation :- OraHome_1\oracledi\doc\webhelp\xml\index.htm
To reference the properties file, the filename minus the .properties extension needs to be added to the URL in the data server configuration; the parameter is dp_props.
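So, continuing the illustrative example above, if the properties file was named xmlCellText.properties (a hypothetical name), the URL would end up something like:

jdbc:snps:xml?f=E:\FileStore\CellTexts.xml&dp_props=xmlCellText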
Once you apply the Data Server settings a physical schema window will open and you should be able to select schemas from the dropdown.
Then it is just a matter of applying it to a context and giving it a logical schema name.
As I said earlier, the properties file needs to be in a location that ODI can reference, e.g. \oracledi\drivers, which is fine if you are using a local agent. I am using an agent created as a Windows service, so the file OraHome_1\oracledi\tools\wrapper\conf\snpsagent.conf requires an extra line in the classpath section.
Once the file has been updated, the agent Windows service requires restarting to pick up the reference to the file.
This completes the configuration and we can move on to the designer to reverse the xml file into a model.
Create a new model, set the technology as XML and pick the logical schema that was created.
In the Reverse section just select the correct context and you are ready to reverse.
The structure of the XML file will be converted into DataStores; you can view how the DataStores relate to each other by expanding Hierarchy.
If you have a look in the repository you will see that a number of tables have been created, most of them relating to the DataStores.
The table SNPSRDBIDGEN holds all the table names
The other tables relate to the different elements of the XML file; for instance DIMMAPPING holds all the dimension mapping information. If you look at the XML file and the database table you will get an understanding of how it all hangs together.
We don't really have to mess around with the above table as it is going to stay static; the table we are interested in is CELLT_CELLNOTE, as this holds the cell text information.
The XML file has the following information
The database table contains
Most of the information is self-explanatory; the extra columns are just IDs that link to the other tables. If you are loading information against one plan type then the IDs will stay pretty static, and the only information that needs updating is the dimension intersection for the cell text and the actual cell text value.
If you need to get a better feel for how the tables are populated, it might be worth entering more cell text against different plan types and then following the LCM process of exporting to XML. Once you have the XML file in place it is possible to use a command to read the file and load it straight into the repository tables.
This is done by creating an ODI procedure.
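In the procedure, the command is just a single statement executed against the XML logical schema (with the technology of the step set to XML). A minimal sketch – I believe the driver command for this direction is SYNCHRONIZE FROM FILE, but check the XML driver documentation referenced earlier if it complains:

SYNCHRONIZE FROM FILE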
Executing the procedure will load the information from the XML file directly into the repository tables.
So basically what I need to do is populate the CELLT_CELLNOTE table with the information from the CSV file that holds the cell text; this can be achieved with a simple file-to-SQL interface.
I am not going to go over how to set up the file information in ODI as it has been covered in the past; if you are unclear, read back through the early days of my blog. Quick overview: set up a File Data Server in the Topology Manager or use an existing one, create a new DataStore against a File Model, reverse the DataStore and you should end up with something like :-
The table CELLT_CELLNOTE will also need to be reversed into a DataStore.
Ok now to create an interface to load the CSV information into the database table.
Mapping the interface is just a matter of comparing the records in the db table CELLT_CELLNOTE and assigning each target column against a dimension in the source, e.g. DIM1 has Actual as the value so this maps to Scenario.
After the dimension columns have been mapped it is just a case of hard coding the numeric values; for instance DIM1ORDER is 0 in the db table and will stay static, so it can be hard coded with that value in the target, and this is the case for all the other ORDER columns. The only element that could possibly change is PLANTYPEFK, but as this example is against one plan type it will always be a static value of 0.
In the flow option truncate is used to clear down the table before loading the new records.
After executing the interface you can see the records from the CSV file have been populated into the CELLT_CELLNOTE database table.
Ok, so now we have the values we want in the cell note table and all the other tables are static so don't need touching; what we need to do now is recreate the XML file with the new information, and this can again be done using an ODI procedure.
This time the command SYNCHRONIZE FROM DATABASE is used, which basically dumps the information from the database tables and recreates the XML file based on that information.
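Again this is just a single command in the procedure step, executed against the XML logical schema:

SYNCHRONIZE FROM DATABASE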
The XML file has been recreated and populated with the information from the original CSV file, now all that is left is to use LCM again to load it back into planning.
I am going to do it manually for today, so that means renaming the XML file back to "Cell Texts.xml" and moving it back to the LCM file directory.
In Shared Services it is just a matter of selecting the same migration name under Application Groups > File System
Run through the options and execute the migration.
And there you have it! Cell text that originated from a simple CSV file loaded straight into Planning. This concept can be used for any of the areas of LCM – you are not restricted just to cell text.
I must point out that if you are going to follow this example, use "NoSegment" and not "No Segment" as I have shown, because the load will fail; you can't use aliases with LCM, it needs to be the member name. I am too lazy to go back and change the images.
Next time I will go through automating this process and touch on using the LCM API via ODI to execute the LCM import.
I hope you have found this useful, until next time!!