Tuesday, 1 August 2017

FDMEE – Custom Scheduler

If you have ever had to work with the scheduler in the FDMEE UI then you will understand that it is far from perfect.

The main issue I find is that the UI doesn’t have all the scheduling options that are available through ODI Studio; if you were not aware, the scheduling defined in FDMEE is handled by ODI.

It is not possible to view the active schedules through the FDMEE UI and there is no option to delete individual schedules as it is an all or nothing situation.

I am sure these irritations will improve over time and yes there are workarounds like using the Studio but that is not a viable solution for everyone.

So on to the reason for this post: I was challenged to come up with a scheduling solution in on-premises FDMEE that met the following criteria:
  • Ability to schedule batches.
  • Define a start and end date and repeat every x minutes/hours.
  • Update existing scheduled batches.
  • Delete individual batches.
  • Run from the UI or from a batch script which can be run anywhere.
  • No access to ODI Studio (ODI Console does not have scheduling functionality).
Let us go through the criteria to see what is currently possible in FDMEE. Scheduling batches through the UI is no problem.


It is possible to schedule metadata and data load rules, batches and custom scripts.

On to the next requirement, and this is where the UI just doesn’t cut it; if you select schedule you are presented with the following options:


You can certainly set a start date and time but not an end date, also there are no repetition options.

If you compare the options to the ones available through the ODI Studio you will see FDMEE is lacking.


ODI has several repetition options which are not available in FDMEE.


This means the FDMEE UI does not meet the scheduler requirements and the use of ODI Studio is not an option, so how about a custom solution?

Well, ODI has a Java API which provides the ability to replicate pretty much anything you can do in the Studio, and FDMEE has jython scripting, so combine the two and things are looking up.

The good news is FDMEE has direct access to the ODI Java API so there is no need to mess around with jar files, just write some code and off you go.

I think it is probably best to run through the end solution and then break it down to show how it was done.

Three jython scripts were created and registered as part of the custom scheduler; these create, update and delete batch schedules.


If I execute the “Create Batch Schedule” script a popup window is displayed where the schedule can be defined.


The “Batch to schedule” parameter has a list box that returns the available batches.


The parameters that appear in the execute script window are defined in Custom Script Registration.


So when the custom script is run, the popup window is displayed with the parameters that have been defined above.

You will see that “Batch to schedule” has been set to the query type, which means it will run a SQL query; the SQL query is created in Query Definition.


The SQL query returns batches from the FDMEE database repository table AIF_BATCHES, this query is run when the lookup icon is selected in the execute script window.

After running the script and populating the parameter values it would look something like:


In the above example, the batch “OpenBatchDemo” will be scheduled to run every 30 minutes starting on the 1st August at 9am until the 5th August at 11pm.

The values are passed into the jython script and if the script runs successfully a custom message is displayed.


The process logs should also display the status of executing the custom script.


The FDMEE log associated with the process contains the schedule definition, this is handled by the jython script.


I will use the Studio to demonstrate what has happened on the ODI side, the scenario “COMM_EXECUTE_BATCH” which is the one FDMEE uses to run batches has had a new schedule created.


The schedule matches the parameter values defined in the FDMEE UI, though it does need an explanation of why it is created in this way.


Looking at the schedule you would think that it should have been created using “Active for the period” with a starting and ending date set; unfortunately, as I found out the hard way, that is not the way ODI operates when repetitions are involved.

When using repetitions the status in the definition tab should be set as “Active” and then the start date/time set as part of Execution like above.

The remaining settings are defined in the “Execution Cycle” tab.


Instead of setting an end date/time, a calculation needs to be performed to work out the difference between the start and end dates; this value is used in “Maximum Cycle Duration”.

The time between each repetition is set as “Interval between Repetitions”; it is worth mentioning that ODI only starts counting the interval after a session completes, to stop sessions overlapping.

Say a scheduled batch starts at 9:00am and has a repetition interval of 30 minutes; if the batch takes 10 minutes to complete, the next scheduled run will not take place until 9:40am, not at 9:30am as you might expect.
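
That behaviour can be sketched with a simple calculation; the function below is just my own illustration of how ODI spaces out repetitions, not part of any API:

```python
def next_run(start_minute, run_minutes, interval_minutes):
    # ODI starts counting the repetition interval only after the
    # session completes, so the next run is start + duration + interval.
    return start_minute + run_minutes + interval_minutes

# Batch starts at 9:00 (minute 540 of the day), takes 10 minutes to
# complete, repetition interval is 30 minutes.
minute = next_run(540, 10, 30)
print(divmod(minute, 60))  # (9, 40) -> 9:40am, not 9:30am
```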

If there is a strict time of when the repetitions need to start, then this option is not for you and alternatives need to be looked at.

In the variables tab of the schedule the required values including the batch name will have been set.


The creation of the schedule is all handled by the custom jython script and the ODI Java API.

I am not going to go through the jython script line by line as it would probably bore you to death, and it is much better to write your own scripts in your own style instead of copying them, as you learn much more that way.

I do recommend looking through the ODI Java API documentation as that holds the key to working with the scheduler.

Just to recap: when the custom script is run in FDMEE, a window is displayed where parameter values are entered; these parameters match those defined in the custom script registration.


The first part of the jython script retrieves and stores the values passed in from FDMEE when the custom script is executed; this is done using the API method

getCustomScriptParameterValue(<parameter_name>)

The start date values are converted into a Java calendar which is a requirement for the ODI API.

The time in minutes between the start and end dates is calculated, which will be used to define how long the schedule will be active for.
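
As a rough sketch, the duration calculation might look something like the following. The parameter name and date format are my own assumptions, and in the real script the values come from fdmAPI and the dates go into a java.util.Calendar for the ODI API; plain Python datetime is used here so the arithmetic is easy to follow:

```python
from datetime import datetime

def schedule_minutes(start_str, end_str, fmt="%d/%m/%Y %H:%M"):
    # Work out how long the schedule should stay active, in minutes;
    # this value feeds the "Maximum Cycle Duration" of the ODI schedule.
    start = datetime.strptime(start_str, fmt)
    end = datetime.strptime(end_str, fmt)
    return int((end - start).total_seconds() // 60)

# In the custom script the raw values would be retrieved with e.g.
#   startDate = fdmAPI.getCustomScriptParameterValue("START_DATE")
# Using the example schedule: 1st August 9am until 5th August 11pm.
print(schedule_minutes("01/08/2017 09:00", "05/08/2017 23:00"))  # 6600
```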


The next section of the script deals with the connection to the ODI database repository; this information is retrieved using the FDMEE API method

getProfileOptionValue(<option_name>,<application_id>,<username>)


If you didn’t know, the ODI profile option values are located within system settings.


The values are stored in an FDMEE database table called AIF_PROFILE_OPTION_VALUES and by using the getProfileOptionValue method these values can be accessed.
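
A sketch of how that retrieval might look is below; the helper function and the option names are my own assumptions for illustration, so the real internal option names should be checked against the AIF_PROFILE_OPTION_VALUES table:

```python
def get_odi_connection_details(fdmAPI, app_id, user):
    # Pull the ODI repository connection settings that FDMEE stores in
    # AIF_PROFILE_OPTION_VALUES, using the documented API method.
    # NOTE: these option names are assumptions, not verified values.
    options = ["ODI_USER_NAME", "ODI_PASSWORD",
               "ODI_MASTER_REPOSITORY_URL", "ODI_MASTER_REPOSITORY_DRIVER",
               "ODI_WORK_REPOSITORY"]
    return dict((opt, fdmAPI.getProfileOptionValue(opt, app_id, user))
                for opt in options)
```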


Next the name of the ODI scenario which will be scheduled is defined; there are three variables that are added to the scheduled scenario.


As explained earlier, FDMEE uses the “COMM_EXECUTE_BATCH” scenario to run batches; the version is “001”.


The variables that are required for the scheduling of the batch scenario are the user that will run the batch, the name of the batch and the URL for the FDMEE web application, for example


The remaining part of the script deals with the creation of the schedule and to summarise:
  • The “COMM_EXECUTE_BATCH” scenario is located and stored.
  • A new schedule is created against the scenario.
  • The status of the schedule is set as active.
  • The start date, repetition interval and the duration of the schedule are set.
  • The scenario variables values are defined.
  • The information is then committed and the ODI agent schedule updated.
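
To give a feel for the shape of that code, here is a heavily abridged jython sketch. The repository connection and scenario finder classes are from the ODI SDK, but treat the whole thing as an outline under my own assumptions: the constructor arguments (for example, the password may need to be a char array) and the scheduling calls, which are deliberately left as comments, should be verified against the ODI Java API documentation.

```python
def create_batch_schedule(repo_info, start_cal, duration_mins,
                          interval_mins, variables):
    # ODI SDK imports are kept inside the function as the SDK jars are
    # only available on the FDMEE classpath, not in a plain interpreter.
    from oracle.odi.core import OdiInstance
    from oracle.odi.core.config import (OdiInstanceConfig,
                                        MasterRepositoryDbInfo,
                                        WorkRepositoryDbInfo,
                                        PoolingAttributes)
    from oracle.odi.domain.runtime.scenario import OdiScenario

    # Connect to the master/work repositories using the connection
    # details retrieved from FDMEE system settings.
    master = MasterRepositoryDbInfo(repo_info["url"], repo_info["driver"],
                                    repo_info["schema_user"],
                                    repo_info["schema_pwd"],
                                    PoolingAttributes())
    work = WorkRepositoryDbInfo(repo_info["work_repo"], PoolingAttributes())
    odi = OdiInstance.createInstance(OdiInstanceConfig(master, work))

    # Locate the scenario FDMEE uses to run batches.
    finder = odi.getTransactionalEntityManager().getFinder(OdiScenario)
    scenario = finder.findLatestByName("COMM_EXECUTE_BATCH")

    # From here the ODI SDK scheduling classes are used to create a new
    # schedule against the scenario: set the status to active, apply the
    # start date/time, repetition interval and maximum cycle duration,
    # set the scenario variable values, then commit and update the agent
    # schedule. (Method names omitted; check the ODI SDK Javadoc.)
    return scenario
```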

So now we have a custom schedule that has been simply created from the FDMEE UI, what about updating, deleting and viewing active schedules or carrying out these activities from outside of the UI? Well I think I have covered enough for now so look out for a follow-up post.

Sunday, 9 July 2017

EPM Cloud - Clearing data with EPM Automate and REST

There have been a few new additions to the EPM Cloud 17.07 release that I thought might be worth quickly running through.

There is a new version of the EPM Automate utility which includes a new command for clearing data, the new features and announcements document provides the following insight:

“A new version of the EPM Automate Utility is available with this update. This version includes the clearcube command, which uses the settings specified in a Planning job of type clear cube to delete specific data from input and reporting cubes.”

To coincide with this there is also the option to clear data using the REST API which I will get on to later.

There have also been some changes for clearing data in the UI, the documentation states:

“The Clear Cube job options have been updated to enable you to clear both Essbase and relational data. Previously you could only clear Essbase data. To view the updated Clear Cube job options, click Application, then Overview. From the Actions menu, select Clear Cube. On the Clear Cube page, click Create, complete the job name, select the cube, then select from the clear options displayed. Oracle has also provided two modes for clearing partial data for aggregate storage cubes. Selecting Partial Data (Basic Mode) enables you to use the member selector to specify the regions to be cleared. Selecting Partial Data (Advanced Mode) enables you to clear Essbase data by using an MDX expression in the text box provided.”

So basically relational data can now be cleared as part of the clear cube job and the options for clearing ASO data have been split into a basic and advanced mode.

Before the July release the options in clear cube for BSO were:


The options for an ASO cube were:


If Partial Data was selected then you could choose whether it was going to be a logical clear:

“In which the input cells in the specified region are written to a new data slice with negative, compensating values that result in a value of zero for the cells you want to clear. The process for logically clearing data completes in a length of time that is proportional to the size of the data being cleared. Because compensating cells are created, this option increases the size of the database.”

or a physical clear:

“In which the input cells in the specified region are physically removed from the aggregate storage database. The process for physically clearing data completes in a length of time that is proportional to the size of the input data, not the size of the data being cleared. Therefore, you might typically use this method only when you need to remove large slices of data.”

By using the member select you could define the area of the cube which you wanted to clear.


This would automatically generate MDX, which was not always the best, but at least it could be manually updated.


One of the problems with the clear cube data jobs was that if you had any relational type data like comments, supporting detail or attachments there was no option to clear them.

For example if I had the following data in a form which has a comment attached to a cell.


If the clear cube job was run the data would be removed but the comments would still remain.


Ok, so you could go to clear cell details from the navigator but that would mean defining the POV twice.

Moving swiftly on to the July release, let us see what changes have happened.

If you create a clear cube job you will see there are now clear options where you can select from Supporting Details, Comments, Attachments and Essbase Data.


For ASO cubes, if Partial Data is selected then you have the new basic or advanced mode for defining the area of the database you would like to clear.

Using the Basic mode you can use the member selector to define where you want to clear data.


In the advanced mode you have to paste the MDX directly into the input box; if you select Basic mode first, define the POV, and then switch to Advanced, you will see the MDX that is going to be executed.


If you do switch to Advanced mode the relational type clear options will be deselected and unavailable, so watch out for that.

If you start with Advanced mode and enter an MDX statement:


Then switch to Basic mode and the dimension and members section will be blank, so it works going from Basic to Advanced but not the other way around, which is understandable: it could be a complex MDX statement, and it would be asking a little too much for it to be converted into the basic format.


It is possible to select any combination of the clear options; for example, you might only want to clear comments and not Essbase data.


Let us take a quick example with a form that has data and a comment.



After selecting only comments in the clear options and running the job, only the comments are deleted.


Before the July release, once you had created a clear cube job the only way to run it was by submitting it from the UI or setting up a schedule; what was missing was the ability to run the clear from outside of the UI or a predefined schedule.

At last this is now possible: starting with the 17.07 EPM Automate utility there is a new command called “clearcube”.


The new command is extremely basic and will only run a clear job already defined in the UI. When I heard there was going to be a new command I was hoping it would be possible to set up the various clear options and even pass in the MDX using command parameters, but unfortunately that is not the case; maybe that will happen in the future, or maybe I am just asking for too much.

The command usage is:

epmautomate clearcube JOB_NAME

where JOB_NAME is the name of a clear cube job defined in the application.

Time for another example and back to the same form with data and a comment.


The clear data job has been named “Clear Demo” and created with the following options:


Using the EPM Automate utility is as simple as issuing the command:

epmautomate clearcube “Clear Demo”



The command completed successfully and checking the jobs section in the UI we can see the job has been run.


Clicking on the job provides details on the options that have been executed as part of the clear.


Going back to the form, the Essbase data and comments have been cleared.


As the command to clear data is available in the EPM Automate utility this means it will also be available through the REST API.

If you have ever used the REST API to execute a job then it will be very familiar, because it is the same resource, just with a slightly different payload.

The REST resource requires a POST to the URL format:

https://<cloud_instance>/HyperionPlanning/rest/{api_version}/applications/{application}/jobs

The payload will be in JSON and should include

jobType = CLEAR_CUBE

jobName = Clear cube job name

In my example this would equate to:

{  
   "jobType":"CLEAR_CUBE",
   "jobName":"Clear Demo"
}

Before testing out the REST resource I updated the form to include supporting detail.


The clear cube job was updated to include supporting details.


Using a REST client I can make a POST to the resource with a payload to run the “Clear Demo” clear cube job.


The response includes much more information than the EPM Automate utility; it includes details of the clear options that have been run as part of the job.


The response also includes a URL, so a GET request can be made to return information about the job; this is useful if the job is in a running state, as it can be repeatedly checked until it has completed.


The response contains status information showing the job completed successfully; a check of the form confirms the data, comments and supporting detail have been removed.


Finally, we can turn to the scripting world; before creating a script I updated the form to include an attachment.


The clear options for the job were updated to include attachments.


I created a simple script to make a POST to the jobs REST resource including in the payload the job type and job name.
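
My actual script is not shown here, so below is a minimal stand-alone sketch in Python using only the standard library. The instance URL, application name and credentials are placeholders, and the v3 API version and basic authentication are my assumptions, so adjust for your environment:

```python
import base64
import json
import urllib.request

def build_job_request(base_url, app, job_name):
    # The jobs resource URL and the CLEAR_CUBE payload from the post.
    url = "%s/HyperionPlanning/rest/v3/applications/%s/jobs" % (base_url, app)
    payload = {"jobType": "CLEAR_CUBE", "jobName": job_name}
    return url, payload

def run_clear_cube_job(base_url, user, password, app, job_name):
    # POST to the jobs resource; the clear cube job itself must already
    # be defined in the application.
    url, payload = build_job_request(base_url, app, job_name)
    auth = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    req = urllib.request.Request(
        url, data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": "Basic " + auth})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example with placeholder values:
# info = run_clear_cube_job("https://cloud_instance", "user", "password",
#                           "Demo", "Clear Demo")
```

The JSON returned by the POST can then be inspected for the job status, or the URL it contains polled with a GET request until the job completes.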


The response confirms the job was successfully completed and a check of the form shows the data, comment, supporting details and attachment have been cleared.


Well, that completes a look at the new clear cube functionality; hopefully you have found it useful, until next time.

Saturday, 1 July 2017

Change to Essbase forces auto shutdown of applications after being idle

I noticed a new configuration setting that appeared in the 11.1.2.4.018 Patch Set Update for Essbase, the readme has the following under the defect fixes and documentation update sections:

New Configuration Setting SVRIdleTime

A new setting has been added that will allow an application to auto shutdown after staying idle for n minutes. The syntax of the new configuration setting is


SVRIdleTime n


where n is an integer value of 3 to 20160 (in minutes). The feature will be turned off if n is set to 0 (zero); the default is on and set to 15 minutes. (26050466)


So basically, once you install this patch, any Essbase application that has been idle for 15 minutes will automatically shut down. I can't say I am impressed with the way this has been forced on as the default; there could be many reasons why you don’t want your applications to shut down after being idle for 15 minutes, so surely it would have been better to have the default as off and then enable it if needed.

Ok, you can change the idle time, or disable the functionality by adding the setting to the cfg with a value of zero, but it is something you might not be aware of when your system is patched, until you suddenly notice applications shutting down.

Anyway, I have patched Essbase and not added the new cfg setting so let us see if it is definitely being enforced.

[22:04:36 2017]Local/ESSBASE0///6264/Info(1056815)
Essbase 64-bit  - Release 11.1.2 (ESB11.1.2.4.018B016)

Start up everybody’s favourite Sample application and leave idle.

[22:05:03 2017]Local/Sample///6876/Info(1002035)
Starting Essbase Server - Application [Sample]

Wait 15 minutes and check the application log again.

[22:20:14 2017]Local/Sample///8376/Info(1013437)
Unloading application [Sample] after staying idle for 15 minutes

[22:20:14 2017]Local/Sample///8376/Info(1013399)
Canceling current requests.

[22:20:14 2017]Local/Sample///8376/Info(1013207)
RECEIVED SHUTDOWN COMMAND - SERVER TERMINATING

That confirms the new feature is being enforced by default.

Let us see if the default can be changed by adding the setting with a different value than the default.

SVRIdleTime 10

The Sample application was restarted and left idle.

[22:37:06 2017]Local/Sample///11352/Info(1013437)
Unloading application [Sample] after staying idle for 10 minutes

[22:37:06 2017]Local/Sample///11352/Info(1013399)
Canceling current requests.

[22:37:06 2017]Local/Sample///11352/Info(1013207)
RECEIVED SHUTDOWN COMMAND - SERVER TERMINATING

The setting is being picked up and applied correctly; it also confirms that only the application, and not the Essbase agent, needs restarting after adding or changing the setting.

I did check to see if it is possible to define at application level.

SVRIdleTime Sample 5

After restarting the application, the following error was generated in the application log.

[23:02:52 2017]Local/Sample///9256/Warning(1002134)
Invalid or Obsolete Configuration Setting - essbase.cfg(73): "SVRIdleTime Sample 5"

As there is no mention of it in the readme and because an error is being generated it looks like it is not possible to define the idle time at application level.

I also tested the setting with a value of zero and it does disable the auto shutdown on idle feature.

SVRIdleTime 0

So something to be aware of if you patch Essbase with 11.1.2.4.018+.

Update 20/09/17 - Oracle have seen sense and changed the default idle time from 15 minutes to 120 minutes, the new default will apply from 11.1.2.4.020+ 

"26682267 - The default for the Essbase Configuration Setting SVRIdleTime has been increased from 15 minutes to 120 minutes as some operations would fail due to the system being idle."


Update 18/10/17 - Yet another update: from 11.1.2.4.021+ the default setting has been changed to 0, which means it is off; it looks like the setting can possibly cause corruption in ASO applications.

"26850650 - In some cases, when using the configuration setting, SVRIDLETIME, can cause an Aggregate Storage Application to become corrupt.  The default setting is now 0, which is off."

Update 11/01/18 - Just when you thought there would be no more updates to this setting, it now seems to have been disabled.

"27013042 - SVRIDLETIME has been disabled in Patchset Update 11.1.2.4.022."