Saturday, 31 December 2016

FDMEE and the REST is up to you – Part 1

I have previously written a couple of posts around FDMEE Web Services, and at the time of writing them the on-premise options did not include any RESTful services.

The first part covered the Web Services available in FDMEE using the SOAP protocol, which requires the arduous task of configuring Oracle Web Services Manager (OWSM). The amount of effort involved has never made this a viable option, and personally I prefer REST over SOAP for ease of use.

The post also covered an alternative way of executing processes in FDMEE by calling a Java Servlet which behind the scenes is basically the same method the command batch scripts use.

In the second part I went through the REST resources available at the time for Data Management in PBCS.

With the release of FDMEE 11.1.2.4.210, many of the EPM Cloud features have finally been pushed down to on-premise. The readme for the FDMEE .210 PSU does not include any reference to the REST API, and the full documentation is yet to be released.

The good news is that the REST API has made its way down to on-premise and since I wrote the original piece on the cloud there have been some additional resources added to the API.

I feel this is a good opportunity to go through what is available in terms of the REST API for both on-premise and cloud.

In this first part I will show how simple it is to execute data load rules and batches using the REST API. I did cover this previously for the cloud, and it is pretty much the same principle for on-premise, but I thought I would go over it again to clear up any confusion.

In the examples I will use a REST client (Boomerang for Chrome) and PowerShell for scripting. I have said this before, but I have only picked them because I find them easy to use and good for demonstration purposes. The beauty is that there are many different clients and scripting languages that can take advantage of REST, so just use the one you are most comfortable with.

The URL structure for accessing the REST resources is:

http(s)://<webserver>:<port>/aif/rest/<api_version>/<path>

The API version is currently V1 for both cloud and on-premise.

The path identifies the resource, and for running rules and batches the path is jobs.

To test that the resources are accessible using a REST client, a GET request can be made to:

http(s)://<webserver>:<port>/aif/rest/V1


Before sending, the header of the request will require authentication information containing the username and password of a provisioned FDMEE user.


The REST client automatically encodes the credentials into base64 and adds them as a basic authorization header.


After sending the request, a valid response should contain the REST API version information in JSON format.
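If you prefer to go straight to scripting, the same test can be made with a few lines of PowerShell. This is just a minimal sketch, and the server, port and credentials are placeholders that need replacing with your own.

# Minimal sketch: test the FDMEE REST API is accessible (placeholder server, port and credentials)
$restBase = "http://fdmeeserver:6550/aif/rest/V1"
$user = "admin"
$password = "password"

# encode the credentials and add them as a basic authorization header
$encoded = [System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes("${user}:${password}"))
$headers = @{"Authorization" = "Basic $encoded"}

# a GET request should return the REST API version information in JSON format
Invoke-RestMethod -Uri $restBase -Method Get -Headers $headers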


So let us go through an example of running an FDMEE data load rule.

I have the following data load rule which has a source file and target planning application.


Another of the new features in the .210 PSU is the ability to load non-numeric data; once again I have covered this in the cloud, but this is a good time to test out the functionality for on-premise FDMEE.

In the target options of the load rule there is the option to set the load method to “All Data Types” and the date format for date data.


For this example, I have created a Text, Date and Smart List member in the target planning application.


The source file has the three members and data of the equivalent data types.


There is also a form to verify that the non-numeric data has been loaded correctly.


Now that everything is in place I can look at using the REST resource to execute the load rule.

The resource uses a POST method and has a number of input parameters which are required to be in JSON format in the body of the request.

The parameters are:

jobType, which for load rules will always be “DATARULE”.

jobName will be the name of the data load rule.

startPeriod will be the first period for the data.

endPeriod will be the last period for the data.

importMode defines how the data is imported into FDMEE. The possible values are: APPEND, REPLACE, RECALCULATE, NONE

exportMode defines how the data is loaded into the target application, the possible values depend on the target application.

For planning these are STORE_DATA, ADD_DATA, SUBSTRACT_DATA, REPLACE_DATA, NONE

If your target application is HFM then this is where you hit an issue: as the REST resources have been pushed down from the cloud, and there is no HFM in the cloud, not all of the export options are available. Well, I think they are not there; I will update this post if I find them or if anything changes.

The available values are REPLACE, MERGE, NONE.

This means that accumulate and replace by security are not there. If you need to use those export values then, as a workaround, you can create a batch in FDMEE and use the REST resource to run the batch.

fileName (optional): if not supplied, the file defined in the data load rule will be used.

For the data load rule I am going to execute, the parameters translate into the following:
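To give an idea of the structure, the JSON body takes a form along these lines; the rule name, periods and modes below are only examples, not my actual rule.

{
   "jobType":"DATARULE",
   "jobName":"PLN_DataLoad",
   "startPeriod":"Jan-2017",
   "endPeriod":"Jan-2017",
   "importMode":"REPLACE",
   "exportMode":"STORE_DATA"
}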


The request can be sent and a response will be returned with parameters and values in JSON format which provide information about the job.


The jobID parameter is equivalent to the process ID in FDMEE.

jobStatus will contain one of “RUNNING”, “SUCCESS” or “FAILED”.

status will be one of the following values: -1 = in progress; 0 = success; 1 = error; 2 = cancel pending; 3 = cancelled; 4 = invalid parameter

The job status has been returned as running; to keep checking the status, a GET request can be made to the URL held in the href parameter.


The job status has now been returned as “SUCCESS” so the full process is complete.

If the job ID is removed from the URL then all job statuses will be returned.

If you take a look in process details in the FDMEE UI then you can see that the process ID and job ID do match and the status is the same.


Back to my form in planning and the non-numeric data has been loaded correctly, so that is two pieces of new functionality tested in one go.


Converting this into a script is a simple task; with not much effort and a small amount of code, an FDMEE data load rule can be executed.
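As a rough sketch in PowerShell (the URL, credentials, rule name and periods are all placeholders), executing a rule could look something like this:

# Sketch: execute an FDMEE data load rule through the REST API (placeholder values throughout)
$jobsUrl = "http://fdmeeserver:6550/aif/rest/V1/jobs"
$user = "admin"
$password = "password"
$encoded = [System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes("${user}:${password}"))
$headers = @{"Authorization" = "Basic $encoded"}

# build the job parameters and convert them to JSON for the body of the POST
$body = @{
   jobType     = "DATARULE"
   jobName     = "PLN_DataLoad"
   startPeriod = "Jan-2017"
   endPeriod   = "Jan-2017"
   importMode  = "REPLACE"
   exportMode  = "STORE_DATA"
} | ConvertTo-Json

# execute the rule and display the job information returned
$response = Invoke-RestMethod -Uri $jobsUrl -Method Post -Headers $headers -Body $body -ContentType "application/json"
$response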


To keep checking the status the URL in the returned href parameter can be stored and then a request made using the URL.
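Carrying on from the sketch above, the polling could be as simple as:

# Sketch: keep checking the job status using the URL returned in the href parameter
$statusUrl = $response.href
do {
   Start-Sleep -Seconds 10
   $status = Invoke-RestMethod -Uri $statusUrl -Method Get -Headers $headers
   Write-Host "Current job status:" $status.jobStatus
} while ($status.jobStatus -eq "RUNNING")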


So now rules can be easily executed using the REST API.

I know I am going to get asked about uploading files to the FDMEE directory because it is possible using REST in the cloud. Currently this is not possible on-premise, as the upload/download functionality in the cloud uses the migration REST API and this has not made it down to on-premise. I am not sure if it will or not because of the difference between the directory structure in the cloud compared to on-premise; we will see.

Really it shouldn’t be too much of a problem with on-premise though, as the FDMEE directories are on a file system, so there are many ways to transfer files using scripting languages.

Moving on to the next REST resource, which is the ability to run batches.

For my example I have the following batch created which just executes the same load rule as the previous example.


The REST resource for running batches is the same URL as for running data loads and once again with a POST method.

Only a couple of input parameters are required in the body of the request to the resource.

jobType, which for batches will always be “BATCH”.

jobName will be the name of the batch.

For my example this translates to the following request in the REST client:
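With a hypothetical batch name, the body is nothing more than:

{
   "jobType":"BATCH",
   "jobName":"LOAD_PLAN_DATA"
}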


Just like with the data rule, a response is returned containing all the job information.


The response will only contain the details of the main batch process and not the IDs of the jobs that are part of the batch. This is not really a problem, as you would only really be interested in the status of the overall batch process.


As the job ID of the batch is returned, and you will already know the order of the jobs, you could easily check the details of each of the jobs in the batch.


In terms of scripting, the code would be pretty much the same as for running a data load rule, the only difference being that fewer parameters are required in the body of the request.
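For example, reusing the URL and authorization header from the earlier sketch and a made-up batch name:

# Sketch: execute an FDMEE batch through the REST API (batch name is just an example)
$body = @{ jobType = "BATCH"; jobName = "LOAD_PLAN_DATA" } | ConvertTo-Json
$response = Invoke-RestMethod -Uri $jobsUrl -Method Post -Headers $headers -Body $body -ContentType "application/json"
$response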


Using a script is also a nice and simple way of monitoring the status of FDMEE processes without having to log into the UI.


One thing that would be beneficial that is not included in the REST responses is the start and end times of the processes; maybe Oracle will include them at some point in the future.

Right, I am going to leave it there for today and in the next part I will cover importing/exporting mappings and running reports using REST.

Monday, 19 December 2016

FDMEE/Data Management - loading data by multiple periods and years

Over the past six months there have been a number of changes in the way and the type of data you can load through Data Management in EPM Cloud. I am only referring to the cloud as these enhancements have not yet made it down to on-premise FDMEE.

Update 23rd December 2016 - All the functionality in this post is now available on-premise in FDMEE PSU 11.1.2.4.210

In the PBCS 16.07 update document there was the following new feature:
“Data Management added a new All Data Types Data Load method which enables you to load numbers, text, Smart lists, and date.”

In the 16.09 release there was:
“The text-based data load feature enables users to load text data, dates, and smart lists to applications that support these data types.”

You tell me what the difference is between 16.07 and 16.09 because they look the same to me :)

In the 16.10 release there was:
“Data Management now supports a multi-column load of numeric data for any dimension. Prior to this update, Data Management supported only the loading of multi-period column data. In addition, a multi-period load using the header record is now available.”

Basically I have covered the above functionality in a previous blog, which you can read all about here.

The previous post was aimed at loading text data but the same logic can be applied for numeric data, the only difference would be the file type selection in the import format.

If you look at the import format in the 16.07 release, you will see that “All Data Type” was introduced for fixed, delimited and multi-column source files.


At that point there was “Multi Period” available for numeric data, where you could only specify period columns in the import format mapping. Fast forward to the 16.10 release and you will see that it was replaced with “Multi Column – Numeric Data”.


Currently in the on-premise 11.1.2.4.200 FDMEE version you only have the following file type options:


I am sure this will change in the future, maybe when the .210 patch finally lands.

Update 23rd December 2016: Functionality now available on-premise with FDMEE 11.1.2.4.210

Import format file type options from FDMEE 11.1.2.4.210:



Moving on, there is now another new way of loading data which arrived in the 16.12 cloud release and FDMEE 11.1.2.4.210:

EPM Cloud update document:
“Data Management now supports Oracle Financial Consolidation and Close Cloud “Period” dimensions as columns in a data file. If you have data for multiple periods in a single file, then you can include the year and period on each row of the data file that gets loaded to the target application.”

FDMEE 11.1.2.4.210 readme:
"Data files can include data for multiple periods. To support this, columns for period, year and period number are available to add in the import format."

Notice that the functionality in the EPM Cloud update document is only specified for FCCS. How about PBCS? Well, I will get on to that later.

So now it should be possible to specify the period and year in the rows of a source data file. To test out this new piece of functionality, I will go through a simple data loading example.

I am going to start out with the following source file which has a column for period/year and multiple periods in the rows.
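Just to give an idea of the kind of layout (the members, periods and values here are made up and not a copy of my file), a file of this type might look something like:

Entity,Account,Year,Period,Amount
110,Sales,2017,Jan,1000
110,Sales,2017,Feb,1500
110,Sales,2017,Mar,2000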


The objective is to load this file to Data Management, map the data to valid members and then load the data to a target FCCS application.

I have created a form to verify whether the data has been correctly loaded.


In order to map the period/year in the source file to the target application I have created source period mappings and assigned them to a calendar which will be selected in the data load rule.


On to the import format; before the 16.12 release there were the following options available when selecting the Add button.


Now there is a new drop-down option of “Source Period Row” where Year, Period or Period Number can be added.


These will then be added to the import format grid and the field number can be mapped to the source file.


In the data load rule the import format was selected, the source file was uploaded and selected, the period mapping type was set to explicit, and the calendar that was created earlier was selected.


I am not going into the data load mappings as they were straightforward explicit mappings for account and entity members contained in the source file.

To load the data, you execute the rule from the data load rule screen as the workbench is currently fixed by the period/year that is set in the POV.

Even though my source file has data from January to December 2017 I am only going to select to load from January to June 2017.


Once the rule has been executed, process details confirm the load from source to target was successful.


In the process steps I have no idea why the months are all out of order, because if you take a look in the process log you will see they are processed in the correct order.


In the log you can also see that the rows that are not in between the start and end period are rejected in the data load.


In the target options of the data load rule I had selected not to purge the data file that is created by Data Management and then loaded into the target FCCS application.

It is possible to download the file from the outbox to view the format of the file that is loaded to the target application.

If you have ever been involved with on-premise Planning, then you will recognise that the file is in the Outline Load Utility format, and from the process log you can see that the utility is being used to load the data.


The form I created earlier is now correctly displaying the data that was loaded.


Using this new method certainly makes it much easier to load data across multiple periods and years.

In the 16.12 cloud update document, it was stated that this new functionality was only available for FCCS but I was interested to know if it is available in PBCS.

I created an import format against a PBCS application and the “Source Period Row” option was available.


As the option is available then this would suggest it is possible to load from a source file containing periods and years in the rows.

I had to test out whether the process would be successful against a PBCS application.

Once again I started out with a simple source file spanning multiple periods and years.


The period/year were added and mapped in the import format.


I am not going to show them but I created source period mappings, a location and data load mappings.

The data load rule was set up in the same way as in the previous example.


The rule was executed with a start and end period to match the source file contents.


The full process of importing, mapping and exporting to the target was successful.



For demonstration purposes I created a form to show that the data was correctly loaded to the target PBCS application.


So even though PBCS was not mentioned in the update documentation the functionality is there and it is possible to load from a source file where the periods and years are defined in the rows.

Update: This functionality can be used against EPM cloud and on-premise.

Sunday, 27 November 2016

Managing substitution variables using EPM Automate and REST

I was recently asked if it is possible to use the REST API in EPM Cloud to manage substitution variables. At the time there was no documentation advising that it was possible, but the clue was that it is available using the EPM Automate utility.

The rule I usually apply is that if it is available in the EPM Automate utility, then there should be an equivalent REST resource, as the utility is built on top of the REST APIs.

In the next release (16.12) there is a documented new feature -
“You can now use REST APIs to get and set substitution variables”

I am assuming this just means the REST API documentation is going to be updated as the resources are already there. (update: this is exactly what has happened)

In this post I am going to explore the options that are currently available with the EPM Automate utility and the REST API to manage substitution variables, and at the same time I am going to apply this to one of the many scenarios where this might be beneficial.

If you are not already aware, there is functionality in both cloud and on-premise to create a rolling forecast in the form designer. There is a possibility you have not seen this before, so I will quickly go through setting it up.

Once you drop the Year and Period dimensions into a column, the right-click option of “Rolling Forecast Setup” appears.


This will open the forecast setup window where you can define the prefix, start period/year and number of periods.

In my example I am going to create the forecast starting from Jan 2017 for 12 periods with a prefix of Roll


After generating, Essbase substitution variables will be created and the form design will be populated with them.


So in my example 12 period and year substitution variables have been created using the defined prefix.


Using the inbuilt functionality, the sub vars are always created at application level.

Opening the form displays twelve periods starting from Jan 2017.


To update the sub vars, say to shift a month forward, you can right-click the period or year in the form and select “Set Rolling Forecast Variables”.


This will open a window with the forecast variables and the current values.


If I shift the values by one, the variable values will be updated in the window.


Applying this will then update the Essbase sub vars and the form will now reflect this.


This is all well and good if you like going into the form and shifting the sub var values or manually updating them through the variables section in the user interface. I prefer to use automation rather than live in a world of manual repetitiveness, so this is where using either the EPM Automate utility or the REST API can help.

Let us start with the EPM Automate utility.

There are two commands currently available for managing sub vars through the utility:

setsubstvars - creates or updates one or more substitution variables at the Planning application or at cube level (added 16.04)

getsubstvar - enables you to view the current value of a substitution variable or the value of all variables at the cube or application level (added 16.11)

The syntax for setsubstvars is:

epmautomate setsubstvars CUBE_NAME|ALL SUBSTVAR=VALUE [SUBSTVAR=VALUE]

CUBE_NAME can be set as an individual cube name, or ALL can be used to set at application level; a single sub var or multiple sub vars can be created/updated in one command.

For example:

epmautomate setsubstvars ALL CurYear=FY16 CurPeriod=Dec

or at cube level

epmautomate setsubstvars Plan2 CurForecast=FY17

Let me take an example using the rolling forecast variables that were created earlier.


The variables "RollPer1" and "RollPer2" could be shifted forward a month by using the utility to set them to "Mar" and "Apr"

epmautomate setsubstvars ALL RollPer1=Mar RollPer2=Apr



To check the variables have been set then the getsubstvar command can be used.

The syntax for the command is

epmautomate getsubstvar CUBE_NAME|ALL [name=VARIABLE_NAME]

Once again, variables can be retrieved either at application level using ALL or by specifying the cube; there is also the option to specify the variable name to return the value for.

To return the value of the "RollPer1" variable which was just updated, you can simply use:

epmautomate getsubstvar ALL name=RollPer1



To return all the variables at application level the following syntax can be used.


If you wanted to return variables that have been assigned at cube level, then you would just change ALL to the cube name.

Unlike with the set command, it doesn’t look like you can specify multiple variable names, as it returns either a single named variable or all of them.


For the majority of situations, the EPM Automate utility is perfectly acceptable for managing variables, but if you want more flexibility over them, or to build them into an existing process, then this is where the REST API can help.

The REST resources are available through the following URLs:

For application level:

https://epm_cloud_instance/HyperionPlanning/rest/{api_version}/applications/{app_name}/substitutionvariables

For cube level

https://epm_cloud_instance/HyperionPlanning/rest/{api_version}/applications/{app_name}/plantypes/{cube_name}/substitutionvariables 

You can see the URL contains “plantypes”, which is the way Essbase databases have been known in the planning world for a long time. There is now a transition to these being called cubes instead of plan types; in reality they are the same thing, but there will be a crossover period where there will be references to both naming conventions.

If you are retrieving variables, then you would use the GET method and if you are creating/updating then you would use the POST method.

When creating/updating variables the body of the POST requires the sub var details provided in JSON format.

The format for the JSON should be:

{  
   "items":[  
      {  
         "planType":"ALL|CUBE_NAME",
         "name":"SUB_VAR_NAME",
         "value":"SUB_VAR_VALUE"
      }
   ]
}

Basically there is an array called “items” which contains objects holding the name/value pairs for the sub var definition.

Using the REST resource, the value of the sub var “RollPer1” can be updated to “Apr” which I will demonstrate with a REST client.


If after posting a success response is returned, then this indicates there were no problems with the create/update of the variable.


To verify this, REST can be used again with a GET method to retrieve the variable details in JSON format; to return a single variable, the name of the variable can be added to the end of the URL.


To return all the variables at application level, you just wouldn’t include the variable name:


The same principle goes for cube level, and the only difference is a slight change to the URL.


To create/update multiple variables, the JSON just requires additional objects and sub var definitions added to the “items” array.


To verify this a GET request can be made including the cube name in the URL.


Going back to where I started with this post and the rolling forecast functionality in the UI with the ability to shift periods, I thought I would put together a script to replicate this functionality and provide a fully automated solution.

This is just an example using PowerShell, and no doubt it could be simplified further or written using different methods or scripting languages. The beauty is there are so many ways to achieve the same end result, and it places the power in your own hands.

I have reset the sub vars back to the way they were, so a start period of Jan and a start year of FY17; the example is based on a financial year starting in January.

The objective is to read in the current sub vars based on the prefix and then shift the periods/year depending on the value defined in a variable.

I am going to break the script into chunks and give an overview of what is happening. I am not including the authorisation header information as I have covered that in previous posts, so if you want to understand how it is done just revert back to them.


The above section defines the prefix of the sub vars which is “Roll” and they are set at application level, the number of periods to shift will be forward by one.

The sub vars REST resource is invoked using a GET method and the returned items are then stored.

The current values of all the rolling forecast variables are then displayed.
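As a rough PowerShell sketch of this first section (the instance, API version, application name and credentials are placeholders, and the authorisation header is built in the same way as in previous posts):

# Sketch: retrieve the application level sub vars and keep the rolling forecast ones
$planURL = "https://epm_cloud_instance/HyperionPlanning/rest/v3/applications/Demo/substitutionvariables"
$subVarPrefix = "Roll"
$monthsToShift = 1

$user = "identitydomain.username"
$password = "password"
$encoded = [System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes("${user}:${password}"))
$headers = @{"Authorization" = "Basic $encoded"}

# GET the sub vars, store the returned items and display the current rolling forecast values
$response = Invoke-RestMethod -Uri $planURL -Method Get -Headers $headers
$rollVars = $response.items | Where-Object { $_.name -like "$subVarPrefix*" }
$rollVars | Format-Table name, value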


Next the number of months, start period and year are stored in variables.


I have displayed the variables to highlight they have been retrieved and stored correctly.


The next section loops through all the months in the forecast so in this example that will be twelve times.

During the first loop the start period and year are converted into date format and stored.

The date is then shifted by x number of months so in my case that will be one.

The date is then converted back into the correct format for the sub vars, e.g. period=Feb and year=FY17.

The converted dates are then stored and the loop continues until completed.
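Continuing the sketch, the loop might look something like the following; the starting values are examples, and the RollPer/RollYr variable names are only illustrative, so use whatever names were generated in your application.

# Sketch of the shifting loop: example starting values, shift each month by $monthsToShift
$numPeriods = 12
$startPeriod = "Jan"
$startYear = "FY17"
$monthsToShift = 1

# convert the start period and year into a date so it can be shifted
$culture = [System.Globalization.CultureInfo]::InvariantCulture
$startDate = [datetime]::ParseExact("$startPeriod-20$($startYear.Substring(2))", "MMM-yyyy", $culture)

# build the shifted period and year values; the variable names here are examples only
$newVars = @()
for ($i = 0; $i -lt $numPeriods; $i++) {
   $shifted = $startDate.AddMonths($i + $monthsToShift)
   $newVars += [PSCustomObject]@{ planType = "ALL"; name = "RollPer$($i+1)"; value = $shifted.ToString("MMM", $culture) }
   $newVars += [PSCustomObject]@{ planType = "ALL"; name = "RollYr$($i+1)"; value = "FY" + $shifted.ToString("yy") }
}
$newVars | Format-Table name, value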


At this point if we take a look at what has been stored we can see the variable values have now been shifted forward by one.


The final section of the script creates an object with all the updated sub var information and converts that into JSON format.

A post is then made to the REST resource with the body containing the formatted JSON.
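Carrying on from the earlier sketch, the final section just wraps the shifted variables up and posts them back:

# Sketch: wrap the shifted variables in an "items" array, convert to JSON and POST back
$body = @{ items = $newVars } | ConvertTo-Json -Depth 3
Write-Host $body

Invoke-RestMethod -Uri $planURL -Method Post -Headers $headers -Body $body -ContentType "application/json"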


I have outputted the body of the post to show that the JSON is in the correct format.


Running another GET request to the REST resource verifies all the sub vars have been shifted forward by one month.


Opening the rolling forecast form also confirms that all the variables have been correctly set.


Now I have an automated solution to manage the rolling forecast without the hassle.

So there we go, hopefully this post has provided a valuable insight into some of the options available to simplify the management of substitution variables.