Sunday 22 July 2018

Essbase REST API - Part 3

Moving on to part 3 of the series looking at the Essbase REST API, which is currently only available in OAC. As I mentioned previously, most of the Essbase web UI is built on top of REST, so there is a hell of a lot you can do with it. This also means it would be an endless task trying to cover everything, so I am going to pick some of the common management type tasks where adding automation can assist further.

Historically, carrying out everyday tasks would be done with MaxL; this is definitely still the case and there is nothing wrong with that. The problem I have always found with MaxL is that it can be restrictive, the output is not great, and it doesn’t interact so well with other scripting languages. This is where the REST API can come to the rescue, and the good news is that it can be integrated with most scripting languages.

I am going to assume you have read at least the first part of this series, so you understand what the Essbase REST API is all about. I will stick with the same style of examples using a mixture of a REST client and PowerShell; the beauty is you can pick whichever scripting language you feel most comfortable with, as the example scripts I show are just there to give you an idea of what can be achieved with simplicity.

Let’s start off with a common task of managing substitution variables.

In the OAC Essbase UI sub vars can be managed through the variables tab.


Returning the variables using the REST API is extremely easy.

Remember the URL structure to access the REST APIs will be:

https://<oac_essbase_instance>/rest/v1/{path}

To return all the sub vars at application level, the format is:

https://<oac_essbase_instance>/rest/v1/applications/<appname>/variables

and database level:

https://<oac_essbase_instance>/rest/v1/applications/<appname>/databases/<dbname>/variables

For example, using a REST client, a GET request with the following URL will return all the sub vars for the Sample application; I am also suppressing the links for display purposes.


This will return the following variables in JSON format.


If you want to focus on a single variable then the name of the variable is included in the URL.



This time I have included the links which give you an indication of how to edit and delete sub vars.


Using a PUT method request, it is possible to update a variable; the body of the request requires the variable name and new value in JSON format. In the following example I am going to update the “currMonth” variable from “Jul” to “Aug”.
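So the body is simply the name and value pair, something along these lines (the attribute names mirror what the earlier GET request returns for a variable):

{
  "name": "currMonth",
  "value": "Aug"
}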


The response shows the variable has been updated.


A quick check in the Essbase UI proves it has been updated.


A DELETE request to delete a variable at application level would be

https://<oac_essbase_instance>/rest/v1/applications/<appname>/variables/<subvarname>

or database level

https://<oac_essbase_instance>/rest/v1/applications/<appname>/databases/<dbname>/variables/<subvarname>


A 204 status code will be returned if the deletion was successful.

Back in the Essbase UI, the variable has been removed.


To create a new variable, the URL format is similar; a POST request should be made and the body of the request should include the variable name and value in JSON format.


The response returns the newly created variable details.


In the Essbase UI the new variable is available.


In terms of scripting, it is simple to manage sub vars. I am starting out with the same base script that I have used in the previous parts of this series.

The user and password are base64 encoded so they can be added to the header of the request.

The REST base URL and Essbase application are defined and are then combined to create the URL of the resource.

A request is made to return all the variables at the application level and the response is outputted to the console window.
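If it helps to visualise it, a minimal PowerShell sketch along those lines would look something like the following; the credentials, instance URL and application name are just placeholders, and I am assuming the variables come back under an items collection.

# encode the username and password for basic authentication and add to the request header
$user = "admin"
$password = "password"
$encoded = [System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes("${user}:${password}"))
$headers = @{ "Authorization" = "Basic $encoded" }

# define the REST base URL and Essbase application, then combine them into the resource URL
$restBaseURL = "https://<oac_essbase_instance>/rest/v1/"
$application = "Sample"
$variablesURL = $restBaseURL + "applications/" + $application + "/variables"

# request all variables at the application level and output them to the console
# (assumption: the variables are returned under an items collection)
$response = Invoke-RestMethod -Uri $variablesURL -Method Get -Headers $headers -ContentType "application/json"
$response.items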


To update a variable, it is a similar script; this time the sub var name and new value are defined, converted to JSON and added to the body of the request.

A PUT request is made and the response with the updated variable value is outputted to the console window.
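A rough sketch of that update script, reusing the authentication header and base URL variables from the previous example:

# define the sub var name and new value and convert them to JSON for the request body
$subVar = "currMonth"
$subVarValue = "Aug"
$body = @{ "name" = $subVar; "value" = $subVarValue } | ConvertTo-Json

# build the URL for the individual variable and send a PUT request to update it
$subVarURL = $restBaseURL + "applications/" + $application + "/variables/" + $subVar
$response = Invoke-RestMethod -Uri $subVarURL -Method Put -Headers $headers -Body $body -ContentType "application/json"

# output the response, which should contain the updated variable value
$response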


To create a new variable, the only difference from the previous script would be a POST request instead of a PUT.
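In other words, only the request method in the sketch above would change:

# create the variable with a POST instead of a PUT
$response = Invoke-RestMethod -Uri $subVarURL -Method Post -Headers $headers -Body $body -ContentType "application/json"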


Deleting is again very simple, with not much variation required.
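Something like the following would do it, reusing the same variables; I am using Invoke-WebRequest here purely so the status code can be checked.

# delete the variable and output the status code, 204 indicates the deletion was successful
$response = Invoke-WebRequest -Uri $subVarURL -Method Delete -Headers $headers -UseBasicParsing
$response.StatusCode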


So as you can see, it is not a difficult task to manage sub vars.

Moving on to another common maintenance task, which is managing filters.

To retrieve all the filters against an Essbase database, the following URL format is required:

https://<oac_essbase_instance>/rest/v1/applications/<appname>/databases/<dbname>/filters

An example using a GET request and suppressing links:


The response in JSON format will contain the names of all the filters assigned to the database.


To return a single filter the format is:

https://<oac_essbase_instance>/rest/v1/applications/<appname>/databases/<dbname>/filters/<filtername>

This time I am not suppressing the links; the response provides you with the URLs and methods to manage the filters.


If I look at the same filter in the Essbase UI, the member specification and access level are displayed.


To return the same information using the REST API, a GET request is required with the following URL format:

https://<oac_essbase_instance>/rest/v1/applications/<appname>/databases/<dbname>/filters/<filtername>/rows


The response as usual is in JSON format and is split by row definition like in the UI.


To update a filter definition a PUT request is required:


The body of the request should include all the required filter rows; any that are not included will be removed, so basically it acts in the same way as a replace.

In the following example I am only going to include one row, which means the other existing rows will be removed.
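Purely as an illustration of the shape of the body, it would be something along these lines; the attribute names here ("rows", "mbrSpec", "access") are an assumption on my part, so mirror whatever the rows GET request above actually returns for your filter.

# assumption: the body wraps the filter rows in a "rows" attribute and each row uses the
# same attribute names as the rows GET response (member specification and access level)
$body = @{
    "rows" = @(
        @{ "mbrSpec" = '"Jan","Feb","Mar"'; "access" = "Read" }
    )
} | ConvertTo-Json -Depth 5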


A check back in the UI confirms the filter now only contains one row; alternatively, I could have used the REST API to confirm this.


To delete a filter the URL format is the same except a DELETE request is required.


I have two filters created; say I wanted to delete the filter named “MTL-Filter”.


A DELETE request would be made including the filter name in the URL.


If the deletion was successful a 204 status will be returned.

In the UI you can see the filter is no longer available.


To create a new filter and define access rows, the format for the URL is the same as retrieving all the filters.

https://<oac_essbase_instance>/rest/v1/applications/<appname>/databases/<dbname>/filters

A POST method is required, and the body of the request should include (in JSON format) the name of the new filter and the access definition rows.


In the UI you can see the filter and the access definition have been created.


In reality you wouldn’t be using a REST client to manage filters, and you would probably want it automated using a script.

In the following example I have a text file, and the filename is the name of the filter I want to create.


I did delete the filter first, as it has the same name as the one that already exists.

The content of the file contains the access rows.


The script defines variables for the Essbase application, database name and filename.

The filter name is extracted from the filename.

The REST URL is built from the variables.

The contents of the CSV file are read and then converted to JSON to be used in the body of the REST request.

A POST request is made to create the filter and definition.
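As a guide, the script is along the lines of the following sketch, reusing the authentication header and base URL from the earlier sub var example; the file and application names are placeholders, and I am assuming the CSV column headers match the row attribute names the REST API expects and that the body wraps the filter name and rows in "name" and "rows" attributes, so check a GET of an existing filter’s rows to confirm.

# define the Essbase application, database and the file containing the access rows
$application = "Sample"
$database = "Basic"
$filterFile = "MTL-Filter.csv"

# the filter name is taken from the filename
$filterName = [System.IO.Path]::GetFileNameWithoutExtension($filterFile)

# build the REST URL for the filters collection from the variables
$filtersURL = $restBaseURL + "applications/" + $application + "/databases/" + $database + "/filters"

# read the access rows from the CSV file (assumption: the column headers match the API's row attribute names)
$rows = @(Import-Csv -Path $filterFile)

# convert the filter name and rows to JSON for the request body (assumption: "name" and "rows" attributes)
$body = @{ "name" = $filterName; "rows" = $rows } | ConvertTo-Json -Depth 5

# create the filter and its access definition with a POST request
Invoke-RestMethod -Uri $filtersURL -Method Post -Headers $headers -Body $body -ContentType "application/json"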


Back in the Essbase UI you can see the filter is there with the access rows from the file.


If you have ever worked with MaxL, then hopefully you agree it is much simpler to manage filters using the REST API.

So, what about assigning permissions to a filter? Well, once again you can use the REST API to achieve this.

Before assigning the filter permissions, let us take a step back and first provide access to the Essbase application.

Currently there is only one user with access to the application.


To update permissions for an application, the REST URL format is:

https://<oac_essbase_instance>/rest/v1/applications/<appname>/permissions/<user_group_name>

A PUT request is required, and the body of the request should include the user/group and role.
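As a rough guide, the body would be something like the following; the attribute names and the role value are assumptions on my part, so cross-check them against what the application permissions GET request returns.

# assumption: the body identifies the user/group with an "id" attribute and the access level with a "role" attribute
$body = @{
    "id"   = "essuser1"      # hypothetical user name
    "role" = "db_access"     # assumption: the role value follows the Essbase application/database roles
} | ConvertTo-Json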


If successful, a 204 status code will be returned.

In the UI, the user has been granted access to the Essbase application.


Alternatively, the REST API could return all the permission information for the specified application.


Back to the filter that was created earlier; now the user has been granted access to the application, they can be assigned permission to the filter.

In the UI this would usually be done in the permissions tab for the filter.


To achieve this with the REST API, a POST request should be made to a URL with the format:

https://<oac_essbase_instance>/rest/v1/applications/<appname>/databases/<dbname>/filters/<filtername>/permissions

The body of the request should include the user id or group you want to assign permissions to.
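Again just as a sketch, and with the attribute name being an assumption on my part, the body only needs to identify who the filter is being assigned to.

# assumption: the body identifies the user or group with an "id" attribute
$body = @{ "id" = "essuser1" } | ConvertTo-Json   # hypothetical user name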


This can be simply achieved using scripting; I don’t feel I need to explain it, as the concept is pretty much the same as the previous example scripts.


In the UI, the user has been assigned permissions to the filter.


To delete permissions, a DELETE request should be made to a URL with the format:

https://<oac_essbase_instance>/rest/v1/applications/<appname>/databases/<dbname>/filters/<filtername>/permissions/<user_group_name>


Right, I am going to leave it there for this post. In the next part I will go through managing calculation scripts and running and monitoring jobs.

Saturday 7 July 2018

EPM Cloud - Data Integration comes to the simplified interface

A new piece of functionality was added in the EPM Cloud 18.07 update that I thought I should cover.

The cloud readiness monthly update document introduced the new feature in the following way:

"In addition to the standard Data Management interface, the 18.07 update provides a new, simplified interface to work with integrations in the Oracle’s Planning and Budgeting Cloud Service. The new simplified interface, called Data Integration is the front-end for all integration-related activities. Using this interface, the workflow to create mapping rules to translate and transform source data into required target formats, and to execute and manage the periodic data loading process is now streamlined and simplified."

So basically, Data Management uses the legacy standard interface, and the end game for Oracle is to move everything into the simplified interface; this is the first step in that process for Data Management.

What you will also notice is the name is changing to Data Integration, which means for now we have on-premise FDMEE, Data Management and Data Integration. Just to add to the confusion there is the unrelated Enterprise Data Management cloud service which is probably why Oracle want to change the name to Data Integration.

Before you get too excited about this release, it is early days and it is definitely nowhere near replacing Data Management.

Here is an excerpt from the documentation:

“Currently, Data Integrations is available as a preview version only for Planning and Oracle Enterprise Planning and Budgeting Cloud system administrators. The Data Integration component must be used in conjunction with Data Management to complete setup tasks such as registering source systems or target applications. Data Management is still fully supported and remains available as a menu selection on the Navigator menu.”

and:

“Data Integration does not replace the legacy Data Management, it is an additional feature that supports the same workflow with a subset of legacy features. Data Integration will continue to be enhanced until it has one hundred per cent parity with Data Management.”

What we can extract from the above statements is that it is a preview version and it is only available for standard and enterprise PBCS. It also does not contain all the functionality, so the missing pieces still need to be undertaken in Data Management.

The areas that still need to be setup in Data Management are:
  • Register Source System
  • Register Target Application
  • Period Mapping
  • Category Mapping

There are some important terminology changes with Data Integration compared to Data Management.


I am sure this is going to cause some added confusion while both Data Management and Data Integration exist, plus the fact there is on-premise FDMEE.

It is worth going through the list of functionality that is not supported or available in this release; for now it is quite a big list, but obviously this will change over time.
  • Only supported in standard and enterprise PBCS
  • Only available for Service Administrators
  • Location attributes like currency, check entities and groups cannot be defined.
  • Logic groups are not available.
  • Fixed length files are not supported
  • Import expressions must be entered manually.
  • In the workbench the following is unavailable:
    • Import/Validate/Export/Check workflow processes
    • Validation errors
    • Displays only dimensions in the target application and columns cannot be added
    • Drill to source
    • View mappings
    • Only the Source and Target view is available.
    • Import/Export to Excel
  • In map members (data load mappings):
    • Rule Name is replaced with Processing order
    • Mappings cannot be assigned to a specific integration (data load rule)
    • Exporting is not available
    • Mapping scripts are unavailable
    • Multi-dimensional mappings are available but cannot be defined.
  • Column headers for multi-period loads are unavailable
  • Scheduling is unavailable.

With all of the current limitations, maybe you are starting to understand that it is definitely a preview version.

I think it is about time we take the new functionality for a test drive; my objective is to create a simple file-based data load.

Before I start out, I think I should point out that the issues I hit might only be because it is extremely new; you may not have the same experience, and if you do, then I am sure any bugs will be ironed out over time.

You can access Data Integration through the navigator by selecting Data Exchange.


Alternatively, you can select the Application cluster and then Data Exchange.


This takes you to the Data Integration homepage where all the available integrations are displayed, you can also access Data Maps.


What I did experience is that the sort option did not work correctly, especially when sorting by last executed. Also, all the executions had a time of 12:00:00 AM.

It is nice though that all the integrations are available and can be executed or edited from this homepage.

Let us start out by creating a new integration by selecting the plus icon.


This opens an integration workflow where you are taken through the steps in defining an integration in a simplified way.


The general section allows you to define a name for the integration and either create a new location by typing in a name or select an existing location.


After providing a name for the integration and location, you can select the source icon; this then allows you to select from all the available sources.


As I am creating a file-based load I selected File which then opens a file browser window.


This allows you to select, create or delete a folder and upload a file. It operates in the same way as the file browser in Data Management, but I actually prefer this one in the simplified interface.

I opened the inbox and the browser displays the directory you are in.


Selecting inbox provides the option to create or delete a folder.


I selected to create a folder and provided a name.


Now the source file can be uploaded.


The file will then be displayed in the browser where you have the option to download or delete it.


After clicking OK, the source details are populated in the integration configuration.


Below the filename there is a file options button; selecting this will open up a file import window, which is like the first part of creating an import format in Data Management.


This provides the option to select a different file and to set the file type and delimiter. These were automatically selected correctly and I didn’t need to change anything, so they must be determined by Data Integration reading the file.

The options for type and delimiter match those in Data Management.

You also get to see a preview of the file, which is a nice feature, and you can select the header to use as the column names.

The next screen is the file column mapping. If selected in the previous screen, the header in the file will populate the column names; it is possible to override the naming of the columns by just entering new names.


Moving on to the target.


Selecting Target allows you to select the target for the integration which provides the same look and feel as when selecting a source.


I selected the planning application; this completes the general section of the workflow.


Select “Save and Continue”; this moves on to the map dimensions section, which is equivalent to the source and target mappings when creating an import format.


Just like with an import format you can select a source dimension and map it to a target.


If you select the cog icon you have the following options.


This pretty much mirrors the ‘add’ functionality in the standard Data Management interface.


In the simplified interface you have to manually type any required expression. In the standard interface you have the option to select an expression type from a dropdown.


Now that the map dimensions have been configured, save and continue can be selected.

For me, it didn’t continue in the workflow and after saving I had to select “Map Members”.


The map members section of the workflow is the equivalent of creating data load mappings.

There is the option to import mappings but not export.


So let’s add some simple mappings.


In the source you have the option to define the type of mapping; this differs from the standard interface, where there are tabs available for the different mapping types.


The concept of the mapping types and order of precedence is exactly the same as in Data Management; it wouldn’t make any sense if the logic had changed.

You will see “Multi Dimensional” is in the list but I don’t believe you can define the mappings in this release.


There is a column for processing order, which is the replacement for rule name in Data Management; it operates in the same way and defines the order of precedence within a mapping type based on an alphanumerical value.

Now this is the point where I started to hit issues; even though I saved the mappings for each dimension, when I returned to them they were blank.

Update: the following issues were identified as a bug and have been fixed in the 18.12 release.


When I got to the next part of the workflow to define the options I could not select a category or plan type.


The options section of the workflow is the equivalent to the options available in a data load rule in Data Management.

When I went back and edited the integration I received the following error message when I tried to save.


I went into Data Management and could see the import format had been created but there was no location.

I tried to create a location with the same name as the one in the simplified interface and was hit with another error.


I now have a location that does exist but I can’t see it in the standard interface and it doesn’t seem to save properly in the simplified interface.

I did have a couple of attempts at it and hit the same problem; maybe it was because I was trying to create a location from the simplified interface instead of using an existing one. Once I get the opportunity I will look at it in more detail and update this post if anything changes.

The integrations I created do appear in the Data Integration homepage.


Though unless I am missing something, I don’t seem to be able to delete them in the simplified interface and they don’t appear in Data Management so I am stuck with them.

Instead of dwelling on this problem as I might have just been unlucky, I decided to create an integration in Data Management and then edit it in Data Integration.

The integration is available from the Data Integration homepage.


Selecting Actions provides the following options for the integration:


I selected “Map Members” and this time the mappings were available.


You will notice that a multi-dimensional mapping is displayed with a value of #SCRIPT; in this release, even though it is an option, it is not possible to fully define a multi-dimensional mapping in the simplified interface.

In the options section of the workflow, the category and plan type were now available and I could update them to other available ones if needed.

The filename and directory are also displayed, which didn’t happen when I configured the integration through the simplified interface.


From the Data Integration homepage, the integration can be run.


This provides similar options to running a load rule in Data Management in the standard interface.


A confirmation window is displayed and provides the process ID.


As I mentioned earlier, the time of the last execution is not correct and for some reason is displayed as 12:00:00 AM.


It is possible to view the process details from the actions menu.


Process details in the simplified interface is basically a cut-down version of the one in Data Management; at least the execution times are correct and the process log can be downloaded.


The workbench is an extremely simplified version of the one in Data Management.


I only seemed to be able to change the period and selecting the icons did not do anything, so at the moment it looks like they are just there to indicate whether an import/validation/export/check has been run.

An annoyance for me was with filtering on an integration in the homepage, something you will do if you have lots of integrations: if I went to any section in Data Integration and returned to the homepage, the filter had been removed.

For instance, I filtered down on the integration I was interested in.


I then opened the workbench and closed it; the filter was gone even though I was still technically in Data Integration.


Another thing I noticed: even though I was active in Data Integration, I would still get expiration warnings.


As stated earlier it is a long way off parity with Data Management and there are lots of improvements and functionality additions that need to happen. At the moment I could only see using it to run integrations or quickly access process details without having to open Data Management.

I am sure this will change over time and no doubt I will be back with further updates.