Sunday, 24 March 2019

Hybrid FDMEE and upcoming Oracle Cloud changes

If you are not already aware, there are security protocol changes being made in the 19.05 Cloud release that will affect hybrid FDMEE implementations.

An excerpt from the Cloud Readiness document:

“Starting with the May 2019 update (Release 19.05) to EPM Services, Oracle will support only Transport Layer Security protocol version 1.2 (TLS 1.2) to communicate with EPM Cloud. To ensure the highest level of authentication and data encryption security, TLS 1.0 and TLS 1.1 will no longer be supported after your environments are updated on May 3, 2019.”

I have already read an article on this topic, as well as Oracle support documents outlining how to be compliant with the TLS changes, but in my opinion the guidance is overcomplicated and confusing.

In this post I am going to provide my take on what is required to be ready for when the security changes are implemented. The objective is not to overcomplicate matters, nor to advise applying every critical patch update across your whole EPM estate. The idea is that you already have a functioning hybrid FDMEE implementation and need to know what requires updating, with minimal impact to your environment.

Please note this is just my opinion, so feel free to disagree and challenge my view.

Basically, FDMEE integrations with EPM Cloud are achieved using the REST API, and communication between FDMEE and the Cloud comes only from the FDMEE Java web application. This gives a clue to where to focus first, and that is Java. If there have been no updates to the version of Java deployed with FDMEE 11.1.2.4 then it will be running Java 6 update 35. This version of Java is so old it does not support TLS 1.2, so when the upcoming changes are made, FDMEE integrations with Oracle Cloud will fail.

The Oracle Java roadmap highlights just how far behind 11.1.2.4 is. To put it into perspective, Java 12 has recently been released, yet 11.1.2.4 is running on Java 6, whose extended support ended in December 2018, so there will be no further patch updates.

If you want to read about this in more detail, then take a look at the Oracle support document – “Oracle Java SE 6 and JRockit End of Support (Doc ID 2244851.1)”.


So, what are the options when it comes to updating the version of Java to support TLS 1.2? Well, 11.1.2.4 only supports Java 6 and 7 so you can forget about anything higher.

If you have already updated your EPM environment to Java 6 update 121+ (don’t hold me to that exact version though) or to Java 7 then you should be ready for TLS 1.2, though to be sure you should really test to confirm.

Before making any changes, I double checked the supported protocols against the EPM Cloud instance with the help of an SSL labs test.


Only TLS 1.2 is enabled which is what I wanted to see. If you have time and would like to test out on an Oracle Cloud instance, then raise an SR and they will provide temporary access.
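If you do not have access to SSL Labs, a quick way to sanity check the handshake from a server is a couple of lines of PowerShell. This is only a rough sketch and the instance URL is a placeholder for your own:

# Force the .NET session to only use TLS 1.2, then attempt a request.
# The URL below is a placeholder - replace it with your EPM Cloud instance.
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
Invoke-WebRequest -Uri "https://epm-cloud-instance.oraclecloud.com/HyperionPlanning" -UseBasicParsing

Even an authentication error back from the request means the TLS handshake itself succeeded. Repeating the test in a fresh session with [Net.SecurityProtocolType]::Tls (TLS 1.0) should fail the handshake if the instance is enforcing TLS 1.2.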

In FDMEE I added a new target Cloud application.


Instead of the application selection window being displayed, an SSL error was generated.


With TLS debugging enabled, the logs confirmed that the issue was related to communication being initiated with the TLS 1.0 protocol.

[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)', READ: TLSv1 Alert, length = 2
[ACTIVE] ExecuteThread: '6' for queue: 'weblogic.kernel.Default (self-tuning)', RECV TLSv1 ALERT:  fatal, handshake_failure

I will go through what I believe is a simple method for updating Java to a version that should support TLS 1.2. It is not the one Oracle sometimes advises, which involves finding and replacing Java versions in files.

I am going to start out with Java 7 as that makes more sense to upgrade to. My example will be on Windows, but the concept should be similar for other operating systems.

If you look in the middleware home directory, you will likely see the following Java versions.


It is only the JDK which will require updating to Java 7; JRockit does not exist after Java 6, so we will configure the environment to use only the JDK.

If you are looking to download Java versions from Oracle support, then I recommend the document – “All Java SE Downloads on MOS (Doc ID 1439822.1)”.

I am going to be updating Java to the following version.


The patch is simple to install as it is just an executable.


I set the install path to be the middleware home directory, so it is in the same location as the existing Java versions.


There are now three Java versions.


The Java 6 directories were renamed.


Now the next step is to use “mklink” to create symbolic links to the new Java location. The old Java 6 directory names will be used as the link names. This means anything referring to jrockit_160_37 and jdk160_35 will be directed to the new Java installation, so there is no need to mess about trying to find and replace Java versions in files. Also, if you update the version of Java in the future, it is as simple as recreating the symbolic links.

The syntax for “mklink” is:

mklink /d <link> <target>

/d - Creates a directory symbolic link.
<Link> -Specifies the name of the symbolic link that is being created.
<Target> - Specifies the path that the new symbolic link refers to.

It is as simple as running the following for the JDK.

mklink /d E:\Oracle\Middleware\jdk160_35 E:\Oracle\Middleware\jdk170_211


Then for JRockit:

mklink /d E:\Oracle\Middleware\jrockit_160_37 E:\Oracle\Middleware\jdk170_211


In the middleware home directory there are now two symbolic links created.


The properties show that the target is the Java 7 directory.


For example, if I go to the Java 6 JDK directory it might look like I am in that directory, but it is pointing to the Java 7 directory.


This makes life much easier as there is no need to go searching through files to repoint to the new Java installation.

Please note if your environment is across multiple servers you will need to update Java in each of them.

There is one more change that needs to be made before I can test. Each of the Java web applications accesses a Java DLL in a JRockit directory, which will need repointing to the equivalent JDK directory.

To make this change, open the Windows registry editor and locate the key for each of the web applications, for example:


There will be a “JavaDll” string pointing to a JRockit directory.


This can be updated to the equivalent JDK directory. Notice that the path keeps the symbolic link location, so no modifications are required if Java is upgraded again.


This change will need to be made in the registry for each of the Java web applications.
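If you would rather script the change than use the registry editor, something like the following would do it. The key path is only illustrative as the key name differs for each web application, and the path to the Java DLL assumes the symbolic link setup described above:

# Illustrative key path only - locate the actual key for each web application first
$key = "HKLM:\SOFTWARE\Hyperion Solutions\ErpIntegrator0\HyS9aifWeb"
# Point JavaDll at the JDK jvm.dll, keeping the symbolic link location so a future Java upgrade needs no registry change
Set-ItemProperty -Path $key -Name "JavaDll" -Value "E:\Oracle\Middleware\jdk160_35\jre\bin\server\jvm.dll"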

All done and ready to test once the services have started.

In FDMEE a new Cloud application is added.


This time the result is much better, and no error is displayed.


With TLS debugging enabled, the log this time confirms the communication is using the TLS 1.2 protocol.

[ACTIVE] ExecuteThread: '4' for queue: 'weblogic.kernel.Default (self-tuning)', READ: TLSv1.2 Handshake, length = 87

To be sure everything is working as expected I tested out refreshing metadata.


This was successful.


Next, refreshing members.


No problems with this.


Now for a data load from on-premise FDMEE to EPM Cloud.


I am testing out with a numeric load first which will create a file that is in the format for a direct load to Essbase.


The file will be uploaded using the REST API, and then a job will be run, again through the REST API, to load the data file.



I tested out the same data load but this time using the all data types method. This will create a file in the Outline Load Utility (OLU) format and the utility will be called to load the file through the planning layer.


Once again, no problems.


Next, on to extracting data from EPM Cloud and loading to on-premise FDMEE.


All good. I did also test a custom Jython script which makes REST API calls to EPM Cloud and there were no issues.

So all I have done is update the version of Java and everything seems to be working well.

If you are adamant about staying on Java 6 then it is a similar process to update. The only difference is that JRockit can also be updated.

The final Java 6 patch versions are:

Patch 28414647: Oracle JDK 6 Update 211
Patch 28414796: JDK160_211 Oracle JRockit R28.3.20

If you follow the same process I have previously been through, then you should end up with something like:


Create the symbolic links using “mklink”


You should end up with the following:


There is no need to update the Windows registry as JRockit is still being used.

I carried out the same tests in FDMEE and did not come across any issues.


Oracle do suggest patching WebLogic but from my testing it didn’t look like it was required for TLS 1.2. I can only assume it is for enhanced security reasons.

If you look at the web application logs, you will notice how old the version of WebLogic deployed by default with 11.1.2.4 is.

<Version: WebLogic Server 10.3.6.0  Tue Nov 15 08:52:36 PST 2011 1441050 >

If you want to patch WebLogic, the process is pretty simple and there are lots of examples on the internet. Patches are applied using the BEA Smart Update (bsu) command line utility.

The patch I am going to apply is:

Patch 28710912: SU Patch 7HKN: WLS PATCH SET UPDATE 10.3.6.0.190115

This was downloaded and extracted.

Then the patch can be applied with the BSU utility through command line. I will leave it to you to find out what each of the parameters are :)
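For reference, a typical invocation is along the lines of the following. The paths are assumptions based on the middleware home used earlier in this post, and the patch ID comes from the patch listed above:

cd E:\Oracle\Middleware\utils\bsu
bsu.cmd -install -patch_download_dir=E:\Oracle\Middleware\utils\bsu\cache_dir -patchlist=7HKN -prod_dir=E:\Oracle\Middleware\wlserver_10.3 -verbose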


If you have Java web applications deployed across multiple servers then the patch should be applied to each of these.

Once you start up a web application the log should contain the patch information.

<Version: WebLogic Server 10.3.6.0.190115 PSU Patch for BUG28710912 Tue Nov 13 06:00:38 PST 2018
WebLogic Server 10.3.6.0  Tue Nov 15 08:52:36 PST 2011 1441050 >

So there you go, updating Java might be enough to get you ready for the changes. Let me know if you see different results.

Sunday, 3 February 2019

EPM Cloud - New REST APIs to manage and optimise application performance

The 19.02 EPM Cloud release brings new REST API resources to set the administration mode of an application and assist with possible optimisation of an application.

The following is now possible through the REST API:
  • Set administration mode for an application.
  • Perform a full restructure of a block storage cube to eliminate or reduce fragmentation.
  • Compact the outline file of an aggregate storage cube.
  • Merge incremental data slices of an aggregate storage cube to improve performance.
  • Optimise aggregation to improve the performance of aggregate storage cubes.
Currently these are only available with the REST API, but I would expect the functionality to be added to EPM Automate in the near future.

Up to now, the functionality has only been available either by applying the change manually or by creating a job which can be scheduled or run straight away. With the REST API you don’t have to be confined to these restrictions.

In this post I am going to go through each of the new REST resources, providing an example with a REST client and a simple script.

Let us start off with the one I feel will be the most welcomed - the ability to change the administration mode of an application. I have blogged about this a couple of times in the past: the first was when the only way to automate the process was either to run a database refresh and change the administration mode, or to change the mode through migration files; the second post looked at the schedule job feature which was introduced in 18.05.

It is possible to change an application between administrators only and all users through the UI, under application settings.


It is also possible to create a schedule job with the “Administration Mode” option.


This can be run immediately or scheduled and allows the application to be enabled for administrator or all users.


Moving on to the REST API, all the new functionality is available through the jobs resource which has the URL format of:

https://<epm_cloud_instance>/HyperionPlanning/rest/v3/applications/<application_name>/jobs

A POST method is required, and the body of the post must include the job details in JSON format.

The JSON should include:

“jobType”, which will always be “Administration Mode”.

“jobName” just provides a description of the job and can be named appropriately.

“loginLevel” is the parameter that defines which mode to place the application in; possible values are either “Administrators” or “All Users”.
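Putting those three together, the body of the request should look something like the following sketch. I am assuming the loginLevel value is passed in a parameters block, in line with the other job types:

{
  "jobType": "Administration Mode",
  "jobName": "Set application to administrators only",
  "parameters": {
    "loginLevel": "Administrators"
  }
}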

An example to put the application into administrators-only mode using a REST client is as follows:


The response from the REST request includes the status, job ID, details and a URL to access the job information.


The application mode change should be instant, so there will be no need to keep checking the status until it is complete. The job will be available in the simplified interface and will contain the value that was defined in the “jobName” parameter.


Under application settings the application is now only enabled for administrators.


If non-administrators try and log into the application, they will be greeted with:


A simple change to the “loginLevel” parameter will update the application mode back to all users; the REST request would be:


The response is similar except for the “details” parameter description.


Once again, the job activity will be available in the UI.


The application will be enabled for all users in the application settings.


The body of the REST request only allows two values for “loginLevel”, if a different value is specified then the following error will be returned in the response:


In reality, the process of updating the administration mode of an application would need to be automated, and this can be achieved using most scripting languages.

It can come down to personal preference or company policy as to which language you select. The following is an example using PowerShell which does not take much effort to put together.
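Along those lines, a minimal sketch might look like the one below. The instance URL, application name and credentials are placeholders, and I am assuming basic authentication and the same parameters block format as above:

# Placeholders - replace with your own instance, application and credentials
$user     = "identitydomain.username"
$password = "password"
$url      = "https://epm-cloud-instance.oraclecloud.com/HyperionPlanning/rest/v3/applications/Vision/jobs"

# EPM Cloud will only accept TLS 1.2, so force it for the session
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12

# Basic authentication header
$auth    = [System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($user + ":" + $password))
$headers = @{ "Authorization" = "Basic " + $auth }

# Job details - loginLevel controls the administration mode
$jobName    = "Set application to administrators only"
$loginLevel = "Administrators"
$body = @{
    "jobType"    = "Administration Mode"
    "jobName"    = $jobName
    "parameters" = @{ "loginLevel" = $loginLevel }
} | ConvertTo-Json

# Post the job and output the response, which includes the status and job ID
$response = Invoke-RestMethod -Uri $url -Method Post -Headers $headers -Body $body -ContentType "application/json"
$response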


As the new REST functionality is all based around the jobs resource the script can be easily reused.

To update the application mode to all users, the only changes would be to the “jobName” and “loginLevel” variables.
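In the sketch above that would simply be:

$jobName    = "Set application to all users"
$loginLevel = "All Users"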


After making the changes, running the script will enable the application for all users.


On to the next welcome new feature, and that is the ability to restructure a cube using the REST API.

It is possible to restructure a cube manually through database properties in Calculation Manager.


This can be automated by selecting the “Restructure Cube” option when creating a new job schedule.


The cube which requires restructuring can be selected in the schedule job options.


To achieve this through the REST API it is the same job resource URL and POST method.

This time the “jobType” will be “Restructure Cube”, the “jobName” can be named appropriately and “cubeName” is required to define which cube to restructure.
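Keeping with the same assumed format, the body would be something like this sketch; the cube name is just an example:

{
  "jobType": "Restructure Cube",
  "jobName": "Restructure BSO cube",
  "parameters": {
    "cubeName": "Plan1"
  }
}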


As a restructure is not instant, the response will no doubt indicate that the process is running.

The status values are: -1 = in progress; 0 = success; 1 = error; 2 = cancel pending; 3 = cancelled; 4 = invalid parameter.


The response includes a URL to the job status, which can then be accessed to check whether the process has been completed.


The simplified interface contains the job activity.


To automate the process the same script can be reused; it only requires a few modifications to the job information included in the body of the REST request.


In previous blogs I have gone into more detail around repeatedly checking the status of a job, but in simple terms the job status URL returned in the REST response can be stored, and the job status can then be checked with a GET request.
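As a rough sketch, assuming the job status URL from the response has been stored in $statusUrl and the $headers from the earlier script are still in place:

# Keep checking until the status is no longer -1 (in progress)
do {
    Start-Sleep -Seconds 10
    $job = Invoke-RestMethod -Uri $statusUrl -Method Get -Headers $headers
    Write-Output ("Job status: " + $job.status)
} while ($job.status -eq -1)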


To confirm a restructure had taken place, before running the script I created some empty blocks in the cube.


After running the script, the empty blocks had been removed which confirms a restructure had taken place.


Moving on to the three remaining optimisation features, all of which are ASO related.

The first of these is the ability to compact the outline of an ASO cube.  It is possible to compact the outline through database properties in Calculation Manager.


There is also a “Compact Outline” option in scheduled jobs in the simplified interface.


This provides a brief description of the functionality and allows you to define which cube’s outline to compact.


If you don’t want to use the scheduling functionality in the UI, it is now possible to use the REST API.

As with all the new functionality, the same jobs URL is used and a similar body in the REST request.

The “jobType” will be “Compact Cube”; I am not sure why it wasn’t named “Compact Outline”. The “cubeName” parameter defines which cube’s outline to compact.
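Again assuming the parameters block format, the body would be along these lines (the cube name is just an example):

{
  "jobType": "Compact Cube",
  "jobName": "Compact ASO outline",
  "parameters": {
    "cubeName": "ASOCube"
  }
}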


The response includes the same parameters as the previous examples.


As compacting the outline is not instant, the job status will need to be repeatedly checked until it completes.


As you would expect the activity is available in the UI.


Again, the same script can be reused with a few minor alterations for the body of the REST request.


Next, on to merging data slices in an ASO cube. Data slices are automatically created by actions like Smart View submits, data maps, smart pushes and data loads. Over time, the number of slices will increase, which can adversely affect performance. The slices can either be merged into a single incremental slice or into the main cube, keeping in mind that merging into the main cube takes longer and the time it takes relates to the size of the cube. When merging, zero values can be kept or removed.

Merging slices can be manually carried out through database properties in Calculation Manager.


To automate the process, there is a “Merge Data Slices” option when creating a new scheduled job in the simplified interface.


The job allows you to define which ASO database you wish to merge slices for, and whether to merge into the main database or a single incremental slice; zero values can be kept or removed.


If you don’t want to be constrained to scheduling a job, then this is where the REST API can help.

The REST API resource is the same jobs URL.

The “jobType” should be “Merge Data Slices”. The “cubeName” parameter defines which cube to merge the slices for.

There is a parameter called “mergeSliceType” which defines whether to merge into the main cube or incremental slice. The values are either “allIncrementalSlicesInMain” or “allIncrementalSlicesInOneIncremental”.

The parameter to define whether to keep zeros is “keepZeroCells” which can be “true” or “false”.
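Based on those parameters, a request body would look something like the following sketch, though as you will see shortly this job type was not recognised when I tried it:

{
  "jobType": "Merge Data Slices",
  "jobName": "Merge ASO data slices",
  "parameters": {
    "cubeName": "ASOCube",
    "mergeSliceType": "allIncrementalSlicesInMain",
    "keepZeroCells": "false"
  }
}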


I went for the option of merging all the slices into the main cube and removing zero values. Before running the REST request, I checked the number of incremental slices in Calculation Manager.


Now this is where I hit a problem. The job type of “Merge Data Slices” was not found, which means that the job type is either incorrect or the functionality has not yet been introduced.


Once I have more information I will update this post.

What should have happened is that all the incremental slices would be merged into the main cube.


On to the final REST resource, which is based around ASO aggregations to optimise performance.

The documentation states:

“Optimize Aggregation can significantly improve the performance of aggregate storage cubes. You must first enable query tracking to capture tracking statistics on the aggregate storage cube. Then, after you enable query tracking, you must allow sufficient time to collect user data-retrieval patterns before you execute the aggregation process based on query data. The execute aggregation process deletes existing aggregated views and generates optimized views based on the captured data.

This job has two actions: Enable query tracking and Execute aggregation process.”

This can be achieved manually through Calculation Manager, where database properties include options to start query tracking and execute an aggregation.


No surprise that there is an option available in scheduled jobs in the simplified interface.


The job allows you to select the ASO cube and whether to enable query tracking or execute an aggregation process.


If the option to execute an aggregation process is selected, there are additional parameters to base the aggregation on.


The option to base the aggregation on query data should only be selected if query tracking is enabled.

The same functionality is now available through the REST API.

This is possible through the jobs resource URL with a POST method.

The “jobType” is “Optimize Aggregation”.

The parameters include the cube name and “type”, which has possible values of “enableQueryTracking” or “executeAggregationProcess”.

If type has been defined as “executeAggregationProcess” the following parameters can be included:

“useQueryData” can be set to “true” or “false”; it can only be true if query tracking has been enabled.

“includeAlternateRollups” can be defined as “enable” or “disable”.

“growthSizeRatio” is optional and specifies the maximum size to which the aggregated database can grow, based on the ratio and the input-level data size.
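To enable query tracking, the body would be something along these lines, again a sketch with an example cube name and the assumed parameters block:

{
  "jobType": "Optimize Aggregation",
  "jobName": "Enable query tracking",
  "parameters": {
    "cubeName": "ASOCube",
    "type": "enableQueryTracking"
  }
}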


The above REST request enables query tracking and the response includes the job information.


The job status URL can then be accessed to keep checking whether the process has completed.


This information will also be available in jobs in the simplified interface.


After query tracking has been enabled and sufficient retrievals on the data have taken place, the cube can be aggregated based on the query tracking using the following REST request.


Once completed, the response from the job status URL will be something along the lines of:


In the following example, query tracking has not been enabled, an aggregation is to be executed and the maximum size will be 10% greater than the input level data.


The job status URL based on the job ID can then be requested until the job completes.


Automating this process would only require a slight modification to the example scripts I provided earlier in this blog.

I am going to leave it there for this post as I think that covers all the new REST API functionality.