Sunday, 1 November 2015

Essbase 12c for BI: A glimpse into the future of Essbase for EPM

OBIEE 12c was released recently and, like OBIEE 11g, it comes bundled with a suite of Essbase and EPM products. The notable highlight from an EPM perspective is that it is the first sight of Essbase 12c.

It is worth stressing that this is Essbase for BI and not EPM; they are currently different code lines, and by the time EPM 12c is released, which could be 2017, Essbase 12c may have gone through many more changes.

What this release for BI does provide, though, is a glimpse into a couple of big changes to the architecture behind Essbase; the two standout ones are:
  • The Essbase C agent is replaced by a Java agent.
  • The Essbase security file is no more and finally moves into the RDBMS.
If you have been around Essbase for a long time then you will agree these are pretty fundamental changes.

I am going to try and cover these in a little more detail, but as Essbase is not standalone and is bundled with OBIEE there are a few restrictions.

When you carry out a default install of OBIEE 12c the EPM/Essbase components are automatically installed.

At the point of configuration there is an option to choose whether to include the Essbase components.

The EPM components are all deployed into the same WebLogic managed server as OBI and these include:
  • Essbase Agent.
  • Cube Deployment Services.
  • Workspace (EPM application with limited functionality).
  • Calculation Manager (EPM application also known as Allocations Manager).
  • Hyperion Provider Services (APS).
This means the Essbase C Agent is replaced with the Essbase Java Agent which runs as a web application.

It doesn’t look like it is currently possible to deploy the Essbase web application to its own managed server, which is a bit of a pain. I am also not a fan of trying to deploy too much into one managed server; this is similar in the EPM world to deploying web applications to a single managed server, which I usually steer away from for a number of reasons.

There is an interesting statement in the documentation “Essbase Java Agent offers improved concurrency and networking capabilities over the classic C Agent”
Does this also mean the Java agent is going to offer more stability and be less likely to freeze? Until it is possible to deploy Essbase to its own managed server, away from the other BI components, I feel there are too many factors involved to be able to prove this.

Once OBI has been configured and you take a look at the deployments within the WebLogic admin console you will see the Essbase web application.

If you are wondering, CDS stands for Cube Deployment Services, which forms the Essbase Business Intelligence Wizard, a web-based tool for building Essbase cubes.

It gives the feel of a very simplistic version of Essbase Studio and provides functionality to deploy ASO cubes from BI models within an RPD.

OBIEE 11g included EAS and Studio, but these have now been dropped and replaced with CDS. If you are from the EPM world then I doubt you will be too impressed with this functionality, but I suppose if you are looking to build straight-up aggregation ASO cubes it simplifies the process.

I am not going to go into any more detail on CDS as maybe that is for another day or I am sure it will be covered by others.

Anyway back to Essbase, once the BI managed server has been started the Essbase Java Agent should be up and running.

One of the many reasons why the Essbase C agent has been replaced by the Java agent is that in 12c there is no longer OPMN and the WebLogic framework pretty much runs the show.

I have never really been a believer in Essbase and OPMN unless you are clustering on a *nix system; ok, you can get OPMN to restart Essbase if it crashes, but the number of drawbacks outweighs any benefit, and it would have been nice to be given the option of whether to deploy it or not.

It is all change in 12c and we will have to see if the WebLogic framework handles it any better.

No longer will you see an Essbase process running; you will only see the Java process which is running all the deployments in the WebLogic managed server.

If you start an Essbase application you will notice that not everything has moved to Java and C is still being used for the Essbase Server process; I suspect the long-term goal would be for the server to also move to Java.

The essbase.log is engraved into the memory of anybody that uses Essbase. In version 11 there was also the ODL version, which made it confusing as there were then two Essbase logs; in 12c the essbase.log has gone and Essbase logging has moved into the logs directory of the managed server.

jagent.log is the replacement.

The contents of the log are very different to what you have been used to with the old Essbase log.

To verify the agent is up and running you should be looking for:

Oracle Essbase Jagent started on 9799 at Fri Oct 31 00:45:47 GMT 2015

OBI 12c uses a range of ports starting at 9500 by default and the Essbase agent is assigned 9799, though saying that the Essbase server range still starts at 32768.
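If you don't fancy trawling the logs, a quick way to confirm the agent is actually listening is to test the port directly; a rough PowerShell example (4.0 or later), where the host name is just a placeholder:

# Check the Essbase Java Agent port (9799 by default in OBI 12c)
Test-NetConnection -ComputerName "obieeserver" -Port 9799

# The Essbase server applications still use the range starting at 32768
Test-NetConnection -ComputerName "obieeserver" -Port 32768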

What is a bit concerning is the log seems to be full of a repetitive error:

exception.during.capi.request[[java.lang.NoClassDefFoundError: Could not initialize class oracle.epm.jagent.logging.LoggerHelper

This type of error usually means that a class is not contained in the Java classpath, but the class in the error message is definitely in the path. It is a bit annoying as there could be important messages that are not being logged; for the moment I am going to put it down to early days.

You can also check the status of Essbase in the WebLogic admin console or using WLST.

As Essbase is running as a web application, a page should be returned for the agent over http(s).

Though technically speaking it is possible that the web application could be up and running but there is an issue with the agent which is preventing it from starting.

The application logs are a couple of directories below the jagent log.

As the Essbase server has not changed you still have the old style log and the ODL log, so much for progression :)

Some of the Essbase components are still installed under a products folder similar to the current EPM structure.

The Essbase ARBORPATH and location of the Essbase applications by default is /bidata/components/essbase, this is defined by an XML file which I will cover in more detail when I go through clustering.

As the Essbase Server (ESSSVR) has not changed there are no shocks around how the applications operate and store metadata/data.

The essbase configuration file (cfg) has not gone away but I suppose you were thinking that it was going to be in the above bin directory, don’t be silly you are used to that so it is time for a change :)

All the BI configuration files are situated under the fmwconfig directory and this is where the essbase.cfg sits; remember this is Essbase for BI and it doesn't mean it will apply to EPM, though I suspect it might.

The cfg file is relatively the same with the noticeable addition of the JAGENT_ID

PRIMODIAL_AGENT_ID just rolls off the tongue :)

There is a long list of cfg settings that are no longer relevant in 12c and these can be found in the documentation.

In Essbase for BI, security is not controlled by Shared Services and falls under Enterprise Manager using Oracle Platform Security Services (OPSS), which is why OPSS is set as the authentication module. I am not going to go into it as it is hideous and you can read all about it here.

So how about starting and stopping Essbase?

As Essbase is deployed in the BI WebLogic managed server, once the managed server is started Essbase should also be started; the managed server can also be controlled by the WebLogic node manager.

Alternatively, if you only wanted to start/stop Essbase, this could be done from the WebLogic console.

This could also be done with WLST using startApplication('ESSBASE') and stopApplication('ESSBASE').

As I mentioned earlier, EAS has been removed from OBIEE 12c, though technically you can still use EAS in an EPM environment to connect to Essbase 12c; no doubt doing this is not supported and you will encounter the following error if you try to implement any changes.

Error: 1051734 Operation not supported, authentication is managed by Fusion Middleware in this version of Essbase

Luckily there is still Maxl and even ESSCMD still lives on in 12c, both can be accessed from:

Connecting to Essbase can still be achieved in the same way as previously in Maxl.

The documentation recommends going through the discovery URL, which is similar to the method currently available in EPM via APS.

This method of connecting becomes more relevant when using Essbase clustering as it will determine the active agent node and then connect to it.

Not all the same functionality is available in 12c due to the changes and these are covered in the documentation.

It is definitely not possible to shut down Essbase using Maxl.

Moving on to the next big change in 12c, and one I am sure many will welcome: the security file finally being moved into the RDBMS.

The security file has always been troublesome and I have lost count of the number of times I have seen it become corrupted, with the only option being to resort to a backup.

From Essbase 11.1.2 some of the elements from the security file, like users and groups, moved into the Shared Services database, but unfortunately not all of them.

The documentation, I feel, is being a bit biased towards the Oracle database in its description:

“The Essbase RDBMS schema is the Oracle relational database that stores Essbase application and database metadata”

As the agent and server are now using different technologies then so are the methods of connecting to the RDBMS schema.

“The Agent connects to the Essbase RDBMS schema using EclipseLink, an open source mapping and persistence framework. The Essbase Server connects to the Essbase RDBMS schema using ODBC DataDirect drivers.”

The following set of tables form part of the Essbase RDBMS schema

Many of the table names are easy to relate to and it doesn't take long to understand how they are being populated; a few of the tables are still a bit of a mystery to me at the moment.

Let us take a couple of examples, starting with creating a new substitution variable.

You will not be surprised to learn that the information is then stored in the table ESSBASE_SUBSTITUTION_VARIABLE; depending on the scope of the variable you may also need to bring in the ESSBASE_APPLICATION and ESSBASE_DATABASE tables if you want to view meaningful information.

Now for another example by creating a new filter.

Once again it is not difficult to figure out the driving tables, which this time are ESSBASE_FILTER and ESSBASE_ROW.

A simple SQL query can return the filter information from the database.
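Something along these lines will do the job; I am keeping it in PowerShell to match the rest of my examples, and as the schema is not documented the column and join names in the query are my assumptions, so treat it purely as a sketch.

# Hypothetical query against the Essbase RDBMS schema; the table names come from the
# schema but the column and join names are guesses, so adjust to match what you see
$sql = @"
SELECT f.NAME AS FILTER_NAME, r.MEMBER_SPEC, r.ACCESS_TYPE
FROM ESSBASE_FILTER f
JOIN ESSBASE_ROW r ON r.FILTER_ID = f.FILTER_ID
"@

# Assumes an ODBC DSN pointing at the schema exists; DSN and credentials are placeholders
$conn = New-Object System.Data.Odbc.OdbcConnection("DSN=BIPLATFORM;Uid=schema_owner;Pwd=password")
$conn.Open()
$adapter = New-Object System.Data.Odbc.OdbcDataAdapter($sql, $conn)
$table = New-Object System.Data.DataTable
[void]$adapter.Fill($table)
$table | Format-Table -AutoSize
$conn.Close()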

Moving the contents into the RDBMS certainly opens up more possibilities around querying information than was previously available when it was held in the security file.

I am sure some of you are thinking that it should now be possible to directly populate the tables instead of using Maxl or an API; the problem there is that the data is cached in memory, and if changes are made to the tables they will not be seen as active until a restart or until the cache is somehow refreshed. It makes sense for it to be held in memory as in theory it should provide faster access and less activity against the database.

I was going to cover the changes to clustering Essbase now that OPMN no longer exists but I think I will leave it for today and will cover it in the next part.

Monday, 5 October 2015

Planning REST API part 2

In the last post I went through the core functionality available using the REST API in Planning; the post mainly focused on what is available in both on-premise planning and PBCS.

I thought I would go through some of the other REST resources which are currently only available in PBCS, these being around Lifecycle Management.

If you have not read the last blog then I recommend doing so before reading this as it may not make much sense otherwise.

It is fair to say that most of the REST API functionality is also contained within the EPM automate utility, but there may be situations where you don't want to install the utility or you want to include access to the REST resources in your current business processes; it may also be the case that you want to script a process yourself.

As explained in the last post the planning REST resources are all accessed through the following URL structure:


To access the Lifecycle management resources then the URL structure is


To find out the current API version, use a GET method against /interop/rest

I am once again using a REST client browser addin for demo purposes.

As the REST APIs require basic authentication I added my PBCS username and password to the header, the addin automatically encodes the username and password using base64.

The response was not what I was expecting as I definitely supplied the correct credentials.

The current REST API documentation and examples suggest that the username and password should be sent through the authorization header using basic authentication.

This threw me for a while until I watched the http traffic from the EPM automate utility and noticed that the format for the username should be:


Please note that is not my email address before you attempt to send mail to it :)

Since my post on the REST API Oracle have been in touch with me and have advised that the documentation will be updated in the future so you will not suffer the same pain :)

After updating the header to the correct format the results were much more pleasing.
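For anyone wanting to replicate it, here is a rough PowerShell version of the same request; the URL and credentials are obviously placeholders, the important part is the identitydomain.username format, and the property names in the response may differ slightly from what I show.

$pbcsURL = "https://planning-test-mydomain.pbcs.us2.oraclecloud.com"   # placeholder PBCS URL
$user    = "mydomain.user@test.com"                                    # identity domain then username
$pass    = "password"

# Encode the credentials and add them to the basic authentication header
$encoded = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($user + ":" + $pass))
$headers = @{ "Authorization" = "Basic " + $encoded }

# A GET against /interop/rest returns the available API versions
$response = Invoke-RestMethod -Uri "$pbcsURL/interop/rest" -Method Get -Headers $headers
$response.items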

From the JSON response you can see that the current API version is

To return information on the services that are available then the following resource can be accessed:


I am going to concentrate on the application snapshot resources as this is where the main LCM functionality resides and this includes:
  • Get Information about All Application Snapshots
  • Get Information about a Specific Application Snapshot
  • Upload/Download files or LCM snapshots to PBCS file repository.
  • List/Delete files in the PBCS file repository.
  • Execute an LCM import/export
I am going to start off with uploading a metadata file to the PBCS repository so that a job can then be run to import the metadata.

To upload a file there is a post resource available using the URL format:


{applicationSnapshotName} is the name of the file to be uploaded

The query parameters are:
chunkSize = Size of chunk being sent in bytes
isFirst = If this is the first chunk being sent set to true
isLast = If this is the last chunk being sent set to true.

The documentation does state the following:

The client calls the Upload REST API multiple times based on the size of file to be uploaded.
The client breaks the existing stream into a number of chunks, depending on the logic, so that each chunk size is not greater than 50 * 1024 * 1024 bytes.

I believe the size is based on the example code in the documentation, which breaks the upload into 52mb chunks, and is not the maximum size that can be sent in a chunk.

It all sounds more complex than it actually is, so let me take an example using PowerShell to upload a csv file; I am using PowerShell, like in my previous post, because it does not require much code, which is good for demo purposes, and most should have easy access to it for testing.

Using the Invoke-RestMethod cmdlet makes life simple: pass in the URL, the encoded basic authentication header (using domain.username for the username), and the file to be uploaded.

The content type needs to be set to “application/octet-stream” otherwise it will not work.

The response LCM status codes are:
0 = success
-1 = in progress
Any other code is a failure.
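As a rough guide, a single chunk upload (fine for anything under the chunk limit) looks something like the following, reusing the $pbcsURL and $headers from earlier; the resource path and query parameter format are based on my reading of the documentation so double check them, and the file name is just an example.

$apiVersion = "11.1.2.3.600"                      # placeholder, use the version returned by /interop/rest
$file       = "product_metadata.csv"              # file to upload to the PBCS repository
$size       = (Get-Item $file).Length

# Everything fits in one chunk so isFirst and isLast are both true
$uploadURL = "$pbcsURL/interop/rest/$apiVersion/applicationsnapshots/$file/contents" +
             "?chunkSize=$size&isFirst=true&isLast=true"

$response = Invoke-RestMethod -Uri $uploadURL -Method Post -Headers $headers `
            -ContentType "application/octet-stream" -InFile $file

"Upload status: " + $response.status              # 0 = success, -1 = in progress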

Checking the inbox/outbox explorer confirms the file has been successfully uploaded

The only difference when downloading a file is that it is a GET method and there are no query parameters required like there are for an upload; the format is:


You can also use the list files resource to return information about the files/folders in the PBCS repository.

This requires a GET method and the following resource URL:


There are two types: LCM, which is usually an LCM snapshot (inbox/outbox/data are FDMEE related), and EXTERNAL, which indicates the file is not LCM related.

The last modified time (which will need converting) and size is only relevant for EXTERNAL file types.
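A quick sketch of listing the repository and then pulling one of the files back down, again using the same URL, version and headers as before; the property names and the download path are based on what I have seen returned, so treat them as assumptions.

# List everything in the PBCS file repository
$listURL = "$pbcsURL/interop/rest/$apiVersion/applicationsnapshots"
$files   = Invoke-RestMethod -Uri $listURL -Method Get -Headers $headers
$files.items | Select-Object name, type, lastmodifiedtime, size | Format-Table -AutoSize

# Downloading is just a GET against the file name, streamed out to a local file
$name = "product_metadata.csv"
Invoke-RestMethod -Uri "$pbcsURL/interop/rest/$apiVersion/applicationsnapshots/$name/contents" `
    -Method Get -Headers $headers -OutFile "C:\Temp\$name"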

Deleting files is simple as well as there is a resource available using the DELETE method and following format:


Right, back to my original task: the first step, uploading a metadata file, is done, and next is to execute a job to import the metadata.

I have a job already defined in the console which will import the csv file from the inbox into the product dimension.

I covered running jobs in the last post and it is exactly the same concept to run an import metadata job.

Once the job ID has been stored you can check the status and also additional details about the job.

To clear any doubts the job console can be checked within the planning application.

Once the metadata has been loaded successfully you could execute another job to refresh planning by just changing the job type and job name.
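To give an idea in script form, running the job and polling it until it finishes can be done along these lines; I have assumed the jobs resource sits under applications/{application}/jobs as covered in the last post, and the application name, job type string and job name are all placeholders, so use whatever is defined in your console.

$planVersion = "v3"                                 # placeholder, the planning REST API version
$app         = "PLANAPP"                            # placeholder application name
$jobsURL     = "$pbcsURL/HyperionPlanning/rest/$planVersion/applications/$app/jobs"
$body        = @{ jobType = "IMPORT_METADATA"; jobName = "Load Product" } | ConvertTo-Json

# POST to the jobs resource starts the job and returns a job ID
$job = Invoke-RestMethod -Uri $jobsURL -Method Post -Headers $headers `
       -ContentType "application/json" -Body $body

# Keep checking until the status moves on from -1 (processing)
do {
    Start-Sleep -Seconds 5
    $status = Invoke-RestMethod -Uri "$jobsURL/$($job.jobId)" -Method Get -Headers $headers
} while ($status.status -eq -1)

"Job completed with status " + $status.status + " - " + $status.descriptiveStatus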

Right, let us move on to the LCM export/import functionality, which will repeat an export/import of an already defined snapshot.

There are two post method resources available which use the following format:

LCM Export

LCM Import

So basically all you need to do is change the query parameter type depending on whether you are performing an export or an import.

To show how to use this functionality I am going to perform a repeat export of an LCM snapshot called LCM_PLANAPP

In all of my examples up to now I have put the full URL together depending on what resource I am executing, which is fine, but there is another way, which is to generate it by returning information about the snapshot.

I did use the resource earlier but did not show everything that can be returned; just to recap, the resource to use is:

GET /interop/rest/{api_version}/applicationsnapshots/{applicationSnapshotName}

Depending on the process you want to carry out on the snapshot, you can return the full URL to use and the method.

I can put this into practice when running the LCM export.

The first section of the script returns all the information about the application snapshot.

The URL to run an export is then stored and used in the next REST call.

The LCM export status is displayed which is -1 so it is still processing.

The script waits for 10 seconds and then checks the status again which this time returns 0 which means it has completed.

To run an import, all that would be required is to change the type query parameter from "export" to "import".
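Stripping out the detail, the export flow boils down to something like this; the rel values and property paths I am using to pick the URLs out of the responses are my assumptions based on the JSON returned, so adjust them to match what you actually see.

$snapshot = "LCM_PLANAPP"

# Return all the information about the application snapshot, including the URLs for each operation
$info = Invoke-RestMethod -Uri "$pbcsURL/interop/rest/$apiVersion/applicationsnapshots/$snapshot" `
        -Method Get -Headers $headers

# Pick out the link for the export operation and call it with the method it specifies (rel value is an assumption)
$export = $info.items.links | Where-Object { $_.rel -eq "export" }
$job    = Invoke-RestMethod -Uri $export.href -Method $export.action -Headers $headers
"LCM export status: " + $job.status                 # -1 means it is still processing

# Wait and then check again using the job status link in the response (again, rel value is an assumption)
Start-Sleep -Seconds 10
$statusURL = ($job.links | Where-Object { $_.rel -eq "Job Status" }).href
$job = Invoke-RestMethod -Uri $statusURL -Method Get -Headers $headers
"LCM export status: " + $job.status                 # 0 means it has completed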

That pretty much covers most of the LCM REST functionality so you should now be able to upload/download/list/delete files, run LCM export/imports and combine with Planning jobs.

There are some REST resources available for FDMEE but I am going to cover them in a future post where I will look at possible options when using web services and FDMEE.

Monday, 28 September 2015

Planning REST API

If you have had any involvement with Oracle PBCS then you will know there is a utility called EPM automate that allows you to remotely connect to a PBCS instance and run a whole host of tasks such as running rules, refreshing the application, and importing and exporting metadata.

The utility is built on top of a set of REST (Representational State Transfer) APIs using Java and is run from the command line; the REST APIs can also be accessed outside of the utility in a variety of ways, such as from a web browser or pretty much any programming language.

REST is not a new technology and there is lots of information out there on the internet if you want to read up and gain some knowledge.

Here is a brief description from the documentation:

REST describes any simple interface that transmits data over a standardized interface (such as HTTP) without an additional messaging layer, such as SOAP. 

REST provides a set of design rules for creating stateless services that are viewed as resources, or sources of specific information, and can be identified by their unique URIs. 

RESTful web services are services that are built according to REST principles and, as such, are designed to work well on the Web. Typically, RESTful web services are built on the HTTP protocol and implement operations that map to the common HTTP methods, such as GET, POST, PUT, and DELETE to retrieve, create, update, and delete resources, respectively.

As the majority of planning customers will either be on-premise or using an alternative cloud provider, I was interested to see if the REST APIs have actually made it to the on-premise release and if so what is available.

Before I start I will give the usual disclaimer and point out that the REST API is currently only documented for PBCS, so I am not sure whether what you are about to read is supported yet, which means this is all for educational purposes :)

The URL structure to access planning REST resources is:


To test whether the API is there, there is a REST resource which uses a GET method and will return information about which REST API versions are available and supported.

The resource is accessed through /HyperionPlanning/rest and should return a response in JSON format, it may be worth having a read up on JSON if it means nothing to you.

So let’s just try and access the resource in a standard browser.

Nice, a response has been returned so the API is available in some form in the on-premise release.

The JSON response may not be so easy to read but this is not a problem as there are a variety of REST client addons available for browsers.

For the examples I am going to use the RESTClient addon for Firefox which is extremely simple to use.

Enter the URL for the resource and the JSON response will be generated in a readable format.

The response parameters that are returned are defined as:

Deciphering the response indicates for the latest active version the URL to use for the resources will be:


If you are using PBCS it is currently


Let us look at another available resource to return a list of available applications.

This time the response is an error message and this is because I am trying to access a resource that requires authentication.

The majority of resources require HTTP basic authentication credentials to be supplied in the header which is no problem using the RESTClient

This will add the authentication credentials to the header for all requests.

After adding the authentication header the response now returns the applications.

I think the response is clear enough to understand which applications are available; the dpEnabled parameter indicates whether the application supports decision packages, which are part of the different planning modules such as public sector.

If you are using PBCS then the authentication is a little different which the documentation does not specify, maybe I will post an update soon.

Anyway, I thought I would show how simple it is to write a bit of code to work with the REST APIs. For my examples I have chosen PowerShell just because nowadays it is pretty much available on every client Windows machine and there is an IDE available, so it is easy to test; you can work with the REST APIs in most programming languages so pick the one that you are most comfortable with.

The first part of the script handles creating an encoded password for the HTTP header; once you have an encoded password it could be read from a file so there is no need to keep generating it.

Starting in PowerShell V3 there is an included cmdlet called Invoke-RestMethod which sends http(s) requests to RESTful services and is extremely simple to use.

So with a small amount of code the list of available planning applications can be displayed or stored.
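For anyone that wants to try it, the script boils down to something like this; the server URL and credentials are placeholders, the version segment needs to be swapped for whatever the first call returns, and the property names in the responses may differ slightly between releases.

$planURL = "http://planserver:8300/HyperionPlanning/rest"    # placeholder planning server
$user    = "admin"
$pass    = "password"

# Encode the credentials once and add them to the basic authentication header
$encoded = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($user + ":" + $pass))
$headers = @{ "Authorization" = "Basic " + $encoded }

# Return the available REST API versions, then query the applications resource for the latest one
$versions = Invoke-RestMethod -Uri $planURL -Method Get -Headers $headers
$versions.items

$apiVersion = "V3"                                           # placeholder, use the latest active version returned above
$apps = Invoke-RestMethod -Uri "$planURL/$apiVersion/applications" -Method Get -Headers $headers
$apps.items | Select-Object name, dpEnabled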

If you are going to use the Planning REST API then the main activity will be around executing jobs.

The supported job types:
  • RULES 
There is a resource available to retrieve all the jobs that are available in the application using the following format:


This will return each job type and job name that is available in the requested application

If you only want to return certain job types then you can use a query parameter in JSON format:


Running a job once again has its own resource which this time uses the POST method:


In the body of the request you post the Job Type and the Job Name

This is where planning and PBCS seem to differ, as in the PBCS version the body of the request accepts JSON but this doesn't seem to work in the on-premise release.

I will go through examples of using this resource starting with refreshing the application.

In PBCS you can use the format of:


In order to be able to execute jobs in the on-premise release I used the following:


Once submitted this generated the following response:

The response parameters that are returned are defined as:

As you can see, the status and descriptive status indicate the job is still running; to check the status of a job there is another GET resource available using the following format:


This time the response confirms the refresh job was successfully completed.

If you look at the job console in planning you will also see the job information.

To achieve this using PowerShell is once again pretty simple.

The only difference from the earlier PowerShell example is that this time it includes a body with the request.

As the Job ID has been stored it is easy to check the progress of the job.
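As a rough outline, and carrying on from the earlier script, the refresh looks something like this; I have assumed the jobs resource sits under applications/{application}/jobs, I am showing the JSON style body which is what PBCS accepts (so for the on-premise release swap in whichever body format works for your release), and the application name, job type string and job name are just examples.

$app     = "PLANAPP"                                          # placeholder application name
$jobsURL = "$planURL/$apiVersion/applications/$app/jobs"

# POST the job type and job name to the jobs resource and keep hold of the job ID
$body = @{ jobType = "CUBE_REFRESH"; jobName = "Refresh Database" } | ConvertTo-Json
$job  = Invoke-RestMethod -Uri $jobsURL -Method Post -Headers $headers `
        -ContentType "application/json" -Body $body
"Job ID: " + $job.jobId + "  Status: " + $job.descriptiveStatus

# Check the progress of the job using the job ID that came back
$status = Invoke-RestMethod -Uri "$jobsURL/$($job.jobId)" -Method Get -Headers $headers
$status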

To run a rule then the request is pretty much the same with only the job type and job name changing.

This is fine if the rule does not contain any runtime prompts; I couldn't get it to work in the on-premise release when it does, and I spent far too long trying to get it to work.

I believe the format should be:

No matter what I tried I kept getting the following response:

The logs contained:

[APP: PLANNING#] [SRC_CLASS:] [SRC_METHOD: executeJob] Failed to run job: RestRule[[
at com.hyperion.planning.calcmgr.cmdlnlauncher.HspCalcMgrCmdLineLauncher.launchRule(

I am not sure whether it actually works in the on-premise release, but if I find out that it does then I will update this post.

It certainly works in PBCS but, like I said earlier, PBCS accepts JSON in the following format:

    "JobType": "JobType",
    "jobName": "JobName",
    "parameters": {
        "VariableName": "VariableValue"

I did find some interesting hidden resources that are not documented.

Please note the following is not available in PBCS and could be removed from on-premise in the future.

The first one returns the POV, Columns, Rows and data in a planning form.

The resource uses the GET method and can be accessed through:


Let us take this form as an example.

Submit the request.

This generates a response of:

Pretty cool, the form and data have been returned in JSON; again this can be easily done in a scripting language.

There is another resource which is even more fascinating that allows MDX to be run against planning and produces a similar response to the last example.

The resource uses the POST method and can be accessed through:


In the body of the request the MDX can be passed.

The above MDX query should bring back data for a Smart List member and a text member.

Interestingly, the text member data is actually returned as the text string and not just a value; the Smart List data only returns the numeric value though.

To show how easy it is to retrieve the data using a bit of scripting:

It certainly opens up lots of opportunities when you have easy access to the data.

There is also an additional query parameter that can be added which will also return attributes of the data cells in the response.

I think I am going to leave it there for today as putting this blog post together has certainly been a mammoth task and has taken up a lot of my time.

I am sure there will be future posts on this subject but in the meantime if you would like to know anything more just get in touch.