In the first part, I went through an example of extracting forecast data from PBCS using Data Management, downloading the data file and then loading this to an OAC Essbase database. All the steps in the example were manual, so in this post I am going to add some automation using REST APIs and scripting.
I would recommend reading through the first post if you have not already, as I will be referring to it and this post will probably not make much sense unless you have read it.
As always, I am going to stress that this is not the only way to go about automating the process; it is only intended to give an idea of what can be achieved.
I will provide examples of the REST API using a free REST client, and the scripting will mainly be with PowerShell, though you can achieve the same results with pretty much any scripting language. There will also be a little bit of Groovy thrown into the mix for those running a user managed version of OAC rather than autonomous.
A summary of the process that will be automated is:
- Extract Forecast year substitution variable from OAC Essbase.
- Transform variable into the start/end period for an EPM Cloud Data Management data load rule.
- Run a Data Management data load rule to extract planning forecast data, map and then generate a file.
- Download data file from Data Management (Groovy example, downloads directly from DM to OAC).
- Run an Essbase Load rule to load data from file.
Before I start out, it is worth pointing out that I am going to be using the same forecast year sub var, Data Management rule and Essbase data load rule that I covered in the last post.
For the first part of the process, I want to extract the Essbase forecast sub var. This has been created at application level.
To extract using the REST API, a GET request is made to the following URL format:
https://<oac_instance>/essbase/rest/v1/applications/<app_name>/variables/<sub_var_name>
In my case, the URL includes my OAC instance, the application name and the sub var name.
The JSON response includes the name and value of the sub var.
For Data Management I need to convert this to the start and end period.
This is where a script comes into play and can automate the process:
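A minimal sketch of the extract and transform in PowerShell could be as follows; the instance, application and variable names are placeholders, and the mapping of the sub var value to start and end periods (e.g. FY20 to Jan-20 and Dec-20) is an assumption based on my setup.

```powershell
$restUrl = "https://<oac_instance>/essbase/rest/v1"
$appName = "Demo"           # placeholder application name
$subVar  = "ForecastYear"   # placeholder sub var name

# Build the basic authentication header
$user = "<user>"
$pass = "<password>"
$auth = [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes("${user}:${pass}"))
$headers = @{ Authorization = "Basic $auth" }

# Extract the application level sub var
$subVarUrl = "$restUrl/applications/$appName/variables/$subVar"
$response  = Invoke-RestMethod -Uri $subVarUrl -Method Get -Headers $headers

# Transform the value into start/end periods, e.g. FY20 becomes Jan-20 and Dec-20
$year        = $response.value.Substring(2)
$startPeriod = "Jan-$year"
$endPeriod   = "Dec-$year"
Write-Host "Start period: $startPeriod, End period: $endPeriod"
```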
Now that the variable has been extracted and transformed, the Data Management load rule can be executed.
The idea is to execute the rule with the start and end period values derived from the sub var. I have covered this in the past, but to run a rule using the REST API, a POST method is required, and the body of the request should include these values in JSON format.
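For example, the body could look something like the following; the rule name is a placeholder and the mode values will depend on how the rule has been set up.

```json
{
  "jobType": "DATARULE",
  "jobName": "OAC_Forecast",
  "startPeriod": "Jan-20",
  "endPeriod": "Dec-20",
  "importMode": "REPLACE",
  "exportMode": "STORE_DATA",
  "fileName": ""
}
```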
The response includes the job ID (process ID), current job status and a URL to keep checking the status.
The job status can then be checked until it completes.
Time to convert this into a script which will execute the rule, store the response and then keep checking the status until the rule completes.
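A sketch of this could be the following; the EPM instance, credentials and rule name are placeholders, and the numeric status values are assumptions to verify against the Data Management REST API documentation.

```powershell
# Build the basic authentication header for EPM Cloud
$dmUser = "<domain>.<user>"
$dmPass = "<password>"
$dmAuth = [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes("${dmUser}:${dmPass}"))
$epmHeaders = @{ Authorization = "Basic $dmAuth" }
$dmUrl = "https://<epm_instance>/aif/rest/V1/jobs"

# Execute the Data Management rule with the transformed periods
$body = @{
    jobType     = "DATARULE"
    jobName     = "OAC_Forecast"    # placeholder rule name
    startPeriod = $startPeriod
    endPeriod   = $endPeriod
    importMode  = "REPLACE"
    exportMode  = "STORE_DATA"
    fileName    = ""
} | ConvertTo-Json

$job = Invoke-RestMethod -Uri $dmUrl -Method Post -Headers $epmHeaders `
        -Body $body -ContentType "application/json"

# Keep checking the status until the job is no longer running (-1 assumed to mean running)
while ($job.status -eq -1) {
    Start-Sleep -Seconds 10
    $job = Invoke-RestMethod -Uri "$dmUrl/$($job.jobId)" -Method Get -Headers $epmHeaders
}
Write-Host "Rule completed with status: $($job.jobStatus)"
```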
In the Data Management target options of the rule, a static filename has been set.
This means the file is available for download using the defined filename and from a location accessible using the REST API.
A GET request can be made to a URL which includes the filename. This is where my example splits: using a Groovy script to download the file directly to the OAC instance could be an option for user managed OAC instances. Alternatively, for an autonomous instance, which I will cover first, you can download the file to a staging location. An example to do this could be:
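The following is a sketch, assuming the file can be downloaded through the Migration (interop) REST resource; the resource path, API version and filename are assumptions.

```powershell
# Download the exported data file to a staging location
$fileName    = "forecast.dat"       # static filename defined in the rule (placeholder)
$staging     = "C:\Staging\$fileName"
$downloadUrl = "https://<epm_instance>/interop/rest/11.1.2.3.600/applicationsnapshots/$fileName/contents"

Invoke-WebRequest -Uri $downloadUrl -Method Get -Headers $epmHeaders -OutFile $staging
```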
The file will be available to load to OAC.
There are a couple of options available: you could upload the file to the OAC instance and then run a data load rule, or use the data load stream option.
The streaming option allows you to run an Essbase data load rule but stream in the data, removing the requirement to upload the file first.
To stream data using the REST API, you must use a POST method to indicate you want to start a stream data load. The body of the post should include the Essbase load rule name.
The response will include a URL to post the data to.
The data can then be streamed using the returned URL.
The response will include URLs to either stream more data or end the data load rule.
To end the data load, a DELETE method is required to the same URL.
If there were no errors, a successful message should be returned.
If I update the data to include an invalid member and run the data load again, the response will indicate that records were rejected and include the filename containing the errors.
This file will be available in the Essbase database directory.
The error file contains the rejected records along with the reason they were rejected.
This file could be downloaded using the REST API if required.
An example of automating the stream data load method using a script could be:
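The following is only a sketch; the endpoint paths, rule name and the link relation used to pick out the streaming URL are assumptions to verify against the Essbase REST API reference for your version.

```powershell
$dbUrl = "https://<oac_instance>/essbase/rest/v1/applications/Demo/databases/GL"

# Start the stream data load, naming the Essbase load rule in the body;
# a session variable is generated so the same web session stays active
$start = Invoke-WebRequest -Uri "$dbUrl/dataload" -Method Post -Headers $headers `
          -Body (@{ ruleFileName = "DATALOAD" } | ConvertTo-Json) `
          -ContentType "application/json" -SessionVariable essSession

# The response includes the URL to post the data to
$streamUrl = (($start.Content | ConvertFrom-Json).links |
              Where-Object { $_.rel -eq "stream" }).href

# Stream the contents of the downloaded file, reusing the same session
Invoke-WebRequest -Uri $streamUrl -Method Post -Headers $headers `
    -Body (Get-Content $staging -Raw) -ContentType "text/plain" `
    -WebSession $essSession | Out-Null

# End the data load with a DELETE to the same URL
$end = Invoke-WebRequest -Uri $streamUrl -Method Delete -Headers $headers `
        -WebSession $essSession
Write-Host $end.Content
```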
I did have some fun trying to get the script to work as it needs to keep a web session active between the start and end of the streaming. I had to use “Invoke-WebRequest” where I generated a session variable and then used this in subsequent REST calls.
If you are interested in what is happening behind the scenes with the data load streaming method, here is an excerpt from the Essbase application log.
```
[DBNAME: GL] Received Command [StreamDataload] from user [john.goodwin]
[DBNAME: GL] Reading Rules From Rule Object For Database [GL]
[DBNAME: GL] Parallel dataload enabled: [2] block prepare threads, [1] block write threads.
[DBNAME: GL] Data Load Updated [21739] cells
[DBNAME: GL] [EXEC_TIME: 0.82] Data load completed successfully
Clear Active on User [john.goodwin] Instance [1]
```
If you don’t want to go down the streaming route, you could upload the file to the Essbase database directory using the REST API.
A PUT method is required to a URL which includes the name of the file and a parameter indicating whether to overwrite it if it already exists.
This can simply be converted into a script.
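A sketch of the upload, assuming a files resource in the same REST API (the exact path and overwrite parameter should be checked against the documentation):

```powershell
# Upload the data file to the Essbase database directory, overwriting if it exists
$uploadUrl = "https://<oac_instance>/essbase/rest/v1/files/applications/Demo/GL/forecast.dat?overwrite=true"
Invoke-WebRequest -Uri $uploadUrl -Method Put -Headers $headers `
    -InFile $staging -ContentType "application/octet-stream"
```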
After uploading you can then run a load job which I will cover shortly.
Going back to the Groovy option: if available, it could be used to carry out all the steps of the process to move data between EPM Cloud and OAC. As an example, I am going to use it to download the data file from EPM Cloud directly to the Essbase database directory in OAC. In the Groovy script, variables are defined for the EPM Cloud URL for downloading files, the data filename and the location in OAC to download the file to. The user credentials are encoded to create the basic authentication header for the REST call.
A method is then called to make the REST request and download the file.
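A minimal sketch of what the Groovy script could look like is below; the download URL, filename, target path and credentials are all placeholders, and the interop resource is an assumption.

```groovy
// Variables for the EPM Cloud download URL, the data filename and the
// location in OAC to download the file to; all values are placeholders
String epmUrl   = "https://<epm_instance>/interop/rest/11.1.2.3.600/applicationsnapshots"
String fileName = "forecast.dat"
String target   = "<path_to_essbase_db_directory>/" + fileName
String user     = "<domain>.<user>"
String pass     = "<password>"

// Encode the credentials to create the basic authentication header
String auth = (user + ":" + pass).bytes.encodeBase64().toString()

// Make the REST request and stream the file down to the target location
HttpURLConnection conn = (HttpURLConnection) new URL("${epmUrl}/${fileName}/contents").openConnection()
conn.setRequestProperty("Authorization", "Basic " + auth)
if (conn.responseCode == 200) {
    new File(target).withOutputStream { out -> out << conn.inputStream }
    println "Downloaded ${fileName} to ${target}"
} else {
    println "Download failed with response code: ${conn.responseCode}"
}
```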
The script should be saved with an extension of “gsh” and then uploaded to OAC.
The script can be run from the jobs in the UI.
The application/database and script can then be selected, and the Groovy script will be run.
One of the disadvantages at the moment with Groovy in OAC is that parameters cannot yet be passed into the script when running it as a job.
After running the job, an output file will be available that contains the output of the “println” method in the script.
As the script was successful, the output file contains the println message confirming the file was downloaded.
As this blog is all about automation, we can run the Groovy script with the REST API. A POST method is required to the jobs URL; the Groovy job type and the script to run are included in JSON format in the body of the post.
The response includes detailed information about the job and a URL to keep checking the job status.
Once again this can be simply converted into a script to automate the process.
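A sketch of the POST follows; the payload schema is an assumption, so verify the property names against the Essbase REST API reference.

```powershell
$jobsUrl = "https://<oac_instance>/essbase/rest/v1/jobs"

# Submit the Groovy script as a job; application, database and script name
# are placeholders and the parameter names are assumptions
$body = @{
    application = "Demo"
    db          = "GL"
    jobtype     = "groovy"
    parameters  = @{ file = "downloadfile.gsh" }
} | ConvertTo-Json

$job = Invoke-RestMethod -Uri $jobsUrl -Method Post -Headers $headers `
        -Body $body -ContentType "application/json"
```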
With a GET method, the status of the job can be checked with the jobs URL that contains the job ID.
A script can automatically keep checking the job status; this is the same concept as the earlier example of checking the status of a Data Management job.
The file will have been downloaded directly from EPM Cloud to the Essbase database directory in OAC.
Finally, on to running the Essbase load rule to load the data contained in the file.
Using the REST API, it is the same call to the jobs URL; the only difference is that the job type is “dataload” and the parameters define the load rule and the data file.
The information returned in the response is similar to running any type of job.
The status of the job can be checked until it completes.
The beauty of running a data load job compared to streaming data is that the response includes the number of records that were processed and rejected.
This part of the process does not take much effort to convert into a script.
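A sketch of this final step, reusing the jobs URL and headers from the Groovy example (the rule and file names are placeholders, and the parameter names are assumptions):

```powershell
# Run the Essbase load rule as a data load job against the uploaded file
$body = @{
    application = "Demo"
    db          = "GL"
    jobtype     = "dataload"
    parameters  = @{ rule = "DATALOAD"; file = "forecast.dat" }
} | ConvertTo-Json

$job = Invoke-RestMethod -Uri $jobsUrl -Method Post -Headers $headers `
        -Body $body -ContentType "application/json"
```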
Now that the full process has been automated and run, the data from EPM Cloud is available in OAC Essbase.
With scripting you can also automate the opposite process of extracting data from Essbase and then loading to EPM Cloud.
Once a script is in place it can be reused across different data flows by just changing variables.
If you are interested in understanding in more detail about how automation can help, please feel free to get in touch.