Monday, 18 November 2019

EPM Cloud - Integration Agent Part 7 - Drill down to source data

Time for the latest part of the EPM Cloud Integration Agent series, in which I am going to be looking at drill down functionality. FDMEE/Data Management has always provided the option to drill through to source data by building a URL and passing in parameters, for example with source systems like Oracle Financials Cloud, E-Business Suite or SAP. If your source was a relational database, there was not really an option to drill back directly to the data.

With the Integration Agent, it is now possible to drill through to source; this executes a predefined SQL query against the source database and displays the results in Data Integration.

The following diagram is my interpretation of how the drill down functionality operates.

  • Drill to source is initiated in either the Data Integration workbench or the drill through landing page.
  • A REST call is made from the browser to the Integration Agent.
  • The REST call contains the encrypted SQL query, bind variables and connection information.
  • The agent updates the query with the bind variable values and executes it against the source database.
  • The results of the query are converted to JSON and returned to the browser.
  • The JSON is converted into a table format and displayed in a browser window.

To use the drill down functionality you will really need to access the agent over HTTPS, through either a load balancer or web server. The URL the browser makes the REST call to is the Web URL, which is defined in Data Integration. This will not be a problem if you are using the agent in synchronous mode, as it will already have the Web URL defined and be accessible. Please refer to parts one and two of the series for a better understanding.


If you are using asynchronous mode, then the web server or load balancer does not have to be internet facing. The call is made from the browser, so only the machine the browser is running on needs to be able to access it.

For my testing, I installed and configured Apache HTTP Server on an internal network which proxied to the agent. The web server does not have to be located on the same machine as the agent. It may be that you already have an internal load balancer or web server that could be configured to proxy to the agent.
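As a rough illustration, fronting the agent with Apache can be as simple as a reverse proxy definition inside the HTTPS virtual host. The host name, context path and port below are purely hypothetical placeholders and not taken from my actual setup.

# Hypothetical sketch only - requires mod_proxy/mod_proxy_http; replace the path,
# host and port with your own agent details (9090 is the default port used in this series)
ProxyPass        "/epmagent/" "http://agenthost.internal:9090/epmagent/"
ProxyPassReverse "/epmagent/" "http://agenthost.internal:9090/epmagent/"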


You may be asking why you can’t just access the agent directly over HTTP rather than through a web server.


Accessing EPM Cloud through a browser is over HTTPS. If the browser then tries to make a call over HTTP to the agent, this becomes a security issue and most browsers will block the mixed content.


It is possible to disable mixed content blocking, but depending on the browser you will probably encounter further issues. For many reasons, it makes sense to simply front the agent with a web server of some description.

To demonstrate the drill down functionality, I am going to use my example from part three of this series where data is extracted from a source Oracle database.


A database view provides summary General Ledger data and the query to extract the data is defined in Data Integration.


In the Data Integration workbench, there is the option “Drill Through to Source” on each source amount.


For this example, the idea is to drill down to a database table which contains the detailed GL data.


Please note this is just a demo; the source table could contain many more columns than shown.

The first step is to create the query in Data Integration which will return the required data set.


The query also includes bind variables; the values will depend on the row from which the drill is selected.

What you need to be mindful of is the number of records that can be returned. As the results are returned to the browser, you will want to limit them as much as possible. The functionality is not really designed to dump large amounts of data into a browser.
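To give a feel for the kind of query involved, the following sketch mirrors the GL_FACT query and ~NAME~ bind variable syntax that appear in the agent log later in this post; treat it as illustrative rather than a definitive example.

-- Illustrative drill query: return the detailed GL rows for the account and
-- entity passed in as bind variables (the ~NAME~ tokens are bound by the agent)
SELECT ACC_PER, LEDGER, JNL_ID, ACCOUNT, LOB, CC, PRODUCT, IC, BEG_BAL, PER_ACT, END_BAL
FROM   GL_FACT
WHERE  ACCOUNT = ~ACC~
AND    CC = ~ENT~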

Next, the Drill URL is defined. This is either through “Map Dimensions” in Data Integration or “Import Format” in Data Management.


The Drill URL will need to start with #agent when accessing a drill down with the Integration Agent.

The QUERY parameter is the name of the SQL Query. I have included the query which I just created.

The other parameters are the bind variables which I assigned in the query. The values are taken from columns in the data staging table.

The URL I have defined targets the agent and will run the “GL_BALANCES” query. The bind variables will be based on the ACCOUNT and ENTITY values of the selected drill row.
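Purely as a hypothetical illustration of the shape of the URL (the separator and the $ delimiters around the staging table columns are assumptions on my part, so confirm the exact syntax against the documentation), it would look something like:

#agent?QUERY=GL_BALANCES&ACC=$ACCOUNT$&ENT=$ENTITY$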

I can now select to “Drill Through to Source” on a row in the workbench.


This will then make the call to the agent; the bind variables will be assigned to the query, which is then executed. The results will be returned and displayed in a window.


It would be nice if there were an option to download the results.

If you do have any issues with the drill down functionality, you will more than likely see the following default error.


Looking at the agent logs or using developer functionality within the browser will help narrow down the issue.

You may be wondering whether it is possible to drill back from a web form or Smart View and the answer is yes.


Selecting “Drill Through to source” will open the standard landing page.


The gear icon next to the amount can be selected and then “Drill Through to Source” again. A window will be displayed with the results.


I must admit the options to resize the window are not great.

Please note, this functionality does not work from the workbench in Data Management.

When a drill down is executed there will be an entry in the agent window.


The session ID number has a corresponding log entry.


The log contains information similar to running a normal data extract. The information includes the data source, query, bind variables and values.

The difference from a standard extract is the output type: instead of file it is JSON.

16:01:49:500 :: *************** Starting execution for sessionId:27387043 at: 16:01:49:500***************
16:01:49:500 :: Process Extract Request with data:--------------Executor Request Data--------------
 jobType:EXTRACT
 sessionID:27387043
 Data Source Name:EPMAGENT_ORA_DEMO
 credStore:CLOUD
 query:SELECT 
ACC_PER,LEDGER,JNL_ID,ACCOUNT,LOB,CC,
PRODUCT,IC,BEG_BAL,PER_ACT,END_BAL 
FROM GL_FACT 
WHERE ACCOUNT=~ACC~ AND CC=~ENT~
 outputType:JSON
 delimiter:,
--------------Bind Variables--------------
Name: ACC, Value:5800, Type:STRING
Name: ENT, Value:410, Type:STRING
--------------------------------------------
16:01:49:500 :: --------------- Executing Custom Event :BefExtract ---------------
16:01:49:500 :: Script: E:\EPMAgent\appdata\scripts\BefExtract.py Not found.
16:01:49:532 :: Adding String Bind Variable: ACC with value: 5800
16:01:49:532 :: Adding String Bind Variable: ENT with value: 410
16:01:49:532 :: --------------- Completed execution for sessionId:27387043 ---------------
16:01:49:532 :: --------------- Executing Custom Event :AftExtract ---------------
16:01:49:532 :: Script: E:\EPMAgent\appdata\scripts\AftExtract.py Not found.
16:01:49:532 :: --------------- Executing Custom Event :BefUpload ---------------
16:01:49:532 :: Script: E:\EPMAgent\appdata\scripts\BefUpload.py Not found.
16:01:49:532 :: --------------- Executing Custom Event :AftUpload ---------------
16:01:49:532 :: Script: E:\EPMAgent\appdata\scripts\AftUpload.py Not found.
16:01:49:937 :: *************** End execution for sessionId:27387043 ***************

The log also highlights that the event scripts are executed. I will provide an example with an event script shortly.

I believe the log is appended to each time a drill is executed. A new session log will be created when the agent is restarted.

In my previous example, all periods are being returned in the query results. There are different options available to filter down to a single period and I will now go through a couple of them.

As columns from the staging data table can be included in the Drill URL, it is possible to pass in the period key that the data has been loaded to.


An example of the format in which the PERIODKEY is returned:

2019-11-30 00:00:00.0

The format for the period in the source database table is MM-YY. By using a substring function I can convert the period key into the correct format.
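As a hedged sketch of how that could look in the drill query, assuming a PER bind variable carrying the period key and assuming a bind variable can be referenced more than once in an expression:

-- Illustrative only: convert a period key such as 2019-11-30 00:00:00.0 into the
-- MM-YY format stored in the source table (11-19 in this case)
SELECT ACC_PER, LEDGER, JNL_ID, ACCOUNT, CC, END_BAL
FROM   GL_FACT
WHERE  ACCOUNT = ~ACC~
AND    CC = ~ENT~
AND    ACC_PER = SUBSTR(~PER~, 6, 2) || '-' || SUBSTR(~PER~, 3, 2)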


This time the results returned will only be for the period in the POV.


Another option would be to override the SQL in the before extract event script.

The following script generates the current month and year in the correct format. The original query is then updated to filter on the period.
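A minimal sketch of what such a BefExtract.py script could look like is shown below; note that the agentContext["QUERY"] key used to read and override the SQL is an assumption here, so confirm the exact context variable names against the agent documentation.

# Minimal sketch of a BefExtract.py event script (Jython); agentContext["QUERY"] is
# assumed to hold the drill query - check the agent documentation for the exact keys
from datetime import datetime

# Build the current period in the MM-YY format used by the source table, e.g. 11-19
period = datetime.now().strftime('%m-%y')

# Append a period filter to the original query before the agent executes it
originalQuery = agentContext["QUERY"]
agentContext["QUERY"] = originalQuery + " AND ACC_PER = '" + period + "'"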


Yes, this only works if you always want to return data for the current month and year, but it is just an example of how to override the query. The script can be written to suit whatever the requirements are.

The output from the agent displays the original query and the executed query which has the period included.


Another option could be to include the period when the data is extracted and loaded to EPM Cloud, loading it into an attribute column.

Instead of loading the period to an attribute column I am going to change it a little. In the next example I want to drill down to a journal table by journal ID. The query for the data extract will include the journal ID which I will load to an attribute.

To do this I am going to update the integration which extracts summary data using a database view.

A new column for the journal ID is added to the data source application.


In the integration dimension mappings, journal ID is mapped to an attribute.


The query to extract summary data is updated to include the journal ID column.


When the integration is run and the data has been extracted and loaded to EPM Cloud, the journal ID is loaded to the attribute column.


I couldn’t get the attribute column to display in the Data Integration workbench, so I have provided the above example using Data Management.

The next step is to set up the drill functionality to display records from a table which has journal line information.


A new Query is created which returns journal line records. Some of the columns have been aliased to return more meaningful names.


The above query includes a bind variable to filter by journal ID.
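To give a flavour of what such a query might look like, here is a sketch; the table and column names are made up for illustration, with the aliases just there to return friendlier column headings.

-- Hypothetical journal line query - table and column names are for illustration only
SELECT JNL_ID   AS "Journal ID",
       JNL_LINE AS "Journal Line",
       ACCOUNT  AS "Account",
       CC       AS "Cost Centre",
       AMOUNT   AS "Amount"
FROM   GL_JOURNAL_LINES
WHERE  JNL_ID = ~JNL_ID~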

The Drill URL is updated to include the query name, bind variable and attribute column name.


Now the drill down can be tested.


The journal line records are returned and displayed in a window.


If drilling back from a web form or Smart View, the landing page provides the ability to display the journal ID column.


I think that covers everything I wanted to. Until the next instalment…

Monday, 4 November 2019

EPM Cloud - Integration Agent Part 6 - Multiple agents and cluster assignments

Time for the next instalment in the series looking at the EPM Cloud Integration Agent. In this post I will look at running multiple agents and then move on to cluster assignments. I am going to assume that you have been following my previous posts on the agent and you understand the concept of agents and clusters. If not, I recommend reading through the first two parts.

Up to now, my examples have been using a single agent which has been assigned to either an asynchronous or synchronous cluster. To provide high availability and load balancing options you may wish to run multiple agents which can be assigned to one or more clusters.  The agents could coexist on a single server or be spread across numerous servers. Depending on where the source data resides, the agents and clusters could be deployed across different geographical locations.

In this first example I am going to run two agents on one server and another two on a server in a different location. They will all be assigned to a single asynchronous cluster.

For simplicity, I am going to name the agents from AGENT1 through to AGENT4.

An asynchronous cluster has been created in Data Integration which the agents will be registered to.


The first agent is configured using an initialisation file and assigned to the above cluster. Please refer to part two of this series for more details on all the parameters in the ini file.


The agent is started, registers with EPM Cloud and starts polling the job queue. Notice the agent is running on machine 1.


For the second agent, the ini file is copied to a new name. The agent name is set to AGENT2 and, because the first agent will be occupying port 9090, the port for the second agent is updated to 9095.
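As an indicative fragment, only the values that change from the first agent are shown below; the cluster name is just an example, so take the exact parameter names and values from your own ini file.

AGENT_NAME=AGENT2
CLUSTER=EPMAGENT_ASYNC
PORT=9095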


The agent is started and registers successfully, then starts polling.


Within Data Integration you can see that both the agents have been registered.


One slight difference is that the agent uses a 24 hour clock format while Data Integration displays a 12 hour format.

On to the second server. The third agent is set up in a similar way to the first agent; the only difference is the agent name.


The fourth agent is set up, once again the port has been updated to avoid a clash.


In Data Integration, the four agents are now registered across two servers.


When using asynchronous mode, the first agent to poll EPM Cloud after an integration has been initiated will be the agent that runs the process.

To demonstrate this, the first agent was started with a 60 second polling interval.


The second agent was started around 30 seconds later, again with a 60 second polling interval.


An integration was started.


AGENT1 makes the first poll after the integration has been initiated, so this agent runs the process.


AGENT2 makes a poll next and, as there have been no additional integrations started, it does nothing until the next poll interval.


If another integration had been started, then the next agent to poll would pick it up and start the integration process.

In Data Integration, the queue shows that the first agent processed the integration.


If frequent integrations are being executed, then running multiple agents helps spread out the processing load between the agents. Also, if any of the agents are down or need restarting or if one of the servers is out of action, then it does not impact the integrations being run from the Cloud.

Moving on to running multiple agents in synchronous mode.

In the next example I am going to run three agents on the same server, but these could just as easily be spread across multiple servers and locations.

A cluster has been created in Data Integration which operates in synchronous mode.


The first agent is configured using the same initialisation file as the previous example, but set to register with the synchronous cluster.


The second agent is set to register in sync mode and run on a different port.


The third agent, still running on the same machine but on a free port, registers in sync mode.


The registered agents can be viewed in Data Integration against the synchronous cluster.


As this is synchronous mode, the agents must be accessible through an internet facing URL. Just like in part two of this series, I configured a load balancer which was available over HTTPS through an internet facing URL. The load balancer was configured to only allow HTTPS traffic from Oracle Cloud. Depending on the agent URL, the load balancer would proxy through to the correct agent. The machine running the agents only accepts traffic from the load balancer on the designated ports.

In synchronous mode, agents are assigned jobs in a round robin fashion, using the order in which they were registered.

To demonstrate this, I registered AGENT3 first, then AGENT2 and then finally AGENT1.
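Conceptually, and purely as an illustration of the round robin ordering rather than how EPM Cloud implements it, the assignment plays out like this:

# Conceptual illustration only of round robin assignment in registration order
from itertools import cycle

agents = cycle(["AGENT3", "AGENT2", "AGENT1"])  # the order the agents were registered
for job in range(1, 5):
    print("Integration %d assigned to %s" % (job, next(agents)))
# Integration 1 -> AGENT3, 2 -> AGENT2, 3 -> AGENT1, 4 -> back to AGENT3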

An integration was then run. In the process log you can see the cluster and mode are retrieved, and the next agent in the queue is assigned.

INFO  [AIF]: Retrieved EPM Cluster name:EPMAGENT_SYNC
INFO  [AIF]: Retrieved Cluster Mode:SYNC
INFO  [AIF]: Calling agent extract SYNC mode: BEGIN
INFO  [AIF]: Getting agent name from queue...
INFO  [AIF]: Cluster Name:EPMAGENT_SYNC
INFO  [AIF]: Retrieved EPM Agent name:AGENT3

As AGENT3 was the first agent to be registered, the integration is run against it.


The integration is run again; the process log again shows which agent is going to be assigned to process the integration.

INFO  [AIF]: Getting agent name from queue...
INFO  [AIF]: Cluster Name:EPMAGENT_SYNC
INFO  [AIF]: Retrieved EPM Agent name:AGENT2

As expected, the next in line is AGENT2.


Three agents have been registered so let us run the integration again and check the process log.

INFO  [AIF]: Getting agent name from queue...
INFO  [AIF]: Cluster Name:EPMAGENT_SYNC
INFO  [AIF]: Retrieved EPM Agent name:AGENT1

No shock that AGENT1 is assigned the integration.


In Data Integration you can see the queue order for the processes that have just been run.


To complete the picture, the integration is run one final time and it is back to AGENT3 processing the integration.


That covers multiple agents running in both asynchronous and synchronous modes.

Time to move on to cluster assignments, which provide the ability to assign integrations to different clusters. They can be assigned at either Integration (data rule), Location or Target application level.

As an example, I have created two clusters. One cluster will be designated to run integrations where the source is a relational database; the other will be for cloud data sources.


It doesn’t matter whether the clusters are synchronous or asynchronous. One or more agents can be registered to each cluster, and they will operate in the same way as in my previous examples.

I am going to keep it as simple as possible and there will be one agent registered to each cluster. The agents will coexist on a single machine. The clusters have been defined in asynchronous mode.

The cloud agent is configured with the initialisation file to point to the cloud cluster.


The agent is started and registers successfully.


The SQL agent is configured to register with the SQL cluster.


The agent is started and registers successfully.


In Data Integration, the cloud agent is shown to be registered to the cloud cluster.


The SQL agent is shown to be registered to the SQL cluster.


Within the agent cluster page there is an assignments tab.


The options available are to assign a type and value. Assignments can also be added or deleted.


The type defines the level of integration to assign to the cluster. This can be set at either Application, Integration (data rule) or Location level.

For this first assignment I am going to set it at Location level. Once this has been selected, the available locations can be selected from the entity dropdown.


I selected “EPM_AGENT_ORA” which is an integration with a source Oracle database. The integration was covered in part three of the series.

I added another assignment at Integration (data rule) level for an integration which extracts workforce data from a source SQL Server database. This integration was covered in part four of the series.


This means that if any integrations covered by the defined location or integration assignments are run, they will be processed by the agents that have been registered to the SQL cluster.

For the cloud cluster I added an assignment at location level.


This location includes an integration which extracts exchange rates using an API and was covered in part five.

Now that the assignments have been applied, I ran the integration to extract exchange rates.


In the process log you can see the cluster assignments are retrieved and the integration is assigned to the cloud cluster.

22:12:26,732 INFO  [AIF]: Getting agent name from assignments...
22:12:26,732 INFO  [AIF]: Getting agent from cluster assignments using Rule Id:77 Location Id:77 Source AppId:68
22:12:26,733 INFO  [AIF]: Retrieved EPM Cluster name:CLOUD_CLUSTER
22:12:26,734 INFO  [AIF]: Retrieved Cluster Mode:ASYNC
22:12:26,734 INFO  [AIF]: Calling agent extract ASYNC mode: BEGIN

The queue also shows that the integration has been assigned to the cloud cluster. As the cluster is set up in asynchronous mode, no agent is shown until an agent from the cluster polls EPM Cloud.

If the cluster was defined in synchronous mode, an agent would be assigned and the integration would be run directly against that agent.


The SQL agent makes a poll to EPM Cloud first. As this agent is not registered to the cloud cluster, no job details are returned from the Cloud and no action will be taken.


As the cloud agent is registered to the cloud cluster, the next time it makes a poll to the queue, job details are returned, and the agent executes the integration process.


Next, an integration is run which extracts data from a source Oracle database. The location that the integration belongs to has been assigned to the SQL cluster.


The queue shows the integration has been assigned to the SQL cluster.


The SQL agent is registered to the SQL cluster, so the next time the agent polls the queue, job details are returned and the agent processes the integration.


Now on to running the integration which extracts workforce data from a source SQL Server database.


This integration has been assigned to the SQL cluster, so it appears in the queue.


The next time the SQL agent makes a poll, job details will be returned, and the agent will process the integration.


Assignment types have an order of precedence when integrations are initiated: integration takes precedence over location, and location takes precedence over application.
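A conceptual sketch of that precedence is shown below; it is just an illustration of the resolution order, not how EPM Cloud actually implements it, and the integration, application and SQL cluster names are made up (EPM_AGENT_ORA and CLOUD_CLUSTER come from the earlier examples).

# Conceptual illustration of assignment precedence: integration, then location, then application
def resolve_cluster(assignments, integration, location, application):
    for level, value in (("INTEGRATION", integration),
                         ("LOCATION", location),
                         ("APPLICATION", application)):
        cluster = assignments.get((level, value))
        if cluster is not None:
            return cluster
    return None  # no matching assignment - the process fails, as shown later

# The integration itself is assigned to the cloud cluster while its location is
# assigned to the SQL cluster, so the integration-level assignment wins
assignments = {("INTEGRATION", "ORACLE_GL_DATA"): "CLOUD_CLUSTER",
               ("LOCATION", "EPM_AGENT_ORA"): "SQL_CLUSTER"}
print(resolve_cluster(assignments, "ORACLE_GL_DATA", "EPM_AGENT_ORA", "ORA_GL_APP"))  # CLOUD_CLUSTER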

To demonstrate this, I added the integration that extracts data from a source Oracle database to the cloud cluster.


This means the integration is assigned to the cloud cluster and the location that the integration is part of is assigned to the SQL cluster.

The integration was run again.


This time the integration is added to the cloud cluster queue. This is because integration takes precedence over location.


The cloud agent then runs the integration process.


So what happens if an integration is run that does not fall under any of the assignments?

I removed the workforce extract integration from the SQL cluster.


This means the integration is not covered by any of the assignments.

When the integration is run it instantly fails.


Checking the process log highlights that no cluster assignments were found, so the process fails.

INFO  [AIF]: Getting agent name from assignments...
INFO  [AIF]: Getting agent from cluster assignments using Rule Id:75 Location Id:76 Source AppId:62
INFO  [AIF]: Retrieved EPM Cluster name:null
ERROR [AIF]: Cluster not found for process id:1176
ERROR [AIF]: Unexpected error in importData:Cluster not found for process id: 1176

I think that should cover enough detail about running multiple agents and cluster assignments.

Coming up next, drill down to source data.