Monday 14 January 2019

EPM Cloud - Journals and Data Management - Part 4

On to the final instalment of the series looking at Journals and Data Management. In summary, the previous posts covered:
  1. Overview of Journals in FCCS and the workflow process.
  2. Loading Journals through the simplified interface and rules enforcing the process.
  3. Loading Journals through Data Management using an Excel template.
In this final post I am going to go through the other method available in Data Management for loading journals, which is to use a standard text file. As most of the groundwork was covered in the previous posts and the concept is just like loading other files through Data Management, this post should be relatively quick and straightforward.

I am going to start out with a simple CSV file which is based on the journal I loaded previously. I have included a header in the file so it is clear what is going to be loaded; the header will be automatically rejected during the import.


The file could include as many lines as required; I have only included a few lines for demo purposes.
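To give an idea of the layout, a file along these lines would work; the member names and amounts are hypothetical placeholders rather than the exact values from my file:

Account,Entity,Amount
Sales,Entity_A,-150
Receivables,Entity_A,150

Note the amounts net to zero so that the journal lines balance.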

Not all dimensions are contained within the file, as these will be handled with data load mappings. Some source members in the file also differ from the target members, so these will be dealt with using standard mappings.

It is worth pointing out again an important note from the documentation which needs to be considered when loading journals through Data Management:

“When loading journals to an Oracle Financial Consolidation and Close Cloud target, consider that Data Management does not determine the account types or select the credits/debits. All positive numbers are loaded as credits and all negative numbers are loaded as debits. If you need to designate other credit or debit signs for your account type, use the change sign feature in Data Load Mappings or another customized method to handle credits/debits changes for your journal loads.”
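Taking the hypothetical rows above, the 150 would load as a credit and the -150 as a debit, regardless of the account types involved; if that is not the desired behaviour for a given account, a change sign mapping can flip it.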

I am going to start out with the same import format that I created for loading the Excel journal file; it is possible to use a single import format for both.


When loading the Excel file there was no need to populate the import format mappings because they are driven from within the Excel file. This time I am going to be loading a CSV file, so the mappings need to be defined to map the source columns in the file to the target dimensions.


Where there is a match between the source file and the target, the source columns were mapped to their corresponding field numbers; the remaining mappings were left blank as these will be handled with data load mappings.
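As a rough sketch using the hypothetical file above, the populated mappings simply pair each source column name with its position in the file:

Source Column    Field Number    Target Dimension
Account          1               Account
Entity           2               Entity
Amount           3               Amount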

I am going to use the same location that was created for loading the Excel file; most of the values in the location are the ones created by default.


On to the load rule: the target cube was set, and the source file was uploaded and selected as the file name.


The important part is to set the load type to “Journal” in the target options; this distinguishes a standard data load from a journal load.


The journal status follows the same rules that I have previously been through; remember that:

“If the Journal Workflow is enabled, you can only import Working journals. If the Journal Workflow is disabled, you can import Working and Posted journals.”

For data load mappings, where members existed in the source file these were explicitly mapped; for the remaining ones, like mappings were defined, for example for the data source dimension:
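As an illustration, a catch-all like mapping for the data source dimension would be along these lines (the rule name is my own):

Rule Name: ALL_DS    Source Value: *    Target Value: FCCS_Journal Input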


Just as when loading Journals through the simplified interface or the Excel template, it is possible to use the target member name or alias in the mapping.

It doesn’t actually matter what the data source is mapped to; the following would still be valid.


This is because, by default, journals are always created against the seeded “FCCS_Journal Input” member. It is probably best, though, to define the mapping to the correct member, as at least it is then clear which member the journal is being created against.

Let us run an import through the workbench.


The file was successfully imported and all dimensions correctly mapped.

This means an export can be run to push the data out of Data Management and create a new journal in the target application.


In the process log there is an entry to confirm that the file is being generated in the same way as with the Excel template method.

DEBUG [AIF]: Created journal load data file: /u03/inbox/outbox/FCCS_212.jlf

If the journal file is downloaded from Data Management and opened, you can see it is based on the same format as when loading journals through the simplified interface or the Excel template.


Now, as no label has been defined for the journal, one is automatically generated based on the process ID.


The journal label format will be “JL” + process ID; for example, the process ID of 212 from the log entry above produces the label JL212.

In the target application, the journal has been created with a label following the above logic.


Opening the journal confirms the journal lines have been generated to match the source file after the mappings have been applied.


What if you wanted to create your own label for the journal? Well, this can be achieved by adding a “Journal Label Row” in the import format.


An expression can be defined which will become the journal label.
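For example, entering an expression such as DM_JNL (a made-up label) against the journal label row would assign that label to every record that is imported.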


In the workbench you can show the “Journal ID” column which is the journal label.


After mappings have been applied, each row of data will include the label. This is important as I will demonstrate shortly what happens if it is not included in each row.

After an export, the journal has been created with the same label that was defined in the import format.


At the moment, the journal has not been assigned to a group which is perfectly fine. If you want to assign a journal to a group, the journal label needs to follow the same format that I described in the previous post.

The format to include a label and a group is:

LABEL=<Label name>#GROUP=<Group name>
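So, using made-up names, a journal labelled DM_JNL and assigned to a group called Adjustments would be defined as:

LABEL=DM_JNL#GROUP=Adjustments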


Please note that you must specify a label if you are going to include a group; otherwise an error will be generated when creating the output journal file.

In the workbench, you can view the label and group by showing the “Journal ID” column.


Once exported, the journal includes the label and has been assigned to a group.


This method might be acceptable, but it is not very dynamic as it requires updating the import format every time you want to change the label or group.

To offer more flexibility it is possible to include the label and group in the source file.

I updated the file to include two columns for label and group. I have not defined them in the combined format, as that is not the most user-friendly approach; I want Data Management to produce the label and group output in the right format.


I have only assigned the label and group to one row in the file as I don’t want to have to keep duplicating the same values.
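Continuing with the hypothetical file from earlier, the updated layout would be something like this, with only the first row populated:

Account,Entity,Amount,Label,Group
Sales,Entity_A,-150,DM_JNL,Adjustments
Receivables,Entity_A,150,,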

As I have kept the label and group separate, and there is no group dimension in the target, I added a lookup dimension in the target application dimension details.


By doing this I will be able to load the source file to that dimension, then apply a data load mapping to take the group value and apply it to the label in the correct format. Lookup dimensions are not loaded to the target, so once the mapping logic has been applied the dimension can be forgotten about.

The source file label and group columns can now be mapped in the import format.


If the source file is imported again, you can view the label, which is assigned to the “Journal ID” column, with the group kept separate.


Only one record currently contains the label and group. To be able to create a journal, we now need to join the label and group in the correct format; this also needs to be applied to each row of data.

To achieve this logic, I am going to add a SQL mapping to the group dimension, as there is no mapping dimension available for journal ID.


First, I am going to demonstrate what happens if I only apply the label and group logic to a single row of data.

A SQL script is created; this script maps the target group (data column UD10) value to the source group value. This is because the mapping is against the group dimension, so a mapping is required even though the target group value is not important.

The script also updates the journal ID column based on the current value in the journal ID column, concatenating it with the source group value; the generated value is based on the required format of:

LABEL=<Label name>#GROUP=<Group name>
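As a rough sketch of what such a script could look like (this is my own illustration, not the exact script: it assumes the #SQL mapping forms part of the SET clause of an update against the TDATASEG staging table, that the lookup group sits in the UD10 source column with UD10X as its target, and that the workbench “Journal ID” is held in a JOURNALID column):

UD10X = UD10
,JOURNALID = 'LABEL=' || JOURNALID || '#GROUP=' || UD10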


If I import and map the data again, you can see there is an issue with this mapping.


As only one record of data contains the label and group, the record with blank values is still being generated in the label and group format. This will cause an error when generating the journal file.

This would not have been a problem if the source had both records populated, but I wanted to avoid having to do that.

If I update the SQL mapping to only update the journal ID if there is a source value, with the following:
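One possible shape for that, under the same assumptions as the earlier sketch, is to wrap the concatenation in a CASE expression so that rows with no source group value are left untouched:

UD10X = UD10
,JOURNALID = CASE WHEN UD10 IS NOT NULL
             THEN 'LABEL=' || JOURNALID || '#GROUP=' || UD10
             ELSE JOURNALID END

In an Oracle database an empty string is treated as NULL, so the IS NOT NULL check should also cover blank values.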


This now creates the journal ID column in the correct format but for only one record.


An export will be successful.


The journal file that is generated, though, is not what I am after.


Multiple journals have been defined within the file. A journal has been generated against the row of data which has no label and group; as there is no label, it has been given a label in the process ID format.

A second journal has also been generated in the file with the correct label, group and lines.

Within the application two journals have been created.


This is definitely incorrect as the intention was to create a single journal, but I wanted to demonstrate the reason why each record should include the label.

I updated the SQL mapping script so that it selects the record of data where the label exists and applies that value to the journal ID of every row.
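Again this is my own sketch under the same assumptions: a correlated subquery picks up the populated label and group from the current load and stamps the combined value on every row.

UD10X = UD10
,JOURNALID = (SELECT 'LABEL=' || MAX(T2.JOURNALID) || '#GROUP=' || MAX(T2.UD10)
              FROM TDATASEG T2
              WHERE T2.LOADID = TDATASEG.LOADID
              AND T2.UD10 IS NOT NULL)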


In the workbench, the label and group have been applied to each record in the correct format.


An export now creates a single journal with the desired label and assigns it to the required group.


If you do want to create multiple journals, then just make sure that the journal ID is assigned to the correct rows of data.
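For example, with made-up rows, mapped data along these lines would produce two journals of two lines each:

Journal ID                       Account        Amount
LABEL=JNL_1#GROUP=Adjustments    Sales          -150
LABEL=JNL_1#GROUP=Adjustments    Receivables     150
LABEL=JNL_2#GROUP=Adjustments    Costs           200
LABEL=JNL_2#GROUP=Adjustments    Payables       -200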


The above example creates two journals each with two item lines.


Up to now the examples have not included descriptions. Descriptions can be applied to the individual lines in a journal or at the journal level.

Let us go through an example of including descriptions on each line of a journal. I updated the source file to include a column for descriptions.
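Keeping with the hypothetical file used throughout, the extra column would look something like:

Account,Entity,Amount,Label,Group,Description
Sales,Entity_A,-150,DM_JNL,Adjustments,Monthly adjustment
Receivables,Entity_A,150,,,Offset for monthly adjustment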


A description row is then added to the import format.


The target is created as “Description 1”; this will be used for the individual journal line items.

The source column is mapped to the field number.


In the workbench it is possible to show the description column.


After a successful export from Data Management, the journal lines will be created with the description from the source file.


If you want to include a description at the journal level, add another description row in the import format.



This time the target will be created as “Description 2”; if you want to create a description at journal level, it will always need to be against “Description 2”.

If you don’t want to include descriptions for each line of the journal, you can delete “Description 1” but still include “Description 2” for journal-level descriptions.

For simplicity I have created the journal description in the import format expression. It can be generated from the source file by the same methods I have already gone through.
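For example, an expression such as Journal loaded from Data Management (my own made-up text) against the “Description 2” row would apply that description to each record, and in turn to the journal.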


In the workbench the “Description-2” column can be displayed. The description will be applied to each data record.


The output journal file created during an export now has the description populated in the “!DESC” section.


The journal created in the target application includes the description.


So that covers as much as I feel I need to in this post and concludes the four-part look at journals and Data Management. I hope you have found it useful.
