Monday, 14 January 2019

EPM Cloud - Journals and Data Management - Part 4

On to the final instalment of the series looking at Journals and Data Management. In summary, the previous posts covered:
  1. Overview of Journals in FCCS and the workflow process.
  2. Loading Journals through the simplified interface and rules enforcing the process.
  3. Loading Journals through Data Management using an Excel template.
In this final post I am going to go through the other method available in Data Management for loading journals, which uses a standard text file. As most of the groundwork has been covered in the previous posts and the concept is just like loading other files through Data Management, this post should be relatively quick and straightforward.

I am going to start out with a simple CSV file which is based on the journal I loaded previously. I have included a header in the file so it is clear what is going to be loaded; the header row will be automatically rejected during the import.
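To give an idea of the layout, the file contains something along these lines, based on the journal lines loaded in the previous post (the column headers are illustrative and just need to line up with the import format):

Account,Entity,ICP,Movement,Product,Amount
Operating Sales,13,No,Net Income,Smart Televisions,2000
Returns and Allowances,13,No,Net Income,Smart Televisions,-2000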


The file could include as many lines as required; I have only included a few lines for demo purposes.

Not all dimensions are contained within the file, as the missing ones will be handled with data load mappings. Not all source members in the file match the target members either, so these will also be dealt with using standard mappings.

It is worth pointing out again an important note from the documentation which needs to be considered when loading journals through Data Management:

“When loading journals to an Oracle Financial Consolidation and Close Cloud target, consider that Data Management does not determine the account types or select the credits/debits. All positive numbers are loaded as credits and all negative numbers are loaded as debits. If you need to designate other credit or debit signs for your account type, use the change sign feature in Data Load Mappings or another customized method to handle credits/debits changes for your journal loads.”

I am going to start out with the same import format that I created for loading the Excel journal file; it is possible to use a single import format for both.


When loading the Excel file there was no need to populate the import format mappings because they are driven from within the Excel file. This time I am going to be loading a CSV file, so the mappings need to be defined to map the source columns in the file to the target dimensions.


Where there is a match between the source file and the target, the source column names were defined with their corresponding field numbers; the remaining mappings were left blank as these will be handled with data load mappings.

I am going to use the same location that was created for loading the Excel file; most of the values in the location are the ones created by default.


On to the load rule: the target cube was set, and the source file was uploaded and selected as the file name.


The important part is to set the load type to “Journal” in the target options; this distinguishes a journal load from a standard data load.


The journal status follows the same rules that I have previously been through, remember that:

“If the Journal Workflow is enabled, you can only import Working journals. If the Journal Workflow is disabled, you can import Working and Posted journals.”

For data load mappings, where members existed in the source file these were explicitly mapped; the remaining dimensions were defined with like mappings, for example the data source dimension:


Just as when loading Journals through the simplified interface or the Excel template, it is possible to use the target member name or alias in the mapping.

It doesn’t actually matter which member the data source is mapped to; the following would still be valid.


This is because, by default, journals are always created against the seeded “FCCS_Journal Input” member. Though it is probably best to define the mapping to the correct member, as it is at least clear which member the journal is being created against.

Let us run an import through the workbench.


The file was successfully imported and all dimensions correctly mapped.

This means an export can be run to export the data from Data Management and create a new journal in the target application.


In the process log there is an entry to confirm that the file is being generated in the same way as with the Excel template method.

DEBUG [AIF]: Created journal load data file: /u03/inbox/outbox/FCCS_212.jlf

If the journal file is downloaded from Data Management and opened, you can see it is in the same format as when loading journals through the simplified interface or the Excel template.


Now, as no label has been defined for the journal, one is automatically generated based on the process ID.


The journal label format will be “JL” + Process ID.
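Putting that together with the process ID seen in the log above (212), the journal section of the downloaded file will contain a line along the lines of the following; the !JOURNAL=<label>,<group>,<status> pattern can be seen in the error messages in the previous post:

!JOURNAL=JL212,,Working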

In the target application, the journal has been created with a label following the above logic.


Opening the journal confirms the journal lines have been generated to match the source file after mappings have been applied.


What if you wanted to create your own label for the journal? Well, this can be achieved by adding a “Journal Label Row” in the import format.


An expression can be defined which will be the name of the journal label.


In the workbench you can show the “Journal ID” column, which is the journal label.


After mappings have been applied, each row of data will include the label. This is important as I will demonstrate shortly what happens if it is not included in each row.

After an export, the journal has been created with the same label that was defined in the import format.


At the moment, the journal has not been assigned to a group, which is perfectly fine. If you want to assign a journal to a group, the journal label needs to follow the same format that I described in the previous post.

The format to include a label and a group is:

LABEL=<Label name>#GROUP=<Group name>
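For example, with a hypothetical label and group name:

LABEL=Sales Adjustment#GROUP=Sales Journals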


Please note that you must specify a label if you are going to include a group, otherwise an error will be generated when creating the output journal file.

In the workbench, you can view the label and group by showing the “Journal ID” column.


Once exported, the journal includes the label and has been assigned to a group.


This method might be acceptable, but it is not very dynamic as it requires updating the import format every time you want to change the label or group.

To offer more flexibility it is possible to include the label and group in the source file.

I updated the file to include two columns for label and group. I have not defined them in the combined format, as that is not very user friendly; I want Data Management to produce the label and group output in the right format.
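Continuing the illustrative file layout from earlier, the two new columns look something like this (the label and group values are only examples):

Account,Entity,ICP,Movement,Product,Amount,Label,Group
Operating Sales,13,No,Net Income,Smart Televisions,2000,Sales Adjustment,Sales Journals
Returns and Allowances,13,No,Net Income,Smart Televisions,-2000,,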


I have only assigned the label and group to one row in the file as I don’t want to have to keep duplicating the same values.

As I have kept the label and group separate and there is no group dimension in the target, I added a lookup dimension in the target application dimension details.


By doing this I will be able to load the source file group column to that dimension, then apply a data load mapping to take the group value and append it to the label in the correct format. Lookup dimensions are not loaded to the target, so once the mapping logic has been applied the dimension can be forgotten about.

The source file label and group columns can now be mapped in the import format.


If the source file is imported again, you can view the label, which is assigned to the “Journal ID” column, while the group is held separately.


Only one record currently contains the label and group. To be able to create a journal, we now need to join the label and group in the correct format; this also needs to be applied to each row of data.

To achieve this logic, I am going to add a SQL mapping to the group dimension as there is no mapping dimension available for journal ID.


First, I am going to demonstrate what happens if I only apply the label and group logic to a single row of data.

A SQL script is created; this script maps the target group (data column UD10) value to the same as the source group value. This is because the mapping is against the group dimension, so a mapping is required even though the target group value is not important.

The script also updates the journal ID column, taking the current value in the journal ID column and concatenating it with the source group value; the generated value is based on the required format of:

LABEL=<Label name>#GROUP=<Group name>
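As a minimal sketch, the mapping script looks something like the following. The source group column UD10 is mentioned above; the target column name (UD10X) and the journal label being held in the JOURNAL_ID staging column are assumptions, so treat this as illustrative rather than the definitive script:

-- map the target group to the source group; the target value itself is unimportant
UD10X = UD10,
-- build the journal ID in the LABEL=<label>#GROUP=<group> format
JOURNAL_ID = 'LABEL=' || JOURNAL_ID || '#GROUP=' || UD10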


If I import and map the data again you can see there is an issue with this mapping.


As only one record of data contains the label and group, the record with a blank label and group is still being wrapped in the label and group format. This will cause an error when generating the journal file.

This would not have been a problem if the source had both records populated but I wanted to avoid having to do that.

If I update the SQL mapping to only update the journal ID if there is a source value, with something like the following:
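Again only as a sketch, with the same assumed column names (in Oracle a blank string is NULL, so the check also covers empty group values):

UD10X = UD10,
-- only wrap the journal ID when a source group value exists
JOURNAL_ID = CASE
               WHEN UD10 IS NOT NULL
                 THEN 'LABEL=' || JOURNAL_ID || '#GROUP=' || UD10
               ELSE JOURNAL_ID
             END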


This now creates the journal ID column in the correct format but for only one record.


An export will be successful.


Though the journal file that is generated is not what I am after.


Multiple journals have been defined within the file. A journal has been generated for the row of data which has no label and group; as there is no label, it has been generated in the process ID format.

A second journal has also been generated in the file with the correct label, group and lines.

Within the application two journals have been created.


This is definitely incorrect as the intention was to create a single journal, but I wanted to demonstrate the reason why each record should include the label.

I updated the SQL mapping script so that it will select the record of data where the label exists and apply that to each journal ID row.
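As a sketch of the idea, a correlated subquery can pick up the single populated label/group pair and apply it to every row; the staging table name (TDATASEG) and the LOADID correlation are assumptions here, so check the process log for the table your version actually works against:

UD10X = UD10,
-- apply the one populated label/group pair to every row of this load
JOURNAL_ID = (SELECT MAX('LABEL=' || T.JOURNAL_ID || '#GROUP=' || T.UD10)
                FROM TDATASEG T
               WHERE T.LOADID = TDATASEG.LOADID
                 AND T.UD10 IS NOT NULL)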


In the workbench, the label and group have been applied to each record in the correct format.


An export now creates a single journal with the desired label and assigns it to the required group.


If you do want to create multiple journals, then just make sure that the journal ID is assigned to the correct rows of data.


The above example creates two journals, each with two line items.


Up to now the examples have not included descriptions. Descriptions can be applied to the individual lines in a journal or at the journal level.

Let us go through an example of including descriptions on each line of a journal. I updated the source file to include a column for descriptions.
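For example, extending the illustrative file layout (the description text is made up):

Account,Entity,ICP,Movement,Product,Amount,Description
Operating Sales,13,No,Net Income,Smart Televisions,2000,Operating sales adjustment
Returns and Allowances,13,No,Net Income,Smart Televisions,-2000,Returns adjustment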


A description row is then added to the import format.


The target is created as “Description 1”; this will be used for individual journal line items.

The source column is mapped to the field number.


In the workbench it is possible to show the description column.


After a successful export from Data Management, the journal lines will be created with the description from the source file.


If you want to include a description at the journal level, add another description row in the import format.



This time the target will be created with “Description 2”; if you want to create a description at journal level, it always needs to be against “Description 2”.

If you don’t want to include descriptions for each line of the journal you can delete “Description 1” but still include “Description 2” for journal level descriptions.

For simplicity I have created the journal description in the import format expression. It could instead be generated from the source file using the same methods I have already gone through.


In the workbench the “Description-2” column can be displayed. The description will be applied to each data record.


The output journal file created during an export now has the description populated in the “!DESC” section.
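For instance, assuming the same style as the !JOURNAL line, the section will contain something like (description text illustrative):

!DESC=Sales adjustment journal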


The journal created in the target application includes the description.


So that covers as much as I feel I need to in this post and concludes the four-part look at journals and Data Management. I hope you have found it useful.

Monday, 7 January 2019

EPM Cloud - Journals and Data Management - Part 3

Moving on to part 3 in this series looking at Journals and Data Management. In the first part I provided an overview of Journals and went through the workflow process in detail. In the previous post I covered loading Journals through the simplified interface. In this post I am moving on to the options available for loading journals to FCCS through Data Management; the same can be achieved with on-premise FDMEE using hybrid functionality.

Data Management supports two methods for loading journals: either through a standard text file or using an Excel template.

I am going to start with the Excel template and I am hoping the background provided in the previous posts will make it easier to understand. If you have ever loaded journals from on-premise FDMEE to HFM the concept is very similar.

The advantage of loading journals through the Excel template is that the template already exists and can be downloaded; there is no need to build a new template and generate the output in exactly the format required when importing through the simplified interface. There is also the benefit of being able to apply mappings, as the source journal members may not exactly match the ones in FCCS.

It is possible to use an existing location to import a journal Excel template, but I am going to begin with a new import format.

There is no need to fill out the mappings in the import format as they will be handled through the template.


I created the import format using a numeric data type, so the amount column will be available, which is also present in the Excel template.

A new location can then be created, which is assigned to the import format.


You will need a data load rule for the new location. There is no requirement to fill in the file name as it can be provided when importing the journal.


The important part is that the “Load Type” in the rule's target options is set to “Journal”. Once this has been set, the workbench will have an option to load the journal.



There is an option to specify the journal status; this is where I disagree with the documentation.

The documentation states there are four options available for the journal status: Working, Submitted, Approved and Rejected.

In the previous post I covered the options available and the rules are:

“If the Journal Workflow is enabled, you can only import Working journals. If the Journal Workflow is disabled, you can import Working and Posted journals.”

I will show that the rules still apply in Data Management when I get on to importing a journal.

To be able to import journals we will need the Excel template; this can be downloaded from the workbench by selecting “Download Template” and then “Journal”.


The download will be named “Journal.xls” and is in the old Excel 97-2003 “xls” format; there is no problem opening the Excel template and saving it in the newer “xlsx” format.

The Excel template will be populated with the location, category and period that were selected in the workbench when it was downloaded.


When importing the Journal Excel template, the location, category and period must match the ones selected in Data Management. I will demonstrate shortly what happens if this is not done.

The template contains all the necessary dimensions and by default includes two custom dimensions.

The template can be changed and not all columns need to be populated, though there are some additional rules that need to be followed.

The template contains hidden rows between 16 and 20. These are read and validated when the journal is imported.


In column I and rows 16-19 there is a formula that just points to the location, category, period and load method in column A.

You might think that row 15 is the one defining the dimensions; this is not the case, it is row 20 where that is done.

The following table describes the values that can be used for defining dimensions.


If you are unclear on what the custom dimension values should be then you can have a look at the target application and the data table column names.


For example, if you take the “Product” dimension, the data table column is “UD5”, which maps to “5” in the Excel template; it would also be valid to assign it as “UD5”.

The template can be changed to fit your requirements or a new one created, but the above logic should be applied. The main driver of where the metadata definition and data rows exist in the template is a named range called “upsJournal”.


If you are creating your own template, then as long as the named range starts with “ups” you can name it what you like.

The named range should cover the metadata definition and the data rows.


By default, the Excel template is set up to show dollars in the amount column; you may be shocked to realise it, but not everybody operates in dollar amounts :)


The formatting of the amount column can be updated in the template and it should not affect the import.

So, let’s update the template with a simple journal example based on what I have used in previous posts.


Members for entity, intercompany and movement do not match the target application; these will be updated using mappings in Data Management.

Multi-GAAP has been left empty as this will also be handled by mappings.

I updated “Custom2” to “Product”; remember, it is the hidden row value in the named range that defines the dimension. I have hidden the “Custom1” column; alternatively it could be deleted, as it is optional.

It doesn’t look like you can assign a description for the journal in the template, unless there is an undocumented feature. It is possible to provide descriptions for the individual journal lines though.

There is an important note that needs to be understood when loading journal data, this is taken from the documentation:

“When loading journals to an Oracle Financial Consolidation and Close Cloud target, consider that Data Management does not determine the account types or select the credits/debits. All positive numbers are loaded as credits and all negative numbers are loaded as debits. If you need to designate other credit or debit signs for your account type, use the change sign feature in Data Load Mappings or another customized method to handle credits/debits changes for your journal loads.”

On to importing the journal. As the data load rule was defined as a journal in the target options there will be a “Load Journal” button available in the workbench.


Selecting “Load Journal” will open a new window where the journal file can be uploaded and then selected.


Before you can import the journal to Data Management it must be checked. A verification process is executed where the file is checked to make sure the POV matches the one currently selected in the workbench; the named range in the file is also validated.


If the file checks out, a success message will be displayed.


The “Post” button will then be enabled.


Clicking “Post” does not post the journal to FCCS; it only means the journal will be imported into Data Management.


If the file is loaded successfully, it will be mapped as with standard Data Management functionality.

In the process logs you can see how the Excel file was processed.

INFO  [AIF]: EPMFDM-140274:Message - Adding Named Range: upsJournal
INFO  [AIF]: EPMFDM-140274:Message - Start Cell:B16
INFO  [AIF]: EPMFDM-140274:Message - End Cell:G33
INFO  [AIF]: EPMFDM-140274:Message - Period :2018-12-31
INFO  [AIF]: EPMFDM-140274:Message - Category :Actual
INFO  [AIF]: EPMFDM-140274:Message - Location :Journal_Example
INFO  [AIF]: EPMFDM-140274:Message - JournalId :Sales Adjustment
INFO  [AIF]: EPMFDM-140274:Message - Using LoadMethod :REPLACE
INFO  [AIF]: EPMFDM-140274:Message - Extracting metadata: End
INFO  [AIF]: EPMFDM-140274:Message - Cell: B22 ColumnName: ACCOUNT Value:Operating Sales
INFO  [AIF]: EPMFDM-140274:Message - Cell: C22 ColumnName: ENTITY Value:13
INFO  [AIF]: EPMFDM-140274:Message - Cell: D22 ColumnName: ICP Value:No
INFO  [AIF]: EPMFDM-140274:Message - Cell: E22 ColumnName: UD3 Value:Net Income
INFO  [AIF]: EPMFDM-140274:Message - Cell: H22 ColumnName: UD5 Value:Smart Televisions
INFO  [AIF]: EPMFDM-140274:Message - Cell: I22 ColumnName: AMOUNT Value:2000
INFO  [AIF]: EPMFDM-140274:Message - Cell: B23 ColumnName: ACCOUNT Value:Returns and Allowances
INFO  [AIF]: EPMFDM-140274:Message - Cell: C23 ColumnName: ENTITY Value:13
INFO  [AIF]: EPMFDM-140274:Message - Cell: D23 ColumnName: ICP Value:No
INFO  [AIF]: EPMFDM-140274:Message - Cell: E23 ColumnName: UD3 Value:Net Income
INFO  [AIF]: EPMFDM-140274:Message - Cell: H23 ColumnName: UD5 Value:Smart Televisions
INFO  [AIF]: EPMFDM-140274:Message - Cell: I23 ColumnName: AMOUNT Value:-2000

Before moving on I think it is worth trying to break some of the validation rules when importing the journal.

The Excel file was updated to a different location than the one currently selected.


An error message is displayed informing that the location does not match.


Now for an unmatched period.


As expected, an error message is displayed informing that the period does not match the one currently selected.


It is a shame that, even though the template states to enter the period name, this does not seem to be correct.


The Excel file matches the period name in the POV bar.


The same error is generated.



It looks like it must always match the period key and not the period name.


Anyway, back to the import that was successful.


Mappings have been applied and the validation step is shown as successful.

It is important that the “Data Source” dimension is mapped to the journal input member.


Just as when importing Journals through the simplified interface, it is possible to map to a target member name or alias.


At this point in the process the journal has not been created in the FCCS application.


An export can be run which will push the journal to the target application.


As the export was successful, the journal has been created.


Opening the journal confirms all the lines have been correctly loaded.


If you look at the process log generated in Data Management, it provides an indication of how the journal is created.

DEBUG [AIF]: Created journal load data file: /u03/inbox/outbox/FCCS_152.jlf

A journal “jlf” file is generated in the outbox before it is loaded to the target application.


If the file is downloaded and opened, you will notice it is in the same format as when importing journals through the simplified interface, which I covered in the previous post.


So basically, all that Data Management is doing behind the scenes is producing the same formatted journal file and then importing it into the application.

If you want to assign the journal to a group, the Excel template will need to include a label and group definition like the following:
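That is, the journal ID in the template is entered in the combined label and group format, for example (group name hypothetical):

LABEL=Sales Adjustment#GROUP=Sales Journals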


In the workbench it is possible to show the “Journal ID” column which is where the information from the Excel template is loaded to.


In the journal file created by Data Management you can see the journal section now includes the group.


When pushed to the target application the journal is assigned to the group.


Now it is time to break some of the rules for creating journals to see how they are handled in Data Management.

As I mentioned earlier, the documentation states the journal status can be working, submitted, approved or rejected, which I don’t agree with.

If I update the journal status to “Submitted” and run an export...


the export fails.


The process log highlights what the issue is.

INFO  [AIF]: Executing journal data load to FCCS
DEBUG [AIF]: Executing request...
Parsing Journal file...
Error at line 5 : Only working(W) and posted(P) journals are allowed. "!JOURNAL=Sales Adjustment,,Submitted"
Parsing completed with errors...
ERROR [AIF]: Error loading journal data file: Error

As you would expect, the rules follow the same logic as when importing through the simplified interface: when workflow is enabled, only working journals can be created; if workflow is not enabled, they can be working or posted.

How about running an export with a user that does not have access to members in the journal?


The export fails, and the process log shows the error was due to invalid access levels.

INFO  [AIF]: Executing journal data load to FCCS
DEBUG [AIF]: Executing request...
Parsing Journal file...
Parsing completed successfully...
Importing Journals...
Saving Journals for POV [Scenario=Actual,Year=FY18,Period=Dec] starts
Sales Adjustment
Error: You do not have edit access to the journal.
Saving Journals for POV [Scenario=Actual,Year=FY18,Period=Dec] ends
Journal import completed with error(s).

Next, for an application that does not have workflow enabled, the status was updated to posted.


The POV was set to a period which is not open, and the export fails.


No surprise that the process log now contains the error about an unopened period.

INFO  [AIF]: Executing journal data load to FCCS
DEBUG [AIF]: Executing request...
Parsing Journal file...
Parsing completed successfully...
Importing Journals...
Saving Journals for POV [Scenario=Actual,Year=FY18,Period=Dec] starts
Sales Adjustment
Error: Action Post cannot be performed on the journal with label Sales Adjustment as the Period is not open.
Saving Journals for POV [Scenario=Actual,Year=FY18,Period=Dec] ends
Journal import completed with error(s).

Even though the process fails, it follows the same logic as importing through the simplified interface: the journal will still be created, but with a status of working.


This time I opened the period and successfully ran the export again in Data Management.


The journal has been created and posted within the application.


There is no mention of a “Posted” journal status in the Data Management documentation, but you can see the output file is created with “P” for posted.


How about trying to post an unbalanced journal?


Once again, the export fails.


The log informs that the journal cannot be posted as it failed the scan.

INFO  [AIF]: Executing journal data load to FCCS
DEBUG [AIF]: Executing request...
Parsing Journal file...
Parsing completed successfully...
Importing Journals...
Saving Journals for POV [Scenario=Actual,Year=FY18,Period=Dec] starts
Sales Adjustment
Error: Action Post cannot be performed on the journal due to scan error(s).
Saving Journals for POV [Scenario=Actual,Year=FY18,Period=Dec] ends
Journal import completed with error(s).

The journal will still be created with a working status.


If a scan is run on the journal in the application, it will fail as it is unbalanced.


I think that covers enough detail about loading Excel-based journals through Data Management. At the beginning of this blog post I said that there are two methods available for loading journals through Data Management. In the next and final part, I will go through loading journals using a standard text file.