The data imported into Fluence varies from customer to customer: data files differ widely depending on the data being loaded, the source systems producing it, and the information included in each file.
- The file produced by the source system may not include the time period. In this case, the Data Import Definition must define which time period the data should be loaded into.
- The source system may provide a file that uses entity names different from the member names used in Fluence. In this case, Data Mappings must be set up.
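To make these two scenarios concrete, the following is a minimal sketch, not Fluence's actual import engine: a hypothetical source extract that has no time-period column and uses source-system entity codes, with the period defaulted and the codes translated the way a Data Import Definition and a Data Mapping would handle them. All file contents, member names, and the mapping itself are illustrative assumptions.

```python
import csv
import io

# Hypothetical source extract: no time-period column, and the "Entity"
# values use the source system's codes rather than Fluence member names.
source_file = io.StringIO(
    "Entity,Account,Amount\n"
    "E-100,Sales,12500\n"
    "E-200,Sales,9800\n"
)

# Assumptions: the target period comes from the Data Import Definition,
# and entity codes are translated via a Data Mapping.
default_period = "2024-M01"                              # from the import definition
entity_map = {"E-100": "NorthAmerica", "E-200": "EMEA"}  # hypothetical Data Mapping

rows = []
for row in csv.DictReader(source_file):
    row["Entity"] = entity_map[row["Entity"]]  # apply the member mapping
    row["Period"] = default_period             # fill in the missing time period
    rows.append(row)
```

After this pass, every row carries a Fluence member name and a time period, which is the state the data needs to be in before it can reach the Fact Tables.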
This section covers importing a CSV or Excel file into Fluence using a Workflow Task.
The following diagram illustrates the overall Data Import Process at a high level:
There are a few moving parts that need to be configured to create a working Data Import:
- Staging Tables to temporarily store the loaded data before it is mapped and uploaded to the Fact Tables
- This is performed in the Staging Table interface.
- Data Import Definitions to define the Import (a unique name, how to handle missing fields such as the Time Period, etc.)
- This is performed in the Data Import Definitions editor (including Field Mappings and Clear Data Settings)
- Data Mappings to specify member mappings for each dimension if required (how to handle member names that do not match the member names within Fluence)
- This is performed in the Maps editor.
- Workflows to perform the Imports
- This is performed in the Workflow editor.
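The way these parts fit together can be sketched as a simple pipeline: stage the raw rows, apply the mappings, then publish to the fact table. This is a conceptual illustration only, not Fluence's actual API; the function and table names are assumptions made for the sketch.

```python
# Conceptual sketch of the import pipeline -- not Fluence's actual API.
# The lists stand in for the Staging Table and the target Fact Table.
staging_table = []
fact_table = []

def stage(raw_rows):
    """Load raw file rows into the staging area unchanged."""
    staging_table.extend(raw_rows)

def apply_mappings(row, maps):
    """Translate source member names to Fluence member names, per dimension.
    Values with no mapping entry pass through unchanged."""
    return {dim: maps.get(dim, {}).get(value, value) for dim, value in row.items()}

def run_import(raw_rows, maps):
    """Workflow task: stage, map, then publish to the fact table."""
    stage(raw_rows)
    for row in staging_table:
        fact_table.append(apply_mappings(row, maps))

# Hypothetical run: one row, with a Data Mapping for the Entity dimension.
run_import(
    [{"Entity": "E-100", "Account": "Sales"}],
    {"Entity": {"E-100": "NorthAmerica"}},
)
```

The point of the two-stage shape is that the staging copy preserves the file as received, so mapping problems can be inspected and corrected before anything lands in the Fact Tables.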
In addition, you will need a sample of the import file in order to set up the mappings and the Data Import correctly.