
Google Cloud Cortex Framework

Google Cloud Cortex Framework helps you accelerate business insights and outcomes with less risk, complexity, and cost by providing reference architectures, packaged solution deployment content, and integration services to kickstart your data and AI cloud journey.

About Cortex Data Foundation

Cortex Data Foundation is the core architectural component of the Cortex Framework reference architecture. It provides packaged analytics artifacts that can be automatically deployed for use with Google Cloud BigQuery.

Cortex framework

This repository contains predefined analytical views and models to accelerate the build-out of an enterprise-wide data foundation in BigQuery. The data sources and models available are depicted in the entity-relationship diagrams below:

Operational related data sources

Marketing related data sources

Sustainability related data sources

Quick demo deployment

To explore this content, you can create a demo instance, with automatic generation of BigQuery datasets, automatic permission granting, and sample test data, with a single click here:

Open in Cloud Shell

Warning: This demo deployment is not suitable for production environments.

Deployment for development or production environments

Clone the Data Foundation repository

We recommend using Cloud Shell.

Note: These steps require the gcloud SDK (already installed in Cloud Shell).

  1. Clone this repository with submodules (--recurse-submodules):

    git clone --recurse-submodules https://github.com/GoogleCloudPlatform/cortex-data-foundation
  2. Navigate into the previously downloaded folder.

    cd cortex-data-foundation

    If this is not the first time you have cloned the repository, execute git pull --recurse-submodules to pull the latest changes. If you are already familiar with the configuration and requirements, you can skip to the build command in section Execute deployment.

Deployment steps

These are the steps for deployment:

  1. Prerequisites
  2. Establish project and dataset structure
  3. Establish integration mechanism
  4. Configure cloud platform components
  5. Configure deployment
  6. Execute deployment
  7. Test, customize, prepare for upgrade
  8. Next steps for reporting and add-ons

Prerequisites

Understand your business requirements and the technical components of the Framework

A successful deployment depends on a good understanding of:

  • Your company's business rules and requirements
  • Functional understanding of the data workload (e.g. SAP, Salesforce, Meta, etc)
  • Knowledge of Google Cloud and data and AI products

Before continuing, make sure you are familiar with:

Establish project and dataset structure

You will require at least one Google Cloud project to host the BigQuery datasets and execute the deployment process.

This is where the deployment process will trigger Cloud Build runs. In the project structure, we refer to this as the Source Project. Each workload should have at least one Change Data Capture (CDC) and one Reporting dataset (i.e. one CDC and one Reporting dataset for SAP, one CDC and one Reporting dataset for Salesforce).

structure for parameters

Note If you want to have separate sets of projects and datasets for each workload (e.g. one set of source and target projects for SAP and a different set of target and source projects for Salesforce), run separate deployments for each workload.

You will need to identify:

  • Deploy SAP, Salesforce, etc: Decide whether you want to deploy models for all workloads or only one set of models.
  • Source Google Cloud Project: Project where the source data is located, from which the data models will consume. This project is normally accessed by technical practitioners.
  • Target Google Cloud Project: Project where the Data Foundation predefined data models will be deployed and accessed by end-users. This may or may not be different from the source project depending on your needs.
  • Source BigQuery Dataset (Raw): BigQuery dataset where the source data is replicated to or where the test data will be created. The recommendation is to have separate datasets, one for each data source (e.g., one raw dataset for SAP and one raw dataset for Google Ads).
  • CDC BigQuery Dataset: BigQuery dataset where the CDC processed data lands the latest available records. This may or may not be the same as the source dataset if the tool landing the data performs the CDC merge operation. Some workloads allow for field name mapping here. The recommendation is to have two separate CDC datasets, one for each source (i.e., one CDC dataset for SAP and one CDC dataset for Salesforce).
  • Target BigQuery reporting dataset: BigQuery dataset where the Data Foundation predefined data models will be deployed. The recommendation is to have two separate reporting datasets, one for each source (i.e., one reporting dataset for SAP and one reporting dataset for Salesforce). This dataset is automatically created by the deployer if it does not exist.
  • Target BigQuery machine learning dataset: BigQuery dataset where the BQML predefined models will be deployed. This dataset is automatically created by the deployer if it does not exist.
  • Pre-processing K9 dataset: BigQuery dataset where cross-workload, reusable DAG components, such as time dimensions, can be deployed. The workloads will have a dependency on this dataset unless modified. This dataset is automatically created by the deployer if it does not exist.
  • Post-processing K9 dataset: BigQuery dataset where cross-workload reporting (e.g. SAP + Google Ads reporting for CATGAP) and additional external source DAGs (e.g. Google Trends ingestion) can be deployed. This dataset is automatically created by the deployer if it does not exist.

Alternatively, if you do not have a replication tool set up or do not wish to use the replicated data, the deployment process can generate test tables and fake data for you. You will still need to create and identify the CDC and Raw datasets ahead of time.
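
If you choose to create the Raw and CDC datasets ahead of time, a minimal sketch using the bq command-line tool could look like the following (the dataset names RAW_SAP and CDC_SAP and the US location are illustrative placeholders; use your own naming convention and region):

# Illustrative placeholders: adjust project, location, and dataset names to your setup.
bq mk --location=US --dataset <SOURCE_PROJECT>:RAW_SAP
bq mk --location=US --dataset <SOURCE_PROJECT>:CDC_SAP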

These parameters will be different for each workload depending on the integration mechanism.

Establish integration mechanism

Each workload has different ways to integrate with BigQuery. This section explains the options for each source.

Note: Using test data? You can skip this section.

Configure K9 deployments

The K9 deployer is responsible for ingestion, processing and modeling of components that are reusable across different data sources. For example, the time dimension is reusable across all data sources where tables may need to slice and dice analytical results based on a Gregorian calendar.

External data sources that can be combined across different workloads to gain enriched insights and are also reusable are part of the K9 deployer. For example, Weather data or Google Trends ingestion and processing can be combined across SAP, Salesforce and Marketing data sources. CATGAP is an example which combines Google Ads and SAP.

K9 sample deployment

The pre-processing K9 step executes before all workloads start their deployment, so the reusable models are available during their deployment. Conversely, the post-processing K9 executes after all modules have deployed their reporting models so the cross-workload reporting or augmenting models find their dependencies within each reporting dataset.

ℹ️ Fun Fact: The K9 receives its name because it's where the DAGs (/dɑɡz/) live.

The DAGs and models to be generated can be configured in the K9 configuration file.

If you are using submodules independently (e.g. only cortex-reporting) or a deployment framework like Dataform or dbt, we recommend porting the pre-k9 DAG execution into your scheduler of choice. The Reporting views that tap into reusable models select from the separate K9 datasets.

Configure Google Cloud Platform components

Enable required components

The following Google Cloud components are required:

  • Google Cloud Project
  • BigQuery instance and datasets
  • Service Account with Impersonation rights
  • Cloud Storage Buckets
  • Cloud Build API
  • Cloud Resource Manager API
  • Optional components:
    • Cloud Composer for change data capture (CDC) processing, hierarchy flattening (SAP only), and data replication (non-SAP only) through Directed Acyclic Graphs (DAGs). To set up an instance of Cloud Composer, review the documentation.
    • Looker (optional) connects to reporting templates. Requires manual setup.
    • Analytics Hub linked datasets (optional) are currently used for some external sources, such as the Weather DAG. You may choose to fill this structure with any other available source of your choice for advanced scenarios.
    • Dataflow: Integration tool for many of the Marketing datasets, such as Google Ads.
    • Dataplex: Used for building a Data Mesh; see the documentation.

From the Cloud Shell, you can enable Google Cloud Services using the gcloud command line interface in your Google Cloud project.

Replace the <SOURCE_PROJECT> placeholder with your source project. Copy and paste the following command into the Cloud Shell:

gcloud config set project <SOURCE_PROJECT>

gcloud services enable bigquery.googleapis.com \
                       cloudbuild.googleapis.com \
                       composer.googleapis.com \
                       storage-component.googleapis.com \
                       cloudresourcemanager.googleapis.com \
                       dataflow.googleapis.com

You should get a success message:

success message in console

Grant permissions to the executing user

If an individual is executing the deployment with their own account, they will need, at minimum, the following permissions in the project where Cloud Build will be triggered:

  • Service Usage Consumer
  • Storage Object Viewer for the Cloud Build default bucket or bucket for logs
  • Object Writer to the output buckets
  • Cloud Build Editor
  • Project Viewer or Storage Object Viewer

These permissions may vary depending on the setup of the project. Consider the following documentation if you run into errors:

Configure the Cloud Build account

In the source project, navigate to Cloud Build and locate the service account that will execute the deployment process.

cloud build service account

Locate the build account in IAM (make sure it says cloudbuild):

Cloud build service account in IAM

Grant the following permissions to the Cloud Build service account in both the source and target projects if they are different:

  • BigQuery Data Editor
  • BigQuery Job User
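
As a sketch, assuming the default Cloud Build service account of the form <PROJECT_NUMBER>@cloudbuild.gserviceaccount.com, the grants can be applied from the Cloud Shell as follows (repeat against the target project if it is different):

# Grant BigQuery Data Editor and BigQuery Job User to the Cloud Build service account.
gcloud projects add-iam-policy-binding <SOURCE_PROJECT> \
    --member="serviceAccount:<PROJECT_NUMBER>@cloudbuild.gserviceaccount.com" \
    --role="roles/bigquery.dataEditor"

gcloud projects add-iam-policy-binding <SOURCE_PROJECT> \
    --member="serviceAccount:<PROJECT_NUMBER>@cloudbuild.gserviceaccount.com" \
    --role="roles/bigquery.jobUser"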

[Optional] If changing the default values for Data Mesh in config/config.json to implement features beyond descriptions, the executing account (Cloud Build service account) will need to have the following permissions:

  • Policy Tag Admin
  • Data Catalog TagTemplate Owner
  • Dataplex Editor
  • BigQuery Data Owner
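
As a hedged sketch, these grants could be applied from the Cloud Shell as shown below. The role IDs are assumptions mapped from the role names above; verify them against the IAM roles reference before granting:

# Role IDs below are assumptions; verify them against the IAM roles reference.
for role in roles/datacatalog.categoryAdmin \
            roles/datacatalog.tagTemplateOwner \
            roles/dataplex.editor \
            roles/bigquery.dataOwner; do
  gcloud projects add-iam-policy-binding <SOURCE_PROJECT> \
      --member="serviceAccount:<PROJECT_NUMBER>@cloudbuild.gserviceaccount.com" \
      --role="${role}"
done
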
[Optional] Create a Service Account for deployment with impersonation

The deployment can run through a service account with impersonation rights by adding the flag --impersonate-service-account. This service account will trigger a Cloud Build job that will in turn run specific steps through the Cloud Build service account. This allows a user to trigger a deployment process without direct access to the resources.

The person running the command needs to be granted impersonation rights on the new, triggering service account.

Navigate to the Google Cloud Platform Console and follow the steps to create a service account with the following role:

  • Cloud Build Service Account

This role can be applied during the creation of the service account:

Cloud Build Svc account

Authorize the ID of the user who will be running the deployment to impersonate the service account that was created in the previous step. Authorize your own ID so you can run an initial check as well.

Authorize impersonation

Once the service account has been created, navigate to the IAM Service Account administration, click on the service account, and open the Permissions tab.

Click Grant Access, type in the ID of the user who will execute the deployment and has impersonation rights, and assign the following role:

  • Service Account Token Creator

Alternatively, you can complete this step from the Cloud Shell:

gcloud iam service-accounts create <SERVICE ACCOUNT> \
    --description="Service account for Cortex deployment" \
    --display-name="my-cortex-service-account"

gcloud projects add-iam-policy-binding <SOURCE_PROJECT> \
    --member="serviceAccount:<SERVICE ACCOUNT>@<SOURCE_PROJECT>.iam.gserviceaccount.com" \
    --role="roles/cloudbuild.builds.editor"

gcloud iam service-accounts add-iam-policy-binding <SERVICE ACCOUNT>@<SOURCE_PROJECT>.iam.gserviceaccount.com \
    --member="user:<EXECUTING USER EMAIL>" \
    --role="roles/iam.serviceAccountTokenCreator"

Create a Storage bucket for storing DAG related files

A storage bucket will be required to store processing DAG scripts and other temporary files generated during deployment. These scripts will have to be manually moved into a Cloud Composer or Apache Airflow instance after deployment.

Navigate to Cloud Storage and create a bucket in the same region as your BigQuery datasets.

Alternatively, you can use the following command to create a bucket from the Cloud Shell:

gsutil mb -l <REGION/MULTI-REGION> gs://<BUCKET NAME>

Navigate to the Permissions tab. Grant Storage Object Creator to the user executing the Build command or to the Service account you created for impersonation.
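
A sketch of the equivalent grant from the Cloud Shell (use serviceAccount: instead of user: when granting to the impersonation service account):

gsutil iam ch user:<EXECUTING USER EMAIL>:roles/storage.objectCreator gs://<BUCKET NAME>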

Create a Storage bucket for logs

You can create a specific bucket for the Cloud Build process to store the logs. This is useful if you want to restrict data that may be stored in logs to a specific region. Create a GCS bucket with uniform access control, in the same region where the deployment will run.

Alternatively, here is the command line to create this bucket:

gsutil mb -l <REGION/MULTI-REGION> gs://<BUCKET NAME>

You will need to grant Object Admin permissions to the Cloud Build service account.
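
A sketch of this grant from the Cloud Shell, assuming the default Cloud Build service account:

gsutil iam ch serviceAccount:<PROJECT_NUMBER>@cloudbuild.gserviceaccount.com:roles/storage.objectAdmin gs://<BUCKET NAME>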

Configure Data Mesh

The default configuration in config/config.json for Data Mesh will add the descriptions for all assets, but will not create any other structures or policies.

Enabling any other options (e.g., deployLakes, deployCatalog, deployACLs) requires the configuration of permissions for the Cloud Build service account and the necessary configuration of the YAML files.

Here is a high-level diagram of the available options:

data mesh structure for lakes zones and catalog

You will find detailed instructions and examples to configure Data Mesh in the documentation.

Configure deployment

The behavior of the deployment is controlled by the configuration file config.json.

The file contains the global configuration, the configuration specific to each workload, and the optional Credly badger service configuration.

Config JSON

Open the file config/config.json. From the Cloud Shell:

edit config/config.json

Global deployment configuration

Consider your target deployment:

structure for parameters

| Parameter | Meaning | Default Value | Description |
| --- | --- | --- | --- |
| testData | Deploy Test Data | true | Deploy sample test data into the source datasets (see testDataProject). |
| deploySAP | Deploy SAP | true | Execute the deployment for the SAP workload (ECC or S/4HANA). |
| deploySFDC | Deploy Salesforce | true | Execute the deployment for the Salesforce workload. |
| deployMarketing | Deploy Marketing | true | Execute the deployment for the Marketing sources (Google Ads, CM360 and/or TikTok). |
| deployDataMesh | Deploy Data Mesh | true | Execute the deployment for Data Mesh. See the Data Mesh README for more information. |
| turboMode | Deploy in Turbo mode | true | Execute all view builds as steps in the same Cloud Build process, in parallel, for a faster deployment. If set to false, each reporting view is generated in its own sequential build step. We recommend setting it to true only when using test data or after any mismatches between reporting columns and the source data have been resolved. |
| projectIdSource | Source Project ID | - | Project where the source dataset is located and where the build will run. |
| projectIdTarget | Target Project ID | - | Target project for user-facing datasets (reporting and ML datasets). |
| targetBucket | Target bucket to store generated DAG scripts | - | Bucket created previously where DAGs (and Dataflow temp files) will be generated. Avoid using the actual Airflow bucket. |
| location | Location or Region | "US" | Location of the BigQuery datasets and GCS buckets. Note the restrictions listed under BigQuery dataset locations. |
| languages | Filtering languages | [ "E", "S" ] | If not using test data, enter a single language (e.g., [ "E" ]) or multiple languages (e.g., [ "E", "S" ]) as relevant to your business. These values are used to replace placeholders in SQL in analytics models where available (SAP only for now; see the ERD). |
| currencies | Filtering currencies | [ "USD" ] | If not using test data, enter a single currency (e.g., [ "USD" ]) or multiple currencies (e.g., [ "USD", "CAD" ]) as relevant to your business. These values are used to replace placeholders in SQL in analytics models where available (SAP only for now; see the ERD). |
| testDataProject | Source for test harness | kittycorn-public | Source of the test data for demo deployments. Applies when testData is true. Unless you have your own test harness, do not change this value. |
| k9.datasets.processing | K9 datasets - Processing | "K9_PROCESSING" | Execute cross-workload templates (e.g., date dimension) as defined in the K9 configuration file. These templates are normally required by the downstream workloads. |
| k9.datasets.reporting | K9 datasets - Reporting | "K9_REPORTING" | Execute cross-workload templates and external data sources (e.g., Weather) as defined in the K9 configuration file. Commented out by default. |
| DataMesh.deployDescriptions | Data Mesh - Asset descriptions | true | Deploy BigQuery asset schema descriptions. |
| DataMesh.deployLakes | Data Mesh - Lakes & Zones | false | Deploy Dataplex Lakes and Zones that organize tables by processing layer; requires configuration before enabling. |
| DataMesh.deployCatalog | Data Mesh - Catalog Tags & Templates | false | Deploy Data Catalog Tags that allow custom metadata on BigQuery assets or fields; requires configuration before enabling. |
| DataMesh.deployACLs | Data Mesh - Access Control | false | Deploy asset-, row-, or column-level access control on BigQuery assets; requires configuration before enabling. |
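
For orientation, a minimal sketch of the global section of config/config.json could look like the following. The values are illustrative placeholders and the nesting of the k9 and DataMesh keys is inferred from the parameter names above; the actual file shipped in the repository contains additional keys and the workload-specific sections:

{
  "testData": true,
  "deploySAP": true,
  "deploySFDC": false,
  "deployMarketing": false,
  "deployDataMesh": true,
  "turboMode": true,
  "projectIdSource": "my-source-project",
  "projectIdTarget": "my-target-project",
  "targetBucket": "my-cortex-dag-bucket",
  "location": "US",
  "languages": [ "E", "S" ],
  "currencies": [ "USD" ],
  "testDataProject": "kittycorn-public",
  "k9": {
    "datasets": {
      "processing": "K9_PROCESSING",
      "reporting": "K9_REPORTING"
    }
  },
  "DataMesh": {
    "deployDescriptions": true,
    "deployLakes": false,
    "deployCatalog": false,
    "deployACLs": false
  }
}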

Workload-specific configuration

The following sections are specific to each workload. You do not need to configure them if the deployment parameter for the workload (e.g., deploySAP or deployMarketing) is set to false.

[Optional] Configure K9 external datasets

Configure SAP hierarchies

You can use the configuration in the file sets.yaml if you need to generate scripts to flatten hierarchies. See the Appendix - Configuring the flattener for options. This step is only executed if the CDC generation flag is set to true.

Configure external datasets for K9

Some advanced use cases may require external datasets to complement an enterprise system of record such as SAP. In addition to external exchanges consumed from Analytics Hub, some datasets may need custom or tailored methods to ingest data and join them with the reporting models.

Note: You will need to configure the DAGs as follows:

  1. Holiday Calendar: This DAG retrieves the holiday calendars from PyPI Holidays. You can adjust the list of countries and years to retrieve holidays, as well as parameters of the DAG, from the file holiday_calendar.ini. Leave the defaults if using test data.

  2. Product Hierarchy Texts: This DAG flattens materials and their product hierarchies. The resulting table can be used to feed the Trends list of terms to retrieve Interest Over Time. You can adjust the parameters of the DAG from the file prod_hierarchy_texts.py. Leave the defaults if using test data. You will need to adjust the levels of the hierarchy and the language under the markers for ## CORTEX-CUSTOMER:. If your product hierarchy contains more levels, you may need to add an additional SELECT statement similar to the CTE h1_h2_h3.

  3. Trends: This DAG retrieves Interest Over Time for a specific set of terms from Google Search trends. The terms can be configured in trends.ini. You will need to adjust the time frame to 'today 7-d' in trends.py after an initial run. We recommend getting familiar with the results coming from the different terms to tune parameters. We also recommend partitioning large lists into multiple copies of this DAG running at different times. For more information about the underlying library being used, see Pytrends.

  4. Weather: By default, this DAG uses the publicly available test dataset bigquery-public-data.geo_openstreetmap.planet_layers. The query also relies on a NOAA dataset only available through Analytics Hub, noaa_global_forecast_system. This dataset needs to be created in the same region as the other datasets prior to executing deployment. If the datasets are not available in your region, you can continue with the following instructions and follow additional steps to transfer the data into the desired region.

    You can skip this configuration if using test data.

    • Navigate to BigQuery > Analytics Hub
    • Click Search Listings. Search for "NOAA Global Forecast System"
    • Click Add dataset to project. When prompted, keep "noaa_global_forecast_system" as the name of the dataset. If needed, adjust the name of the dataset and table in the FROM clauses in weather_daily.sql.
    • Repeat the listing search for Dataset "OpenStreetMap Public Dataset".
    • Adjust the FROM clauses containing bigquery-public-data.geo_openstreetmap.planet_layers in postcode.sql.

    Analytics Hub is currently only supported in EU and US locations, and some datasets, such as NOAA Global Forecast, are only offered in a single multi-region. If you are targeting a location different from the one available for the required dataset, we recommend creating a scheduled query to copy the new records from the Analytics Hub linked dataset, followed by a transfer service to copy those new records into a dataset located in the same location or region as the rest of your deployment. You will then need to adjust the SQL files.

    Important Note: Before copying these DAGs to Cloud Composer, you will need to add the required Python modules (holidays, pytrends) as dependencies; see the example command after this list.

  5. Sustainability & ESG insights: Cortex Data Framework combines SAP supplier performance data with advanced ESG insights to compare delivery performance, sustainability, and risks more holistically across global operations. For more details, see the guide here.
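
The Python dependencies mentioned in the note on the Weather DAG can be added to an existing Cloud Composer environment from the command line; a sketch, with the environment name and location as placeholders:

gcloud composer environments update <ENVIRONMENT_NAME> \
    --location <LOCATION> \
    --update-pypi-package holidays \
    --update-pypi-package pytrends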

[Optional] Credly badger configuration

The following section enables the Credly badger service, which will award a Cortex deployment badge to you after a successful deployment.

All configuration items, including a valid email address, must be provided for the badge to be granted.

| Parameter | Meaning | Default Value | Description |
| --- | --- | --- | --- |
| shareWithCredly | Opt in to the Credly badger service | false | Call the Credly service to assign a badge after a successful deployment. |
| userInfo.email | User's email | - | Email address for the Credly badger service. |
| userInfo.firstName | User's first name | - | First name for the Credly badger service. |
| userInfo.lastName | User's last name | - | Last name for the Credly badger service. |

Check for CORTEX-CUSTOMER tags

Many SAP and Salesforce customers will have specific customizations of their systems, such as additional documents in a flow or specific record types. These are specific to each customer and configured by functional analysts as the business needs arise. The spots in the SQL code where these specific enhancements could be made are marked with a comment starting with ## CORTEX-CUSTOMER. You can check for these comments after cloning the repository with a command like:

grep -R CORTEX-CUSTOMER

Note: There may be additional customizations depending on the source systems. We recommend getting the business users or analysts involved early in the process to help spot these.

Performance optimization for Reporting views

Reporting artifacts can be created as views or as tables refreshed regularly through DAGs. On the one hand, views compute the data on each execution of a query, which keeps the results always fresh. On the other hand, a table runs the computations once, and the results can be queried multiple times without incurring higher computing costs, with faster runtimes. This balance is very specific to each customer, so we encourage you to create your own configuration to serve your needs.

Materialized results are updated into a table. These tables can be further fine-tuned by adding partitioning and clustering properties.

The configuration files for each workload are:

| Data Source | Settings file |
| --- | --- |
| SAP | src/SAP/SAP_REPORTING/reporting_settings_ecc.yaml |
| Salesforce Sales Cloud | src/SFDC/config/reporting_settings.yaml |
| Marketing - Google Ads | src/marketing/src/GoogleAds/config/reporting_settings.yaml |
| Marketing - CM360 | src/marketing/src/CM360/config/reporting_settings.yaml |
| Marketing - Meta | src/marketing/src/Meta/config/reporting_settings.yaml |
| Marketing - Salesforce Marketing Cloud | src/marketing/src/SFMC/config/reporting_settings.yaml |
| Marketing - TikTok | src/marketing/src/TikTok/config/reporting_settings.yaml |

Customizing reporting_settings file configuration

This settings file drives how the BQ objects (tables or views) for Reporting datasets are created.

There are two sections:

  1. bq_independent_objects: All BigQuery objects that can be created independently, without any other dependencies.

    Note: When Turbo mode is enabled, these BQ objects are created in parallel at deployment time, speeding up the deployment process.

  2. bq_dependent_objects: All BigQuery objects that need to be created in a specific order due to dependencies on other BQ objects.

    Note: Turbo mode does not apply to this section - i.e., each object will be created one after another.

The deployer will first create all the BQ objects listed in bq_independent_objects, and then all the objects listed in bq_dependent_objects.

The following properties need to be defined for each object:

  1. sql_file: Name of the sql file that will create a given object.

  2. type: Type of BQ Object. Possible values:

    • view: The object is created as a BigQuery view.
    • table: The object is created as a BigQuery table.
    • script: Used to create other types of objects (BigQuery functions, stored procedures, etc.).
  3. If type is table, the following additional properties can be defined:

    • load_frequency: Frequency at which a Composer DAG will be executed to refresh this table. Mandatory. See Airflow documentation for details on possible values.
    • partition_details: How the table should be partitioned. Optional. See Appendix section Table Partition and Cluster Settings for details on how to configure this.
    • cluster_details: How the table should be clustered. Optional. See Appendix section Table Partition and Cluster Settings for details on how to configure this.
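
Putting this together, an illustrative sketch of a reporting_settings.yaml entry (the SQL file names and column names below are hypothetical, not taken from the repository):

bq_independent_objects:
  - sql_file: currency_conversion.sql
    type: view
bq_dependent_objects:
  - sql_file: sales_order_summary.sql
    type: table
    load_frequency: "@daily"
    partition_details: {
      column: "order_date", partition_type: "time", time_grain: "day"
    }
    cluster_details: {columns: ["sales_org"]}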

Execute deployment

This step requires config.json to be configured as described in the section Configure Deployment.

Run the Build command with the target log bucket.

gcloud builds submit --project <execution project, likely the source> \
    --substitutions=_GCS_BUCKET=<Bucket for logs - Cloud Build Service Account needs access to write here>

You can follow the main Build process from the first link of logs:

Click on Logs are available at

Alternatively, if you have enough permissions, you can see the progress from Cloud Build.

build executing

Each build step will trigger child build steps. These can be tracked from the Cloud Build console:

build executing

Or by finding the logs for the child build within the log of a step:

build executing

And identifying any issues with individual builds:

SQL error

We recommend pasting the generated SQL into BigQuery to identify and correct the errors more easily. Most errors will be related to fields that are selected but not present in the replicated source. The BigQuery UI will help identify and comment those out.

Move the files into the DAG bucket

If you opted to generate integration or CDC files and have an instance of Airflow, you can move them into their final bucket with the following command:

gsutil -m cp -r  gs://<output bucket>/dags/ gs://<composer dag bucket>/
gsutil -m cp -r  gs://<output bucket>/data/ gs://<composer sql bucket>/

Test, customize and prepare for upgrade

In addition to the CORTEX-CUSTOMER tags, you may need to further customize the content to add business rules, add other datasets and join them with existing views or tables, reuse the provided templates to call additional APIs, modify deployment scripts, apply further data mesh concepts, etc. You may also need to slightly adapt some tables or landed APIs to include additional fields not included in our standard. We recommend committing all of these changes with a clear tag in the code to your own fork or clone of our git repositories.

We recommend adopting a CI/CD pipeline that works for your organization to keep these enhancements tested and your overall solution in a reliable, robust state. A simple pipeline can reuse our cloudbuild*.yaml scripts to trigger end-to-end deployment periodically, or based on git operations, depending on your repository of choice, by automating builds. Using automated testing with your own sample data will help ensure the models always produce what you expect every time someone commits a change. The config.json file plays an important role in defining different sets of projects and datasets for development, staging, and production environments.

Tagging your own changes visibly in your fork or clone of a repository together with some deployment and testing automation will be very helpful when performing upgrades. Check out this guide for upgrade instructions.

Next steps

Looker deployment

Instructions for deploying the pre-built Looker blocks can be found here. Please follow the individual block READMEs for detailed instructions on required dependencies to ensure successful deployment.

Optionally, you may also customize the blocks by forking the GitHub repositories into your own Looker project. Instructions can be found here.

Available blocks for the Data Foundation include the following:

Operational Dashboards

Marketing Dashboards

Sustainability Dashboards

Demand Sensing

You can deploy the Demand Sensing use case from the Marketplace. Learn more from the documentation.

Application Layer

Deploy a sample microservices-based application through the Google Cloud Marketplace.

Customizations and upgrades

We strongly encourage you to fork this repository and apply your changes to the code in your own fork. You can make use of the delivered deployment scripts in your development cycles and incorporate your own test scripts as needed. When a new release is published, you can compare the new release from our repository with your own changes by merging our code into your own fork in a separate branch. Suggestions for changes or possible customizations in the code are flagged with the comment ## CORTEX-CUSTOMER. We recommend listing these after the initial deployment.

Enable TURBO mode

For your own customizations and a faster deployment in your own development pipelines, you can use the turboMode variable in config/config.json. When set to true, the deployment process will dynamically generate a cloudbuild.views.yaml file with each view in the relevant dependencies file (e.g., dependencies_ecc.txt or dependencies_s4.txt) as a single step of the build. This allows for a roughly 10x faster deployment. The limitation is that if an error occurs when deploying a view, the build process stops, which makes it a slow way to find all possible mismatches between the Cortex views and your data source. If you are still fixing potential structure mismatches between the SELECT clauses in the views and the fields available in your replicated tables, setting turboMode to false will take longer but will attempt to generate all views even if one fails. This will help identify and fix more errors in a single run.

Support

To file issues and feature requests against these models or deployers, create an issue in this repo. For help gathering the information required to help troubleshoot, execute support.sh from the cloned directory and follow instructions.

Appendix

Understanding Change Data Capture

Replicating raw data from SAP

The goal of the Data Foundation is to expose data and analytics models for reporting and applications. The models consume the data replicated from an SAP ECC or SAP S/4HANA system using a preferred replication tool, like those listed in the Data Integration Guides for SAP.

Data from SAP ECC or S/4HANA is expected to be replicated in raw form, that is, with the same structure as the tables in SAP and without transformations. The names of the tables in BigQuery should be lowercase for Cortex data model compatibility reasons.

For example, fields in table T001 are replicated using their equivalent data type in BigQuery, without transformations:

alt_text

Change Data Capture (CDC) Processing

BigQuery is an append-preferred database. This means that the data is not updated or merged during replication. For example, an update to an existing record can be replicated as the same record containing the change. To avoid duplicates, a merge operation needs to be applied afterwards. This is referred to as Change Data Capture (CDC) processing.

The Data Foundation for SAP includes the option to create scripts for Cloud Composer or Apache Airflow to merge or "upsert" the new records resulting from updates and only keep the latest version in a new dataset. For these scripts to work, the tables need to have a field with an operation flag named operation_flag (I = insert, U = update, D = delete) and a timestamp named recordstamp.

For example, the following image shows the latest records for each partner record, based on the timestamp and latest operation flag:

alt_text

Dataset structure

Data from SAP or Salesforce is replicated into a BigQuery dataset (the source or replicated dataset), and the updated or merged results are inserted into another dataset (the CDC dataset). The reporting views select data from the CDC dataset to ensure the reporting tools and applications always have the latest version of a table.

The following flow depicts the CDC processing for SAP, dependent on the operation_flag and recordstamp.

Replication with recordstamp and operation flag merged into cdc processed

The following flow depicts the integration from APIs into Raw and CDC processing for Salesforce, dependent on the Id and SystemModStamp fields produced by Salesforce APIs.

Replication with system modstamp and id mapped into cortex schema and merged into cdc processed

Some replication tools can merge or upsert the records when inserting them into BigQuery, so the generation of these scripts is optional. In this case, the setup will only have a single dataset. The REPORTING dataset will fetch updated records for reporting from that dataset.

Optional - Using different projects to segregate access

Some customers choose to have different projects for different functions to keep users from having excessive access to some data. The deployment allows for using two projects, one for processing replicated data, where only technical users have access to the raw data, and one for reporting, where business users can query the predefined models or views.

alt_text

Using two different projects is optional. A single project can be used to deploy all datasets.

Setting up CDC processing

During deployment, you can choose to merge changes in real time using a view in BigQuery or by scheduling a merge operation in Cloud Composer (or any other instance of Apache Airflow).

Cloud Composer can schedule the scripts to process the merge operations periodically. Data is updated to its latest version every time the merge operations execute; however, more frequent merge operations translate into higher costs.

alt_text

The scheduled frequency can be customized to fit the business needs.

You will notice the file uses scheduling supported by Apache Airflow.

The following example shows an extract from the configuration file:

data_to_replicate:
  - base_table: adrc
    load_frequency: "@hourly"
  - base_table: adr6
    target_table: adr6_cdc
    load_frequency: "@daily"

This configuration will:

  1. Create a copy from source_project_id.REPLICATED_DATASET.adrc into target_project_id.DATASET_WITH_LATEST_RECORDS.adrc if the latter does not exist
  2. Create a CDC script in the specified bucket
  3. Create a copy from source_project_id.REPLICATED_DATASET.adr6 into target_project_id.DATASET_WITH_LATEST_RECORDS.adr6_cdc if the latter does not exist
  4. Create a CDC script in the specified bucket

SAP only: If you want to create DAGs or runtime views to process changes for tables that exist in SAP and are not listed in the file, add them to this file before deployment. For example, the following configuration creates a CDC script for custom table “zztable_customer” and a runtime view to scan changes in real time for another custom table called “zzspecial_table”:

  - base_table: zztable_customer
    load_frequency: "@daily"
  - base_table: zzspecial_table
    load_frequency: "RUNTIME"

This will work as long as the table DD03L is replicated in the source dataset and the schema of the custom table is present in that table.

Sample generated template

The following template generates the processing of changes. Modifications, such as the name of the timestamp field, or additional operations, can be done at this point:

MERGE `${target_table}` T
USING (SELECT * FROM `${base_table}` WHERE recordstamp > (SELECT IF(MAX(recordstamp) IS NOT NULL, MAX(recordstamp),TIMESTAMP("1940-12-25 05:30:00+00")) FROM `${target_table}`)) S
ON ${p_key}
WHEN MATCHED AND S.operation_flag='D' AND S.is_deleted = true THEN
  DELETE
WHEN NOT MATCHED AND S.operation_flag='I' THEN
  INSERT (${fields})
  VALUES
  (${fields})
WHEN MATCHED AND S.operation_flag='U' THEN
  UPDATE SET
    ${update_fields}

SAP Only: Alternatively, if your business requires near-real time insights and the replication tool supports it, the deployment tool accepts the option RUNTIME. This means a CDC script will not be generated. Instead, a view will scan and fetch the latest available record at runtime for immediate consistency.

CDC fields required for MERGE operations

The following parameters will be required for the automated generation of change-data-capture batch processes:

  • Source project + dataset: Dataset where the SAP data is streamed or replicated. For the CDC scripts to work by default, the tables need to have a timestamp field (called recordstamp) and an operation field with the following values, all set during replication:
    • I: for insert
    • U: for update
    • D: for deletion
  • Target project + dataset for the CDC processing: By default, the generated script creates the tables from a copy of the source dataset if they do not exist.
  • Replicated tables: Tables for which the scripts need to be generated
  • Processing frequency: Following the Cron notation, how frequently the DAGs are expected to run
  • Target GCS bucket where the CDC output files will be copied
  • The name of the connection used by Cloud Composer
  • Optional: If the result of the CDC processing will remain in the same dataset as the target, you can specify the name of the target table.

Gathering Cloud Composer Settings

If Cloud Composer is available, create connection(s) to the Source Project in Cloud Composer.

Please create connections with the following names for DAG execution, based on the types of deployments below.

NOTE: If you are creating tables in the Reporting layer, please make sure to create separate connections for Reporting DAGs.

| If you are deploying... | Create for Raw | Create for CDC | Create for Reporting |
| --- | --- | --- | --- |
| SAP | N/A | sap_cdc_bq | sap_reporting_bq |
| SFDC | sfdc_cdc_bq (*) | sfdc_cdc_bq | sfdc_reporting_bq |
| Google Ads | googleads_raw_dataflow | googleads_cdc_bq | googleads_reporting_bq |
| CM360 | cm360_raw_dataflow | cm360_cdc_bq | cm360_reporting_bq |
| TikTok | tiktok_raw_dataflow | tiktok_cdc_bq | tiktok_reporting_bq |
| LiveRamp | N/A | liveramp_cdc_bq | N/A |

(*) The SFDC Raw Ingestion module uses the same Airflow connection as the SFDC CDC module.

Please refer to the sections for each individual data source for details.

Notes:

  • Connection name suffixes indicate their intended usage: _bq connections are meant for BigQuery access, while _dataflow connections are meant to run Google Cloud Dataflow jobs.
  • You only need to create connections for Raw if you are using the ingestion modules provided by Cortex.
  • If you are deploying multiple data sources, for example both SAP and Salesforce, we recommend creating all connections assuming security limitations will be applied to separate service accounts. Alternatively, modify the name of the connection in the template prior to deployment to use the same connection to write into BigQuery, as shown below.
  • We do not recommend using the default connections and service accounts in Airflow, especially in production environments. This recommendation is to comply with the principle of least privilege.
  • If you have the Secret Manager backend enabled for Airflow, you can also create these connections within Secret Manager under the same name. Connections in Secret Manager take precedence over connections created in Airflow.

The GCS bucket structure for some of the template DAGs expects the folders to be in /data/bq_data_replication. You can modify this path prior to deployment.

alt_text

If you do not have an environment of Cloud Composer available yet, you can create one afterwards and move the files into the DAG bucket.

Note: The scripts that process data in Airflow or Cloud Composer are purposefully generated separately from the Airflow-specific scripts. This allows you to port those scripts to another tool of choice.

(SAP Only) Configuring the flattener for SAP hierarchies

The deployment process can optionally flatten hierarchies represented as sets (e.g. transaction GS03) in SAP. The process can also generate the DAGs for these hierarchies to be refreshed periodically and automatically. This process requires configuration prior to the deployment and should be known by a Financial or Controlling consultant or power user.

This video explains how to perform the configuration to flatten hierarchies.

The deployment file takes the following parameters:

  • Name of the set
  • Class of the set (as listed by SAP in standard table SETCLS)
  • Organizational Unit: Controlling Area or additional key for the set
  • Client or Mandant
  • Reference table for the referenced master data
  • Reference key field for master data
  • Additional filter conditions (where clause)

The following are examples of configurations for Cost Centers and Profit Centers including the technical information. If unsure about these parameters, consult with a Finance or Controlling SAP consultant.

sets_data:
#Cost Centers:
# table: csks, select_fields (cost center): 'kostl', where clause: Controlling Area (kokrs), Valid to (datbi)
- setname: 'H1'
  setclass: '0101'
  orgunit: '1000'
  mandt:  '800'
  table: 'csks'
  key_field: 'kostl'
  where_clause: [ kokrs = '1000', datbi >= cast('9999-12-31' as date)]
  load_frequency: "@daily"
#Profit Centers:
# setclass: 0106, table: cepc, select_fields (profit center): 'cepc', where clause: Controlling Area (kokrs), Valid to (datbi)
- setname: 'HE'
  setclass: '0106'
  orgunit: '1000'
  mandt:  '800'
  table: 'cepc'
  key_field: 'prctr'
  where_clause: [ kokrs = '1000', datbi >= cast('9999-12-31' as date) ]
  load_frequency: "@monthly"
#G/L Accounts:
# table: ska1, select_fields (GL Account): 'saknr', where clause: Chart of Accounts (KTOPL), set will be manual. May also need to poll Financial Statement versions.

This configuration will generate two separate DAGs. For example, if there were two configurations for Cost Center hierarchies, one for Controlling Area 1000 and one for 2000, the DAGs would be two different files and separate processes, but the target, flattened table would be the same.

Important: If re-running the process and re-initializing the load, make sure the tables are truncated. The CDC and initial load processes do not clear the contents of the tables which means the flattened data will be inserted again.

Configuring Salesforce to BigQuery extraction module

These are the generic steps to use the Salesforce to BigQuery extraction module provided by Data Foundation. Your requirements and flow may vary depending on your system and existing configuration. You can alternatively use other available tools.

Setting up required credentials & Connected App in Salesforce

You need to log in as an administrator to your Salesforce instance to complete the following:

  1. Create or identify a profile in Salesforce that
    • Is granted permission for Apex REST Services & API Enabled under System Permissions.
    • Is granted View All permission for all objects that you would like to replicate. For example, Account, Cases, etc. Check for restrictions or issues with your security administrator.
    • Is ideally not granted any permissions related to user interface login, like Salesforce Anywhere in Lightning Experience, Salesforce Anywhere on Mobile, Lightning Experience User, Lightning Login User, etc. Check for restrictions or issues with your security administrator.
  2. Create or identify an existing user in Salesforce. You need to know the user's username, password, and security token.
    • This should ideally be a user dedicated to execute this replication.
    • The user should be assigned to the profile you have created or identified in Step 1.
    • You can see User Name and reset Password here.
    • You can reset the security token if you do not have it and it is not used by another process.
  3. Create a Connected App. It will be the only communication channel for establishing a connection to Salesforce from the external world, with the help of the profile, the Salesforce API, the standard user credentials, and the security token.
    • Follow the instructions to enable OAuth Settings for API Integration.
    • Make sure Require Secret for Web Server Flow and Require Secret for Refresh Token Flow are enabled in API (Enabled OAuth Settings) section.
    • See the documentation on how to get your consumer key (which will be later used as your Client ID). Check with your security administrator for issues or restrictions.
  4. Assign your Connected App created in Step 3 to the profile created in Step 1.
    • Select Setup from the top right of the Salesforce home screen.
    • In the Quick Find box, enter profile, then select Profile. Search for the profile created / identified in Step 1.
    • Open the profile.
    • Click the Assigned Connected Apps link.
    • Click Edit.
    • Add the newly created Connected App from Step 3.
    • Click on the Save button.

Note down the User Name, Password, Security Token, and Client ID from the steps above.

Setting up Google Cloud Secret Manager

The Salesforce-to-BigQuery module relies on Google Cloud Secret Manager to work. This process is thoroughly documented in the documentation for Cloud Composer.

Please create a secret as follows:

Secret Name:

airflow-connections-salesforce-conn

Secret Value:

http://<username>:<password>@https%3A%2F%2F<instance-name>.lightning.force.com?client_id=<client_id>&security_token=<secret-token>

Where User Name, Password, Security Token, and Client ID are the values noted in the steps above.

See the documentation on how to find your Instance Name.
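
If you prefer the command line, a sketch of creating this secret from the Cloud Shell, assuming the connection string has been saved to a local file (the file name sfdc_conn.txt is hypothetical):

gcloud secrets create airflow-connections-salesforce-conn \
    --replication-policy="automatic" \
    --data-file="sfdc_conn.txt"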

Cloud Composer libraries for Salesforce replication

To execute the Python scripts in the DAGs provided by the Data Foundation, you need to install some dependencies.

For Airflow version 1.10, follow the documentation to install the following packages, in order:

tableauserverclient==0.17
apache-airflow-backport-providers-salesforce==2021.3.3

For Airflow version 2.x, follow the documentation to install apache-airflow-providers-salesforce~=5.2.0.

Here is a command to install each required package:

$ gcloud composer environments update <ENVIRONMENT_NAME> \
    --location <LOCATION> \
     --update-pypi-package <PACKAGE_NAME><EXTRAS_AND_VERSION>

For example,

$ gcloud composer environments update my-composer-instance \
    --location us-central1 \
    --update-pypi-package "apache-airflow-backport-providers-salesforce>=2021.3.3"

Enable Secret Manager as a backend

Enable Google Secret Manager as the security backend. See details here.

Allow the Composer service account to access secrets

Make sure your Composer service account (default: GCE service account) has Secret Manager Secret Accessor permission. See details in the access control documentation.
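
A sketch of this grant, assuming Composer runs as the default Compute Engine service account:

gcloud projects add-iam-policy-binding <SOURCE_PROJECT> \
    --member="serviceAccount:<PROJECT_NUMBER>-compute@developer.gserviceaccount.com" \
    --role="roles/secretmanager.secretAccessor"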

BigQuery connection in Airflow

Make sure to create the connection sfdc_cdc_bq according to instructions.

Table Partition and Cluster Settings

Certain settings files (e.g., the SAP CDC settings file cdc_settings.yaml or the Reporting settings files reporting_settings.yaml) provide a way to create materialized tables with clusters or partitions of your choice. This is controlled by the following properties in the settings file:

Table Partitioning

Partitioning can be enabled by specifying partition_details:

Example:

   - base_table: vbap
     load_frequency: "@daily"
     partition_details: {
       column: "erdat", partition_type: "time", time_grain: "day"
     }

Use the following parameters to control partitioning details for a given table:

| Property | Description | Value |
| --- | --- | --- |
| column | Column by which the CDC table will be partitioned | Column name |
| partition_type | Type of partition | "time" for time-based partition (more details); "integer_range" for integer-based partition (more details) |
| time_grain | Time part to partition with; required when partition_type = "time" | "hour", "day", "month" or "year" |
| integer_range_bucket | Bucket range; required when partition_type = "integer_range" | "start" = start value, "end" = end value, "interval" = interval of range |

NOTE: See the BigQuery table partitioning documentation for details on these options and related limitations.
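
For integer-range partitioning, a hedged sketch following the same structure (the table, column, and the nesting of integer_range_bucket are assumptions; check the settings files shipped with the repository for the exact shape):

   - base_table: some_table
     load_frequency: "@daily"
     partition_details: {
       column: "some_integer_column", partition_type: "integer_range",
       integer_range_bucket: {start: 0, end: 1000000, interval: 10000}
     }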

Table Clustering

Similarly, clustering can be enabled by specifying cluster_details:

Example:

   - base_table: vbak
     load_frequency: "@daily"
     cluster_details: {columns: ["vkorg"]}

| Property | Description | Value |
| --- | --- | --- |
| columns | Columns by which a table will be clustered | List of column names, e.g. ["mjahr", "matnr"] |

NOTE: See the BigQuery table clustering documentation for details on these options and related limitations.

License

This source code is licensed under Apache 2.0. Full license text is available in LICENSE.