
Salesforce (docs)

This package models Salesforce data from Fivetran's connector. It uses data in the format described by this ERD.

This package enriches your Fivetran data by doing the following:

  • Adds descriptions to tables and columns that are synced using Fivetran
  • Adds freshness tests to source data
  • Adds column-level testing where applicable. For example, all primary keys are tested for uniqueness and non-null values (a sketch of how such tests are declared follows this list).
  • Models staging tables, which will be used in our transform package
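
For illustration, here is a minimal sketch of how such freshness and primary-key tests are declared in dbt. The file name, freshness thresholds, and table selection below are hypothetical and not the package's actual configuration; _fivetran_synced is the sync timestamp column Fivetran adds to each table:

# src_salesforce.yml (hypothetical example)

version: 2
sources:
  - name: salesforce
    loaded_at_field: _fivetran_synced  # Fivetran's sync timestamp, used for freshness checks
    freshness:
      warn_after: {count: 24, period: hour}
      error_after: {count: 48, period: hour}
    tables:
      - name: account
        columns:
          - name: id
            tests:
              - unique    # primary keys must be unique...
              - not_null  # ...and non-null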

Models

This package contains staging models, designed to be used in tandem with our Salesforce transform package. The staging models:

  • Remove any rows that are soft-deleted
  • Name columns consistently across all packages:
    • Boolean fields are prefixed with is_ or has_
    • Timestamps are appended with _at
    • ID primary keys are prefixed with the name of the table. For example, the user table's ID column is renamed user_id.

Installation Instructions

Check dbt Hub for the latest installation instructions, or read the dbt docs for more information on installing packages.

Include in your packages.yml

packages:
  - package: fivetran/salesforce_source
    version: [">=0.4.0", "<0.5.0"]

Configuration

By default, this package will run using your target database and the salesforce schema. If this is not where your Salesforce data is (perhaps your Salesforce schema is salesforce_fivetran), add the following configuration to your dbt_project.yml file:

# dbt_project.yml

...
vars:
  salesforce_database: your_database_name
  salesforce_schema: your_schema_name

Adding Passthrough Columns

This package includes all source columns defined in the generate_columns.sql macro. To add additional columns, use our pass-through column variables. This is extremely useful if you'd like to include custom fields in the package.

# dbt_project.yml

...
vars:
  account_pass_through_columns: [account_custom_field_1, account_custom_field_2]
  opportunity_pass_through_columns: [my_opp_custom_field]
  user_pass_through_columns: [users_have_custom_fields_too, lets_add_them_all]

Disabling Models

Your connector may not be syncing all the tables that this package references, for example because you have chosen to exclude them. If you are not using those tables, you can disable the corresponding functionality in the package by setting the relevant variable in your dbt_project.yml. By default, all of these variables are assumed to be true; you only need to add variables for the tables you want to disable, like so:

The salesforce__user_role_enabled variable below refers to the user_role table.

# dbt_project.yml

...
config-version: 2

vars:
  salesforce__user_role_enabled: false # Disable if you do not have the user_role table

Metrics that depend on the disabled tables will not be populated in the downstream models.

Salesforce History Mode

If you have Salesforce History Mode enabled for your connector, the source tables will include all historical records. This package is designed to work with non-historical data, so if you have History Mode enabled, set the relevant using_[table]_history_mode_active_records variable(s) to true to filter for active records only. These variables are false by default; add the variable configuration below to your dbt_project.yml to enable the feature.

# dbt_project.yml

...
vars:
  using_account_history_mode_active_records: true      # false by default. Only use if you have history mode enabled.
  using_opportunity_history_mode_active_records: true  # false by default. Only use if you have history mode enabled.
  using_user_role_history_mode_active_records: true    # false by default. Only use if you have history mode enabled.
  using_user_history_mode_active_records: true         # false by default. Only use if you have history mode enabled.

Database support

This package has been tested on BigQuery, Snowflake, Redshift, Postgres, and Databricks.

Databricks Dispatch Configuration

dbt v0.20.0 introduced a project-level dispatch configuration that enables an "override" setting for all dispatched macros. If you are using a Databricks destination with this package, you will need to add the dispatch configuration below (or a variation of it) to your dbt_project.yml. This is required so that the package searches for macros in the dbt-labs/spark_utils package before falling back to dbt-labs/dbt_utils, since spark_utils provides Spark-compatible implementations of the dbt_utils macros.

# dbt_project.yml

dispatch:
  - macro_namespace: dbt_utils
    search_order: ['spark_utils', 'dbt_utils']

Contributions

Additional contributions to this package are very welcome! Please create issues or open PRs against main. Check out this post on the best workflow for contributing to a package.

Resources:

  • Provide feedback on our existing dbt packages or what you'd like to see next
  • Have questions, feedback, or need help? Book a time during our office hours here or email us at [email protected]
  • Find all of Fivetran's pre-built dbt packages in our dbt hub
  • Learn how to orchestrate dbt transformations with Fivetran here
  • Learn more about Fivetran overall in our docs
  • Check out Fivetran's blog
  • Learn more about dbt in the dbt docs
  • Check out Discourse for commonly asked questions and answers
  • Join the chat on Slack for live discussions and support
  • Find dbt events near you
  • Check out the dbt blog for the latest news on dbt's development and best practices