
pipelinewise-target-vertica

License: Apache 2.0

Singer target that loads data into Vertica following the Singer spec.

This is a PipelineWise compatible target connector.

How to use it

The recommended method of running this target is to use it from PipelineWise. When running it from PipelineWise you don't need to configure this target with JSON files and most things are automated.

If you want to run this Singer target independently, please read further.

Install

First, make sure Python 3 is installed on your system or follow these installation instructions for Mac or Ubuntu.

It's recommended to use a virtualenv:

  python3 -m venv venv
  . venv/bin/activate
  pip install pipelinewise-target-vertica

or

  python3 -m venv venv
  . venv/bin/activate
  pip install --upgrade pip
  pip install .
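
After installing, the target-vertica command should be available on your PATH. Singer targets are typically argparse-based CLIs, so the --help flag below is an assumption rather than documented behaviour of this target:

  target-vertica --help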

To run

Like any other target that follows the Singer specification:

some-singer-tap | target-vertica --config [config.json]

It reads incoming messages from STDIN and uses the properties in config.json to upload data into Vertica.

Note: To avoid version conflicts run tap and targets in separate virtual environments.
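
Per the Singer specification, the target writes STATE messages to STDOUT. A common pattern (the file names here are illustrative) is to capture the emitted state so the next tap run can resume where the previous one finished:

  some-singer-tap | target-vertica --config config.json >> state.json
  tail -1 state.json > state.json.tmp && mv state.json.tmp state.json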

Configuration settings

Running the target connector requires a config.json file. An example with the minimal settings:

{
  "host": "localhost",
  "port": 5433,
  "user": "my_user",
  "password": "secret",
  "dbname": "my_db_name",
  "default_target_schema": "my_target_schema"
}
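
An extended example that also sets some of the optional properties documented below. The schema_mapping sub-keys follow the convention used by sibling PipelineWise targets and are an illustrative sketch rather than a definitive reference:

{
  "host": "localhost",
  "port": 5433,
  "user": "my_user",
  "password": "secret",
  "dbname": "my_db_name",
  "default_target_schema": "my_target_schema",
  "batch_size_rows": 50000,
  "hard_delete": true,
  "data_flattening_max_level": 1,
  "schema_mapping": {
    "my_tap_schema": {
      "target_schema": "my_vertica_schema",
      "target_schema_select_permissions": ["grp_read_only"]
    }
  }
}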

Full list of options in config.json:

Property Type Required? Description
host String Yes Vertica host
port Integer Yes Vertica port
user String Yes Vertica user
password String Yes Vertica password
dbname String Yes Vertica database name
batch_size_rows Integer (Default: 100000) Maximum number of rows in each batch. At the end of each batch, the rows in the batch are loaded into Vertica.
flush_all_streams Boolean (Default: False) Flush and load every stream into Vertica when one batch is full. Warning: This may trigger the COPY command to use files with a low number of records.
parallelism Integer (Default: 0) The number of threads used to flush tables. 0 will create a thread for each stream, up to max_parallelism. -1 will create a thread for each CPU core. Any other positive number will create that number of threads, up to max_parallelism.
max_parallelism Integer (Default: 16) Max number of parallel threads to use when flushing tables.
default_target_schema String Name of the schema where the tables will be created. If schema_mapping is not defined then every stream sent by the tap is loaded into this schema.
default_target_schema_select_permission String Grant USAGE privilege on newly created schemas and grant SELECT privilege on newly created tables to a specific role or a list of roles.
schema_mapping Object Useful if you want to load multiple streams from one tap to multiple Vertica schemas.

If the tap sends the stream_id in <schema_name>-<table_name> format then this option overrides the default_target_schema value. Note that using schema_mapping you can override the default_target_schema_select_permission value to grant SELECT permissions to different groups per schema, or optionally create indices automatically for the replicated tables (see the extended configuration example above).

Note: This is an experimental feature and it is recommended to use it via PipelineWise YAML files, which generate the object mapping in the right JSON format. For further info check a PipelineWise YAML Example.
add_metadata_columns Boolean (Default: False) Metadata columns add extra row-level information about data ingestion (i.e. when the row was read from the source, when it was inserted or deleted in Vertica, etc.). Metadata columns are created automatically by adding extra columns to the tables with the column prefix _SDC_. The column names follow the Stitch naming conventions documented at https://www.stitchdata.com/docs/data-structure/integration-schemas#sdc-columns. Enabling metadata columns will flag deleted rows by setting the _SDC_DELETED_AT metadata column. Without the add_metadata_columns option, rows deleted by singer taps will not be recognizable in Vertica.
hard_delete Boolean (Default: False) When the hard_delete option is true, DELETE SQL commands are performed in Vertica to delete rows in tables. This is achieved by continuously checking the _SDC_DELETED_AT metadata column sent by the singer tap. Because deleting rows requires metadata columns, the hard_delete option automatically enables the add_metadata_columns option as well.
data_flattening_max_level Integer (Default: 0) Object type RECORD items from taps can be transformed to flattened columns by creating columns automatically, up to this nesting level.

When the value is 0 (the default), flattening is turned off (see the example after this list).
primary_key_required Boolean (Default: True) Log-based and incremental replication on tables with no Primary Key causes duplicates when merging UPDATE events. When set to true, loading stops if no Primary Key is defined.
validate_records Boolean (Default: False) Validate every record message against the corresponding JSON schema. This option is disabled by default, in which case invalid RECORD messages fail only at load time in Vertica. Enabling this option detects invalid records earlier but can cause performance degradation.
temp_dir String (Default: platform-dependent) Directory of temporary CSV files with RECORD messages.
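
As an example of data flattening, with data_flattening_max_level set to 1 a nested RECORD such as

  {"id": 1, "address": {"city": "Budapest", "zip": "1011"}}

would be loaded into the flattened columns id, address__city and address__zip. The double-underscore column separator shown here is the convention used by PipelineWise targets and is meant as an illustration, not a definitive reference for this target.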

To run tests

  1. Define the environment variables required to run the tests

      export TARGET_VERTICA_HOST=<vertica-host>
      export TARGET_VERTICA_PORT=<vertica-port>
      export TARGET_VERTICA_USER=<vertica-user>
      export TARGET_VERTICA_PASSWORD=<vertica-password>
      export TARGET_VERTICA_DBNAME=<vertica-dbname>
      export TARGET_VERTICA_SCHEMA=<vertica-schema>
  2. Install Python dependencies in a virtual env and run nose unit and integration tests

      python3 -m venv venv
      . venv/bin/activate
      pip install --upgrade pip
      pip install .[test]
  3. To run unit tests:

      nosetests --where=tests/unit
  4. To run integration tests:

      nosetests --where=tests/integration

To run pylint

  1. Install Python dependencies and run the Python linter

      python3 -m venv venv
      . venv/bin/activate
      pip install --upgrade pip
      pip install .[test]
      pylint --rcfile .pylintrc --disable duplicate-code target_vertica/
