About Collectors
Collectors are extractors that are developed and managed by you (a customer of K).
...
Deploying and orchestrating the extract code
Managing a high water mark so the extract only pulls the latest metadata
Storing and pushing the extracts to your K instance.
...
Pre-requisites
...
Python 3.6 - 3.9
...
Collector Server Minimum Requirements
Redshift Requirements
Access to Redshift (see section below)
Redshift Access
Log into Redshift as a Superuser. Superuser access is required to complete the following steps.
...
The collector requires a set of parameters to connect to and extract metadata from Redshift.
FIELD | FIELD TYPE | DESCRIPTION | EXAMPLE |
---|---|---|---|
host | string | Redshift host. This value must match the host name onboarded in K. | |
server | string | Redshift DNS name / IP address used to construct the connection string. This allows the collector to connect to Redshift via a different IP / DNS name than the host name onboarded in K. In most cases this value will be the same as host, unless you are unable to connect using the host name due to networking configuration. | OR 10.1.1.2 |
username | string | Username to log into Redshift | “test” |
password | string | Password to log into Redshift | |
databases | list<string> | A list of databases to extract from Redshift | [“dwh”, “adw”] |
port | integer | Redshift port, general default is 5439 | 5439 |
tunnel | boolean | Set to true if you connect to Redshift through an SSH tunnel; the collector will then connect via localhost. The SSH tunnel must be established before running the collector (see the example command after this table). | false
output_path | string | Absolute path to the output location where files are to be written | “/tmp/output” |
mask | boolean | Whether to mask the extracted output | true
compress | boolean | Whether to gzip the output | true
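If tunnel is set to true, the SSH tunnel must already be in place when the collector runs. As a rough illustration only (the host names and user below are placeholders, not values from your environment), a local port forward such as `ssh -N -L 5439:<redshift-host>:5439 <user>@<bastion-host>` would let the collector reach Redshift on localhost:5439.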
These parameters can be added directly into the run script, or you can pass them in via a JSON file. The following is an example config you can use; it is loaded by the example run code below.
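As a guide only, a config file for the parameters above could look like the following. All values are placeholders that you should replace with your own; the file name matches the default the run script below looks for (kada_redshift_extractor_config.json in the same directory as the script).

```json
{
    "host": "",
    "server": "",
    "username": "",
    "password": "",
    "databases": [""],
    "port": 5439,
    "tunnel": false,
    "output_path": "/tmp/output",
    "mask": true,
    "compress": true
}
```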
...
```python
import os
import argparse
from kada_collectors.extractors.utils import load_config, get_hwm, publish_hwm, get_generic_logger
from kada_collectors.extractors.redshift import Extractor

get_generic_logger('root') # Set to use the root logger, you can change the context accordingly or define your own logger

_type = 'redshift'
dirname = os.path.dirname(__file__)
filename = os.path.join(dirname, 'kada_{}_extractor_config.json'.format(_type))

parser = argparse.ArgumentParser(description='KADA Redshift Extractor.')
parser.add_argument('--config', '-c', dest='config', default=filename, help='Location of the configuration json, default is the config json in the same directory as the script.')
parser.add_argument('--name', '-n', dest='name', default=_type, help='Name of the collector instance.')
args = parser.parse_args()

# Fetch the current high water mark, run the extractor, then publish the new high water mark
start_hwm, end_hwm = get_hwm(args.name)
ext = Extractor(**load_config(args.config))
ext.test_connection()
ext.run(**{"start_hwm": start_hwm, "end_hwm": end_hwm})
publish_hwm(_type, end_hwm)
```
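To run the collector, execute the script with Python and point it at your config file, for example (the script file name here is illustrative): `python kada_redshift_extractor.py --config /path/to/kada_redshift_extractor_config.json --name redshift`. Both arguments are optional; --config defaults to the JSON file in the same directory as the script and --name defaults to redshift.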
...