About Collectors
Collectors are extractors that are developed and managed by you (a customer of K).
...
Deploying and orchestrating the extract code
Managing a high water mark so the extract only pulls the latest metadata
Storing and pushing the extracts to your K instance.
...
Pre-requisites
Collector Server Minimum Requirements
Postgres Greenplum Requirements
Access to Postgres Greenplum
The user running the extractor will need access to a number of pg_catalog tables, outlined below.
...
Generally all users should have access to the pg_catalog tables on DB creation. In the event the user doesn’t have access, explicit grants will need to be applied per new DB in Postgres Greenplum.
```sql
GRANT USAGE ON SCHEMA pg_catalog TO <kada user>;
GRANT SELECT ON ALL TABLES IN SCHEMA pg_catalog TO <kada user>;
```
...
These tables are per database in Postgres Greenplum:
pg_attribute
pg_class
pg_namespace
pg_proc
pg_database
pg_language
pg_type
pg_collation
pg_depend
pg_sequence
pg_constraint
pg_authid
pg_roles
pg_auth_members
Databases
The user must also be able to connect to all databases that you want onboarded.
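If you want to confirm access before running the collector, the following is a minimal sketch that checks the collector user can connect to each database and read pg_catalog. It assumes the psycopg2 driver is installed (the collector itself may use a different driver), and the host, user and database values are placeholders based on the examples in this guide.

```python
# Minimal connectivity check: confirm the collector user can reach every
# database you intend to onboard and can read pg_catalog.
# Assumes psycopg2 is installed; all values below are placeholders.
import psycopg2

HOST = "example.postgresgreenplum.localhost"
PORT = 5432
USER = "<kada user>"
PASSWORD = "<password>"
DATABASES = ["dwh", "adw"]  # the databases you plan to onboard

for db in DATABASES:
    conn = psycopg2.connect(host=HOST, port=PORT, dbname=db, user=USER, password=PASSWORD)
    with conn.cursor() as cur:
        cur.execute("SELECT 1 FROM pg_catalog.pg_class LIMIT 1")
        cur.fetchone()
    conn.close()
    print(f"{db}: connection and pg_catalog access OK")
```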
...
Step 1: Create the Source in K
Create a Postgres Greenplum source in K
Go to Settings, select Sources and click Add Source
Select “Load from File” option
Give the source a Name - e.g. Postgres Greenplum Production
Add the Host name for the Postgres Greenplum Server
Click Finish Setup
...
Step 2: Getting Access to the Source Landing Directory
...
Step 3: Install the Collector
It is recommended to use a Python environment such as pyenv or pipenv if you are not intending to install this package at the system level.
...
```
pip install kada_collectors_lib-<version>-none-any.whl
```
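As a quick sanity check after installation, you can confirm the package imports cleanly. The import path below is the same one used by the wrapper script in Step 5.

```python
# Verify the collector package installed correctly by importing the
# Postgres Greenplum extractor used by the wrapper script in Step 5.
from kada_collectors.extractors.postgresgreenplum import Extractor

print("kada_collectors installed, extractor available:", Extractor.__name__)
```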
...
Step 4: Configure the Collector
The collector requires a set of parameters to connect to and extract metadata from Postgres Greenplum.
FIELD | FIELD TYPE | DESCRIPTION | EXAMPLE |
---|---|---|---|
host | string | Postgres Greenplum host as per what was onboarded in the K platform. Generally we onboard it as the same value as server, but if you did it differently, use that value | “example.postgresgreenplum.localhost” |
server | string | Postgres Greenplum host to establish a connection | “example.postgresgreenplum.localhost” |
username | string | Username to log into Postgres Greenplum | “greenplum_user” |
password | string | Password to log into Postgres Greenplum | |
databases | list<string> | A list of databases to extract from Postgres Greenplum | [“dwh”, “adw”] |
port | integer | Postgres Greenplum port, the default is generally 5432 | 5432 |
output_path | string | Absolute path to the output location where files are to be written | “/tmp/output” |
mask | boolean | To enable masking or not | true |
compress | boolean | To gzip the output or not | true |
meta_only | boolean | To extract metadata only or not. Note that in the current version only metadata can be extracted regardless of this value | true |
These parameters can be added directly into the run (a sketch of this direct approach follows the JSON example below) or you can pass the parameters in via a JSON file. The following is an example you can use that is included in the example run code below.
kada_postgresgreenplum_extractor_config.json
```json
{
    "host": "",
    "server": "",
    "username": "",
    "password": "",
    "databases": [],
    "port": 5432,
    "output_path": "/tmp/output",
    "mask": true,
    "compress": true,
    "meta_only": true
}
```
...
Step 5: Run the Collector
The following code is an example of how to run the extractor. You may need to uplift this code to meet any code standards at your organisation.
...
This is the wrapper script: kada_postgresgreenplum_extractor.py
```python
import os
import argparse
from kada_collectors.extractors.utils import load_config, get_hwm, publish_hwm, get_generic_logger
from kada_collectors.extractors.postgresgreenplum import Extractor

get_generic_logger('root') # Set to use the root logger, you can change the context accordingly or define your own logger

_type = 'postgresgreenplum'
dirname = os.path.dirname(__file__)
filename = os.path.join(dirname, 'kada_{}_extractor_config.json'.format(_type))

parser = argparse.ArgumentParser(description='KADA PostgresGreenplum Extractor.')
parser.add_argument('--config', '-c', dest='config', default=filename, help='Location of the configuration json, default is the config json in the same directory as the script.')
parser.add_argument('--name', '-n', dest='name', default=_type, help='Name of the collector instance.')
args = parser.parse_args()

start_hwm, end_hwm = get_hwm(args.name)
ext = Extractor(**load_config(args.config))
ext.test_connection()
ext.run(**{"start_hwm": start_hwm, "end_hwm": end_hwm})
publish_hwm(args.name, end_hwm)
```
Advanced options:
...
username: username to sign into Postgres Greenplum
password: password to sign into Postgres Greenplum
host: Onboarded value for the Postgres Greenplum server in K
server: Host address to the Postgres Greenplum service for a connection
databases: list of databases to extract, no spaces
port: Postgres Greenplum port
output_path: full or relative path to where the outputs should go
mask: To mask the META/DATABASE_LOG files or not
compress: To gzip output files or not
meta_only: To extract metadata only or not
...
Step 6: Check the Collector Outputs
K Extracts
A set of files (e.g. metadata, databaselog, linkages, events etc) will be generated. These files will appear in the output_path directory you set in the configuration details.
...
A high water mark file called postgresgreenplum_hwm.txt is created in the same directory as the execution, and files are produced according to the configuration JSON. The high water mark file is only produced if you call the publish_hwm method.
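A quick way to validate the run is to list what was produced. The sketch below simply prints the files in output_path and the contents of the high water mark file; the paths are the example values used elsewhere in this guide.

```python
# List the extract files and show the last published high water mark.
# Paths below are the example values used in this guide.
import os

output_path = "/tmp/output"              # output_path from the configuration JSON
hwm_file = "postgresgreenplum_hwm.txt"   # created next to the wrapper script

for name in sorted(os.listdir(output_path)):
    size = os.path.getsize(os.path.join(output_path, name))
    print(f"{name}: {size} bytes")

if os.path.exists(hwm_file):
    with open(hwm_file) as fh:
        print("high water mark:", fh.read().strip())
```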
...
Step 7: Push the Extracts to K
Once the files have been validated, you can push the files to the K landing directory.
You can use Azure Storage Explorer if you want to initially do this manually. You can push the files using Python as well (see the Airflow example below).
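If you want to script the push without Airflow, the following is a minimal sketch using the azure-storage-blob package. The storage account, container, landing path and SAS token are placeholders for the values for your K instance, and the file filter assumes CSV extracts as in the Airflow example below.

```python
# Minimal sketch: upload extract files to the K landing directory using the
# azure-storage-blob package. Account, container, landing path and SAS token
# are placeholders for the values for your K instance.
import os
from azure.storage.blob import BlobServiceClient

SAS_TOKEN = os.getenv("KADA_SAS_TOKEN")
STORAGE_ACCOUNT = "<storage account>"
CONTAINER = "<container>"
LANDING_PATH = "lz/postgresgreenplum/landing"  # assumed landing path; confirm for your instance
OUTPUT_PATH = "/tmp/output"                    # output_path from the configuration JSON

service = BlobServiceClient(
    account_url=f"https://{STORAGE_ACCOUNT}.blob.core.windows.net",
    credential=SAS_TOKEN,
)

for filename in os.listdir(OUTPUT_PATH):
    if filename.endswith(".csv"):
        blob_client = service.get_blob_client(
            container=CONTAINER, blob=f"{LANDING_PATH}/{filename}"
        )
        with open(os.path.join(OUTPUT_PATH, filename), "rb") as fh:
            blob_client.upload_blob(fh, overwrite=True)
```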
...
Example: Using Airflow to orchestrate the Extract and Push to K
```python
# built-in
import os

# Installed
from airflow.operators.python_operator import PythonOperator
from airflow.models.dag import DAG
from airflow.operators.dummy import DummyOperator
from airflow.utils.dates import days_ago
from airflow.utils.task_group import TaskGroup
from plugins.utils.azure_blob_storage import AzureBlobStorage

from kada_collectors.extractors.utils import load_config, get_hwm, publish_hwm, get_generic_logger
from kada_collectors.extractors.tableau import Extractor

# To be configed by the customer.
# Note variables may change if using a different object store.
KADA_SAS_TOKEN = os.getenv("KADA_SAS_TOKEN")
KADA_CONTAINER = ""
KADA_STORAGE_ACCOUNT = ""
KADA_LANDING_PATH = "lz/tableau/landing"
KADA_EXTRACTOR_CONFIG = {
    "server_address": "http://tabserver",
    "username": "user",
    "password": "password",
    "sites": [],
    "db_host": "tabserver",
    "db_username": "repo_user",
    "db_password": "repo_password",
    "db_port": 8060,
    "db_name": "workgroup",
    "meta_only": False,
    "retries": 5,
    "dry_run": False,
    "output_path": "/set/to/output/path",
    "mask": True,
    "mapping": {}
}

# To be implemented by the customer.
# Upload to your landing zone storage.
def upload():
    output = KADA_EXTRACTOR_CONFIG['output_path']
    for filename in os.listdir(output):
        if filename.endswith('.csv'):
            file_to_upload_path = os.path.join(output, filename)

            AzureBlobStorage.upload_file_sas_token(
                client=KADA_SAS_TOKEN,
                storage_account=KADA_STORAGE_ACCOUNT,
                container=KADA_CONTAINER,
                blob=f'{KADA_LANDING_PATH}/{filename}',
                local_path=file_to_upload_path
            )

with DAG(dag_id="taskgroup_example", start_date=days_ago(1)) as dag:

    # To be implemented by the customer.
    # Retrieve the timestamp from the prior run
    start_hwm = 'YYYY-MM-DD HH:mm:SS'
    end_hwm = 'YYYY-MM-DD HH:mm:SS' # timestamp now

    ext = Extractor(**KADA_EXTRACTOR_CONFIG)

    start = DummyOperator(task_id="start")

    with TaskGroup("taskgroup_1", tooltip="extract tableau and upload") as extract_upload:
        task_1 = PythonOperator(
            task_id="extract_tableau",
            python_callable=ext.run,
            op_kwargs={"start_hwm": start_hwm, "end_hwm": end_hwm},
            provide_context=True,
        )

        task_2 = PythonOperator(
            task_id="upload_extracts",
            python_callable=upload,
            op_kwargs={},
            provide_context=True,
        )

        # To be implemented by the customer.
        # Timestamp needs to be saved for next run
        task_3 = DummyOperator(task_id='save_hwm')

    end = DummyOperator(task_id='end')

    start >> extract_upload >> end
```
...