Greenplum (via Collector method) - v3.0.0
About Collectors
Collectors are extractors that are developed and managed by you (a customer of K).
KADA provides Python libraries that customers can use to quickly deploy a Collector.
Why you should use a Collector
There are several reasons why you may use a collector vs the direct connect extractor:
You are using the KADA SaaS offering and it cannot connect to your sources due to firewall restrictions
You want to push metadata to KADA rather than allow it to pull data, for security reasons
You want to inspect the metadata before pushing it to K
Using a collector requires you to manage:
Deploying and orchestrating the extract code
Managing a high water mark so the extract only pulls the latest metadata
Storing and pushing the extracts to your K instance.
Pre-requisites
Collector Server Minimum Requirements
Greenplum Requirements
Access to Greenplum
The user used for the extractor will need access to a number of pg_catalog tables, outlined below.
PG Catalog
Generally all users should have access to the pg_catalog tables on DB creation. In the event the user doesn’t have access, explicit grants will need to be done per new DB in Greenplum.
GRANT USAGE ON SCHEMA pg_catalog TO <kada user>;
GRANT SELECT ON ALL TABLES IN SCHEMA pg_catalog TO <kada user>;
The user used for the extraction must also be able to connect to the databases needed for extraction.
PG Tables
These tables are per database in Greenplum:
pg_attribute
pg_class
pg_namespace
pg_proc
pg_database
pg_language
pg_type
pg_collation
pg_depend
pg_constraint
pg_roles
pg_auth_members
Databases
The user must also be able to connect to all databases that you want onboarded. A quick access check is sketched below.
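As a sanity check before running the collector, you can verify that the user can connect to each database and read the pg_catalog tables listed above. This is a minimal sketch, assuming the psycopg2 driver (Greenplum speaks the PostgreSQL protocol); all connection values are placeholders for your environment:

```python
import psycopg2

# Placeholder connection details; replace with your environment's values.
HOST = "example.greenplum.localhost"
PORT = 5432
USER = "kada_user"
PASSWORD = "..."  # placeholder
DATABASES = ["dwh", "adw"]

# pg_catalog tables the extractor reads (per database).
TABLES = [
    "pg_attribute", "pg_class", "pg_namespace", "pg_proc",
    "pg_database", "pg_language", "pg_type", "pg_collation",
    "pg_depend", "pg_constraint", "pg_roles", "pg_auth_members",
]

for db in DATABASES:
    # The user must be able to connect to every database being onboarded.
    conn = psycopg2.connect(host=HOST, port=PORT, dbname=db,
                            user=USER, password=PASSWORD)
    with conn.cursor() as cur:
        for table in TABLES:
            # A failed SELECT here means the grants above still need to be applied.
            cur.execute(f"SELECT 1 FROM pg_catalog.{table} LIMIT 1")
            cur.fetchall()
    conn.close()
    print(f"{db}: OK")
```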
Step 1: Create the Source in K
Create a Greenplum source in K
Go to Settings, select Sources, and click Add Source
Select the “Load from File” option
Give the source a Name - e.g. Greenplum Production
Add the Host name for the Greenplum Server
Click Finish Setup
Step 2: Getting Access to the Source Landing Directory
Step 3: Install the Collector
It is recommended to use a Python environment such as pyenv or pipenv if you are not intending to install this package at the system level.
Some Python packages also have dependencies on OS-level packages, so you may be required to install additional OS packages if the install below fails.
You can download the latest Core Library via Platform Settings → Sources → Download Collectors.
You can request the whl from the KADA support team (support@kada.ai).
From release 5.33 (late October 2023) you can download the whl directly from the platform.
Run the following command to install the collector.
pip install kada_collectors_extractors_<version>-none-any.whl
You will also need to install the common library kada_collectors_lib for this collector to function properly.
pip install kada_collectors_lib-<version>-none-any.whl
Step 4: Configure the Collector
The collector requires a set of parameters to connect to and extract metadata from Greenplum.
FIELD | FIELD TYPE | DESCRIPTION | EXAMPLE |
---|---|---|---|
host | string | Greenplum host as onboarded in the K platform. Generally this is the same value as server; if you onboarded it with a different value, use that value | “example.greenplum.localhost” |
server | string | Greenplum host used to establish the connection | “example.greenplum.localhost” |
username | string | Username to log into Greenplum | “greenplum_user” |
password | string | Password to log into Greenplum | |
databases | list<string> | A list of databases to extract from Greenplum | [“dwh”, “adw”] |
port | integer | Greenplum port; the default is generally 5432 | 5432 |
output_path | string | Absolute path to the output location where files are to be written | “/tmp/output” |
mask | boolean | Whether to mask the output files | true |
compress | boolean | Whether to gzip the output files | true |
meta_only | boolean | Whether to extract metadata only. Note: as of this version only metadata can be extracted, regardless of this value | true |
These parameters can be added directly into the run, or you can pass them in via a JSON file. The following is an example you can use; it is referenced in the example run code below.
kada_greenplum_extractor_config.json
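The contents of the configuration file are not reproduced on this page. The following is a minimal sketch reconstructed from the parameter table above; all values are the example placeholders from that table:

```json
{
    "host": "example.greenplum.localhost",
    "server": "example.greenplum.localhost",
    "username": "greenplum_user",
    "password": "<password>",
    "databases": ["dwh", "adw"],
    "port": 5432,
    "output_path": "/tmp/output",
    "mask": true,
    "compress": true,
    "meta_only": true
}
```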
Step 5: Run the Collector
The following code is an example of how to run the extractor. You may need to uplift this code to meet any code standards at your organisation.
This can be executed in any python environment where the whl has been installed.
This is the wrapper script: kada_greenplum_extractor.py
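The body of the wrapper script is not reproduced on this page. The following is a minimal sketch of a typical KADA collector wrapper; the exact module paths and helper names (load_config, get_hwm, publish_hwm, get_generic_logger, Extractor) are assumptions based on the pattern described here and should be confirmed against your installed kada_collectors version:

```python
import os
import argparse

# Assumed module paths; confirm against your installed kada_collectors version.
from kada_collectors.extractors.utils import load_config, get_hwm, publish_hwm, get_generic_logger
from kada_collectors.extractors.greenplum import Extractor

get_generic_logger('root')  # Route collector logging through the root logger.

_type = 'greenplum'
dirname = os.path.dirname(__file__)
filename = os.path.join(dirname, 'kada_{}_extractor_config.json'.format(_type))

parser = argparse.ArgumentParser(description='KADA Greenplum Extractor.')
parser.add_argument('--config', '-c', dest='config', default=filename,
                    help='Location of the configuration JSON; defaults to the config in this directory.')
parser.add_argument('--name', '-n', dest='name', default=_type,
                    help='Name of the collector instance, used for the high water mark.')
args = parser.parse_args()

# Fetch the high water mark window, run the extract, then publish the new mark.
start_hwm, end_hwm = get_hwm(args.name)

ext = Extractor(**load_config(args.config))
ext.run(**{"start_hwm": start_hwm, "end_hwm": end_hwm})

publish_hwm(_type, end_hwm)
```

It can then be invoked as, for example: python kada_greenplum_extractor.py --config kada_greenplum_extractor_config.json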
Advanced options:
If you wish to maintain your own high water mark files elsewhere, you can use the above section’s script as a guide on how to call the extractor. The configuration file is simply the keyword arguments in JSON format. Refer to this document for more information: Collector Integration General Notes | Storing HWM in another location
If you are handling the runner’s external arguments yourself, you’ll need to consider additional items for the run method. Refer to this document for more information: Collector Integration General Notes | The run method. A sketch illustrating both options follows the argument list below.
username: username to sign into Greenplum
password: password to sign into Greenplum
host: Onboarded value for the Greenplum server in K
server: Host address to the Greenplum Service for a connection
databases: list of databases to extract, no spaces
port: Greenplum port
output_path: full or relative path to where the outputs should go
mask: To mask the META/DATABASE_LOG files or not
compress: To gzip output files or not
meta_only: To extract metadata only or not
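For illustration, here is a sketch combining both advanced options: calling the extractor directly with the keyword arguments above while storing the high water mark in a location you manage. The Extractor import path, the run method signature, and the timestamp format of the high water mark are assumptions; the HWM file path is hypothetical:

```python
import os
from datetime import datetime, timezone

# Assumed import path; confirm against your installed kada_collectors version.
from kada_collectors.extractors.greenplum import Extractor

HWM_PATH = "/secure/location/greenplum_hwm.txt"  # hypothetical external HWM store

def read_hwm(path):
    # Fall back to an epoch start if no previous mark exists (format is an assumption).
    if not os.path.exists(path):
        return "1970-01-01 00:00:00"
    with open(path) as f:
        return f.read().strip()

start_hwm = read_hwm(HWM_PATH)
end_hwm = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")

# Keyword arguments mirror the configuration JSON; values are placeholders.
ext = Extractor(
    username="greenplum_user",
    password="...",  # placeholder
    host="example.greenplum.localhost",
    server="example.greenplum.localhost",
    databases=["dwh", "adw"],
    port=5432,
    output_path="/tmp/output",
    mask=True,
    compress=True,
    meta_only=True,
)
ext.run(start_hwm=start_hwm, end_hwm=end_hwm)

# Persist the new mark yourself instead of calling publish_hwm.
with open(HWM_PATH, "w") as f:
    f.write(end_hwm)
```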
Step 6: Check the Collector Outputs
K Extracts
A set of files (e.g. metadata, databaselog, linkages, events, etc.) will be generated. These files will appear in the output_path directory you set in the configuration details.
High Water Mark File
A high water mark file called greenplum_hwm.txt is created in the same directory as the execution; subsequent runs use it so that only the latest metadata is extracted. This file is only produced if you call the publish_hwm method.
Step 7: Push the Extracts to K
Once the files have been validated, you can push the files to the K landing directory.
You can use Azure Storage Explorer if you want to do this manually at first. You can also push the files using Python (see the Airflow example below).
Example: Using Airflow to orchestrate the Extract and Push to K
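The original example DAG is not reproduced on this page. The following is a minimal sketch of one way to orchestrate the extract and push with Airflow, assuming the wrapper script from Step 5 and an upload to an Azure storage landing container via the azure-storage-blob SDK; the script path, output path, container name, and connection-string environment variable are all placeholders:

```python
import glob
import os
import subprocess
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

# Placeholder values; replace with your environment's details.
EXTRACTOR = "/opt/kada/kada_greenplum_extractor.py"
OUTPUT_PATH = "/tmp/output"
LANDING_CONTAINER = "lz"  # K landing container name (placeholder)

def run_extract(**_):
    # Run the wrapper script installed in Steps 3-5.
    subprocess.run(["python", EXTRACTOR], check=True)

def push_to_k(**_):
    # Push every generated extract file to the K landing directory.
    from azure.storage.blob import BlobServiceClient
    service = BlobServiceClient.from_connection_string(
        os.environ["AZURE_STORAGE_CONNECTION_STRING"])
    container = service.get_container_client(LANDING_CONTAINER)
    for path in glob.glob(os.path.join(OUTPUT_PATH, "*")):
        with open(path, "rb") as data:
            container.upload_blob(name=os.path.basename(path),
                                  data=data, overwrite=True)

with DAG(
    dag_id="kada_greenplum_collector",
    start_date=datetime(2023, 1, 1),
    schedule_interval=timedelta(days=1),
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=run_extract)
    push = PythonOperator(task_id="push_to_k", python_callable=push_to_k)
    extract >> push
```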