About Collectors
Collectors are extractors that are developed and managed by you (a customer of K).
...
Deploying and orchestrating the extract code
Managing a high water mark so the extract only pulls the latest metadata
Storing and pushing the extracts to your K instance.
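The exact orchestration is up to you. The following is a minimal sketch of such a run cycle; all names are hypothetical placeholders, not part of the K collector API:

```python
# Minimal sketch of a collector run cycle you would own and schedule yourself.
# All names are hypothetical placeholders, not part of the K collector API.
import pathlib
from datetime import datetime, timezone

HWM_FILE = pathlib.Path("hwm.txt")

def load_hwm() -> str:
    # Last point successfully extracted; epoch start on the first run
    return HWM_FILE.read_text() if HWM_FILE.exists() else "1970-01-01T00:00:00+00:00"

def run_cycle() -> None:
    start_hwm = load_hwm()
    end_hwm = datetime.now(timezone.utc).isoformat()
    # 1. run your extract code for the window (start_hwm, end_hwm]
    # 2. push the output files to your K landing directory
    # 3. only advance the high water mark after a successful push
    HWM_FILE.write_text(end_hwm)

if __name__ == "__main__":
    run_cycle()
```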
...
Pre-requisites
Python 3.6 - 3.9
Access to K landing directory
Access to Redshift (see section below)
Redshift Access
Log into Redshift as a Superuser. Superuser access is required to complete the following steps.
Create a Redshift user. This user MUST be set up in one of the two ways below (we generally recommend option 2):
...
Be a Superuser, which can view all the required data. Refer to https://docs.aws.amazon.com/redshift/latest/dg/r_superusers.html.
```sql
ALTER USER <kada user> CREATEUSER; -- GRANTS SUPERUSER
```
Be a Database user with:
Unrestricted SYSLOG ACCESS (refer to https://docs.aws.amazon.com/redshift/latest/dg/c_visibility-of-data.html). This gives the user full access to the STL tables.
```sql
ALTER USER <kada user> SYSLOG ACCESS UNRESTRICTED; -- GRANTS READ ACCESS
```
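To verify the setting, you can connect as the new user and query an STL table; a minimal check, assuming the user can already connect:

```sql
-- Run as <kada user>: with UNRESTRICTED SYSLOG ACCESS this should return
-- rows for other users' queries, not just your own.
SELECT userid, query, starttime
FROM stl_query
ORDER BY starttime DESC
LIMIT 10;
```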
SELECT access to existing and future tables in all schemas for each database you want K to ingest.
List all existing schemas in the database by running:
```sql
SELECT DISTINCT schema_name FROM svv_all_tables; -- LIST ALL SCHEMAS
```
For each schema above, run grants that give the user SELECT access to all tables currently in the schema and to any new tables created in it thereafter, as sketched below.
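A sketch of those grants using standard Redshift syntax (replace <schema> and <kada user> with your values):

```sql
-- Existing tables in the schema
GRANT SELECT ON ALL TABLES IN SCHEMA <schema> TO <kada user>;
-- Tables created in the schema in the future
ALTER DEFAULT PRIVILEGES IN SCHEMA <schema> GRANT SELECT ON TABLES TO <kada user>;
```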
...
Collector Server Minimum Requirements
(Shared excerpt: see the Collector Server Minimum Requirements section of the K documentation.)
Postgres Requirements
Access to Postgres
The user used for the extractor will need access to a number of pg_catalog tables, outlined below.
PG Catalog
Generally all users should have access to the pg_catalog tables on DB creation. In the event the user doesn’t have access, explicit grants will need to be done per new DB in Postgres.
```sql
GRANT USAGE ON SCHEMA pg_catalog TO <kada user>;
GRANT SELECT ON ALL TABLES IN SCHEMA pg_catalog TO <kada user>;
```
The user used for the extraction must also be able to connect to the databases needed for extraction.
PG Tables
These tables are required per database in Postgres:
pg_class
pg_namespace
pg_proc
pg_database
pg_language
pg_type
pg_collation
pg_depend
pg_sequence
pg_constraint
pg_authid
pg_auth_members
Databases
All other databases that you want onboarded
Info: Visibility of entries in these tables depends on whether the user has SELECT access to the underlying table, so make sure SELECT is granted to the <kada user> for all tables within the database. You may need to re-apply this grant if schemas are dropped; you may also wish to apply a default grant on the schema so future tables are visible, as sketched below.
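A sketch of those grants in standard Postgres syntax (replace <schema> and <kada user> with your values):

```sql
GRANT USAGE ON SCHEMA <schema> TO <kada user>;
-- Existing tables
GRANT SELECT ON ALL TABLES IN SCHEMA <schema> TO <kada user>;
-- Default grant so future tables in the schema are also visible
ALTER DEFAULT PRIVILEGES IN SCHEMA <schema> GRANT SELECT ON TABLES TO <kada user>;
```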
...
PG Catalog
The PG tables are granted per database but generally all users should have access to them on DB creation. In the event the user doesn’t have access, explicit grants will need to be done per new DB in Redshift.
```sql
GRANT USAGE ON SCHEMA pg_catalog TO <kada user>;
GRANT SELECT ON ALL TABLES IN SCHEMA pg_catalog TO <kada user>;
```
The user used for the extraction must also be able to connect to the databases needed for extraction.
PG Tables
These tables are required per database in Redshift:
pg_class
pg_user
pg_group
pg_namespace
pg_proc
pg_database
System Tables
These tables can be accessed from any database; they are read from the leader node in Redshift (a quick access check follows the list below):
svv_all_columns
svv_all_tables
svv_tables
svv_external_tables
svv_external_schemas
stl_query
stl_querytext
stl_ddltext
stl_utilitytext
stl_query_metrics
stl_sessions
stl_connection_log
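A quick access check you can run as the <kada user> from any database (a minimal sketch):

```sql
SELECT * FROM svv_all_tables LIMIT 10;      -- catalog view, read from the leader node
SELECT * FROM stl_connection_log LIMIT 10;  -- system log table
```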
Databases
...
dev (The extractor uses the dev database as a test access point)
...
...
Step 1: Create the Source in K
Create a Redshift or Postgres source in K
Go to Settings, Select Sources and click Add Source
Select the “Load from File” option
Give the source a Name - e.g. Redshift Production / Postgres Production
Add the Host name for the Redshift / Postgres server
Click Finish Setup
...
Step 2: Getting Access to the Source Landing Directory
(Shared excerpt: see Getting Access to the Source Landing Directory in the K documentation.)
...
Some python packages also have dependencies on OS level packages, so you may be required to install additional OS packages if the install below fails.
You can download the latest Core Library and whl via Platform Settings → Sources → Download Collectors
...
install.
You can download the latest Core Library via Platform Settings → Sources → Download Collectors
...
You can request the whl from the Kada support team (support@kada.ai).
Info: From 5.33 (Late October 2023) you can download the whl directly from the Platform.
Run the following command to install the collector.
...
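The exact wheel filename depends on the version you downloaded; a typical install looks like this (the filename is a hypothetical placeholder):

```bash
pip install kada_collectors_extractors_<version>-none-any.whl
```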
The collector requires a set of parameters to connect to and extract metadata from Redshift or Postgres.
Redshift collector parameters:

| FIELD | FIELD TYPE | DESCRIPTION | EXAMPLE |
|---|---|---|---|
| host | string | Redshift host | |
| username | string | Username to log into Redshift | “test” |
| password | string | Password to log into Redshift | |
| databases | list<string> | A list of databases to extract from Redshift | [“dwh”, “adw”] |
| port | integer | Redshift port, general default is 5439 | 5439 |
| tunnel | boolean | Are you establishing an SSH tunnel to get to your Redshift? If so, specify true so it changes the connection to localhost. The SSH tunnel needs to be established before running the collector. | |
Postgres collector parameters:

| FIELD | FIELD TYPE | DESCRIPTION | EXAMPLE |
|---|---|---|---|
| host | string | Postgres host as per what was onboarded in the K platform; generally we onboard it as the same value as server, but if you did it differently, use that value | “example.postgres.localhost” |
| server | string | Postgres host to establish a connection | “example.postgres.localhost” |
| username | string | Username to log into Postgres | “postgres_user” |
| password | string | Password to log into Postgres | |
| databases | list<string> | A list of databases to extract from Postgres | [“dwh”, “adw”] |
| port | integer | Postgres port, general default is 5432 | 5432 |
| output_path | string | Absolute path to the output location where files are to be written | “/tmp/output” |
| mask | boolean | To enable masking or not | true |
| compress | boolean | To gzip the output or not | true |
| meta_only | boolean | To extract metadata only or not; note as of this current version only metadata can be extracted regardless of this value | true |
These parameters can be added directly into the run, or you can pass the parameters in via a JSON file. The following is an example (for the Postgres collector) that is used in the example run code below.
kada_postgres_extractor_config.json
```json
{
    "host": "",
    "server": "",
    "username": "",
    "password": "",
    "databases": [],
    "port": 5432,
    "output_path": "/tmp/output",
    "mask": true,
    "compress": true,
    "meta_only": true
}
```
...
Step 5: Run the Collector
...
This is the wrapper script: kada_postgres_extractor.py (for the Redshift collector the same pattern applies, with redshift in place of postgres, e.g. kada_redshift_extractor.py and _type = 'redshift')
```python
import os
import argparse
from kada_collectors.extractors.utils import load_config, get_hwm, publish_hwm, get_generic_logger
from kada_collectors.extractors.postgres import Extractor

get_generic_logger('root') # Set to use the root logger, you can change the context accordingly or define your own logger

_type = 'postgres'
dirname = os.path.dirname(__file__)
filename = os.path.join(dirname, 'kada_{}_extractor_config.json'.format(_type))

parser = argparse.ArgumentParser(description='KADA Postgres Extractor.')
parser.add_argument('--config', '-c', dest='config', default=filename, help='Location of the configuration json, default is the config json in the same directory as the script.')
parser.add_argument('--name', '-n', dest='name', default=_type, help='Name of the collector instance.')
args = parser.parse_args()

start_hwm, end_hwm = get_hwm(args.name)
ext = Extractor(**load_config(args.config))
ext.test_connection()
ext.run(**{"start_hwm": start_hwm, "end_hwm": end_hwm})
publish_hwm(args.name, end_hwm)  # publish under the same name used for get_hwm
```
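You can then run the wrapper against your config, for example (paths and instance name are arbitrary examples):

```bash
python kada_postgres_extractor.py --config /path/to/kada_postgres_extractor_config.json --name postgres
```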
...
```python
class Extractor(username: str = None, password: str = None, host: str = None,
        server: str = None, databases: list = [], port: int = 5432,
        output_path: str = './output', mask: bool = False,
        compress: bool = False, meta_only: bool = False) -> None
```
username: username to sign into Postgres
password: password to sign into Postgres
host: onboarded value for the Postgres server in K
server: host address of the Postgres service used for the connection
databases: list of databases to extract, no spaces
port: Postgres port
output_path: full or relative path to where the outputs should go
mask: To mask the META/DATABASE_LOG files or not
compress: To gzip output files or not
meta_only: To extract metadata only or not

The Redshift Extractor takes the same parameters except that port defaults to 5439, there are no server or meta_only parameters, and it adds tunnel (is an SSH tunnel being used? If yes, the connection will default to localhost).
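If you want to drive the Extractor directly rather than via the wrapper script, a minimal sketch (credential and host values are placeholders; hwm handling mirrors the wrapper above):

```python
from kada_collectors.extractors.utils import get_hwm, publish_hwm
from kada_collectors.extractors.postgres import Extractor

start_hwm, end_hwm = get_hwm('postgres')

# Placeholder connection values - substitute your own
ext = Extractor(username='postgres_user', password='<password>', host='example.postgres.localhost',
                server='example.postgres.localhost', databases=['dwh'], port=5432,
                output_path='/tmp/output', mask=True, compress=True, meta_only=True)
ext.test_connection()
ext.run(**{"start_hwm": start_hwm, "end_hwm": end_hwm})
publish_hwm('postgres', end_hwm)
```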
...
Step 6: Check the Collector Outputs
...
A high water mark file called postgres_hwm.txt (redshift_hwm.txt for the Redshift collector) is created in the same directory as the execution, and output files are produced according to the configuration JSON. The high water mark file is only produced if you call the publish_hwm method.
...