About Collectors
Collectors are extractors that are developed and managed by you (a customer of K).
...
There are several reasons why you may use a collector vs the direct connect extractor:
You are using the KADA SaaS offering and it cannot connect to your sources due to firewall restrictions
You want to push metadata to KADA rather than allow it to pull data, for security reasons
You want to inspect the metadata before pushing it to K
Using a collector requires you to manage:
Deploying and orchestrating the extract code
Managing a high water mark so the extract only pulls the latest metadata
Storing and pushing the extracts to K
...
Pre-requisites
...
Python 3.6 - 3.9
Access to the KADA Collector repository that contains the Redshift whl
...
The repository is currently hosted in KADA’s Azure Blob Storage. You will be given a SAS token to access the repository. Reach out to KADA Support (support@kada.ai) if you do not have access.
...
your K instance.
...
Pre-requisites
Python 3.6 - 3.9
Access to K landing directory
Access to Redshift (see section below)
...
Create a Redshift user. This user MUST be one of the following (we generally recommend option 2):
Be a Superuser. A superuser can view all required data; refer to https://docs.aws.amazon.com/redshift/latest/dg/r_superusers.html for details.
Code Block language sql
ALTER USER <kada user> CREATEUSER; -- GRANTS SUPERUSER
Be a Database user with:
Unrestricted SYSLOG ACCESS. Refer to https://docs.aws.amazon.com/redshift/latest/dg/c_visibility-of-data.html. This will give the user full access to the STL tables.
Code Block language sql
ALTER USER <kada user> SYSLOG ACCESS UNRESTRICTED; -- GRANTS READ ACCESS
Select Access to existing and future tables in all Schemas for each Database you want K to ingest.
List all existing Schemas in the Database by running:
Code Block language sql
SELECT DISTINCT schema_name FROM svv_all_tables; -- LIST ALL SCHEMAS
For each schema above, do the following to allow the user SELECT access to all tables inside the Schema and any new tables created in the schema thereafter (a scripted version of this loop is sketched after the code block below).
You must also do this for ANY new schemas created in the Database to ensure K has visibility of them.
Code Block language sql
GRANT USAGE ON SCHEMA <schema name> TO <kada user>;
GRANT SELECT ON ALL TABLES IN SCHEMA <schema name> TO <kada user>;
ALTER DEFAULT PRIVILEGES IN SCHEMA <schema name> GRANT SELECT ON TABLES TO <kada user>;
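If you have many schemas, you can script these grants rather than run them one at a time. The following is a minimal sketch only, not part of the collector: it assumes psycopg2 is installed, that the connecting user is allowed to issue the grants, and that the connection values and kada user name are placeholders you replace with your own.
Code Block language python
# Illustrative sketch (not part of the collector): apply the grants to every schema.
# Assumes psycopg2 is installed and the connecting user may issue these grants.
import psycopg2

KADA_USER = "kada_user"  # placeholder - replace with your Redshift user for K

conn = psycopg2.connect(
    host="<redshift host>",
    port=5439,
    dbname="<database>",
    user="<admin user>",
    password="<password>",
)
conn.autocommit = True

with conn.cursor() as cur:
    cur.execute("SELECT DISTINCT schema_name FROM svv_all_tables;")
    schemas = [row[0] for row in cur.fetchall()]

    for schema in schemas:
        # Skip system schemas; grants are only needed on your own schemas.
        if schema in ("pg_catalog", "information_schema"):
            continue
        cur.execute(f'GRANT USAGE ON SCHEMA "{schema}" TO {KADA_USER};')
        cur.execute(f'GRANT SELECT ON ALL TABLES IN SCHEMA "{schema}" TO {KADA_USER};')
        cur.execute(
            f'ALTER DEFAULT PRIVILEGES IN SCHEMA "{schema}" '
            f'GRANT SELECT ON TABLES TO {KADA_USER};'
        )

conn.close()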
PG Catalog
Access to the PG catalog tables is granted per database, but generally all users have access to them on DB creation. In the event the user doesn't have access, explicit grants will need to be done for each new DB in Redshift.
...
dev (The extractor uses the dev database as a test access point)
All other databases that you want onboarded
...
Step 1: Create the Source in K
Create a Redshift source in K
...
Give the source a Name - e.g. Redshift Production
Add the Host name for the Redshift Server
Click Finish Setup
...
Step 2: Getting Access to the Source Landing Directory
...
Step 3: Install the Collector
It is recommended to use a Python environment such as pyenv or pipenv if you are not intending to install this package at the system level.
Some Python packages also have dependencies on OS-level packages, so you may need to install additional OS packages if the install below fails.
You can download the Latest Core Library and Redshift whl via Platform Settings → Sources → Download Collectors
...
Run the following command to install the collector
Code Block
pip install kada_collectors_extractors_redshift-#.#.#-py3-none-any.whl
...
Step 4: Configure the Collector
The collector requires a set of parameters to connect to and extract metadata from Redshift.
...
Code Block
{
    "host": "",
    "username": "",
    "password": "",
    "databases": [],
    "port": 5439,
    "tunnel": false,
    "output_path": "/tmp/output",
    "mask": true
}
...
Step 5: Run the Collector
The following code is an example of how to run the extractor. You may need to uplift this code to meet any code standards at your organisation.
...
If you are handling external arguments of the runner yourself, you’ll need to consider additional items for the run method. Refer to this document for more information https://kadaai.atlassian.net/wiki/spaces/KSL/pages/1902411777/Additional+Notes#The-run-method
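As an indicative sketch only: the module path kada_collectors.extractors.redshift and the start_hwm/end_hwm arguments below mirror the Tableau-based Airflow example further down and may differ in the version you installed, so verify them against the downloaded package.
Code Block language python
# Illustrative sketch of a standalone run - verify the module path and the run
# arguments against the version of the collector you installed.
import json

# Assumed module path, mirroring the Tableau import in the Airflow example below.
from kada_collectors.extractors.redshift import Extractor

with open("kada_redshift_extractor_config.json") as f:  # hypothetical file name
    config = json.load(f)

# To be implemented by you: load the timestamp from the prior run,
# e.g. from the redshift_hwm.txt file described in Step 6.
start_hwm = "YYYY-MM-DD HH:mm:SS"
end_hwm = "YYYY-MM-DD HH:mm:SS"  # timestamp now

ext = Extractor(**config)
ext.run(start_hwm=start_hwm, end_hwm=end_hwm)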
...
Step 6: Check the Collector Outputs
K Extracts
A set of files (e.g. metadata, databaselog, linkages, events, etc.) will be generated. These files will appear in the output_path directory you set in the configuration details.
...
A high water mark file called redshift_hwm.txt is created in the same directory as the execution, and files are produced according to the configuration JSON. This file is only produced if you call the publish_hwm method.
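If you orchestrate runs yourself, you could read this file back to seed the next run's start timestamp. A minimal sketch, assuming the file simply holds the last timestamp written by publish_hwm (verify the format against your own output):
Code Block language python
# Illustrative sketch: read the previous high water mark back, if one exists.
# Assumes redshift_hwm.txt contains a single timestamp string.
import os

HWM_FILE = "redshift_hwm.txt"

if os.path.exists(HWM_FILE):
    with open(HWM_FILE) as f:
        start_hwm = f.read().strip()
else:
    start_hwm = "2000-01-01 00:00:00"  # arbitrary starting point for a first run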
...
Step 7: Push the Extracts to K
Once the files have been validated, you can push the files to the K landing directory.
You can use Azure Storage Explorer if you want to initially do this manually. You can also push the files using Python (see the sketch below and the Airflow example further down).
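A minimal sketch of the Python option is below. It reuses the AzureBlobStorage helper from the Airflow example further down, which sits in your own codebase; the container, storage account, and landing path values are placeholders (the lz/redshift/landing path is an assumption, so confirm your landing path with KADA).
Code Block language python
# Illustrative sketch: push the validated CSV extracts to the K landing directory.
# AzureBlobStorage is the customer-side helper used in the Airflow example below.
import os

from plugins.utils.azure_blob_storage import AzureBlobStorage

KADA_SAS_TOKEN = os.getenv("KADA_SAS_TOKEN")
KADA_CONTAINER = ""                         # provided by KADA
KADA_STORAGE_ACCOUNT = ""                   # provided by KADA
KADA_LANDING_PATH = "lz/redshift/landing"   # assumed path - confirm with KADA
OUTPUT_PATH = "/tmp/output"                 # the output_path from your configuration

for filename in os.listdir(OUTPUT_PATH):
    if filename.endswith(".csv"):
        AzureBlobStorage.upload_file_sas_token(
            client=KADA_SAS_TOKEN,
            storage_account=KADA_STORAGE_ACCOUNT,
            container=KADA_CONTAINER,
            blob=f"{KADA_LANDING_PATH}/{filename}",
            local_path=os.path.join(OUTPUT_PATH, filename),
        )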
...
Example: Using Airflow to orchestrate the Extract and Push to K
Code Block language python
# built-in
import os

# Installed
from airflow.operators.python_operator import PythonOperator
from airflow.models.dag import DAG
from airflow.operators.dummy import DummyOperator
from airflow.utils.dates import days_ago
from airflow.utils.task_group import TaskGroup

from plugins.utils.azure_blob_storage import AzureBlobStorage

from kada_collectors.extractors.utils import load_config, get_hwm, publish_hwm, get_generic_logger
from kada_collectors.extractors.tableau import Extractor

# To be configured by the customer.
# Note variables may change if using a different object store.
KADA_SAS_TOKEN = os.getenv("KADA_SAS_TOKEN")
KADA_CONTAINER = ""
KADA_STORAGE_ACCOUNT = ""
KADA_LANDING_PATH = "lz/tableau/landing"
KADA_EXTRACTOR_CONFIG = {
    "server_address": "http://tabserver",
    "username": "user",
    "password": "password",
    "sites": [],
    "db_host": "tabserver",
    "db_username": "repo_user",
    "db_password": "repo_password",
    "db_port": 8060,
    "db_name": "workgroup",
    "meta_only": False,
    "retries": 5,
    "dry_run": False,
    "output_path": "/set/to/output/path",
    "mask": True,
    "mapping": {}
}

# To be implemented by the customer.
# Upload to your landing zone storage.
def upload():
    output = KADA_EXTRACTOR_CONFIG['output_path']
    for filename in os.listdir(output):
        if filename.endswith('.csv'):
            file_to_upload_path = os.path.join(output, filename)

            AzureBlobStorage.upload_file_sas_token(
                client=KADA_SAS_TOKEN,
                storage_account=KADA_STORAGE_ACCOUNT,
                container=KADA_CONTAINER,
                blob=f'{KADA_LANDING_PATH}/{filename}',
                local_path=file_to_upload_path
            )

with DAG(dag_id="taskgroup_example", start_date=days_ago(1)) as dag:

    # To be implemented by the customer.
    # Retrieve the timestamp from the prior run
    start_hwm = 'YYYY-MM-DD HH:mm:SS'
    end_hwm = 'YYYY-MM-DD HH:mm:SS'  # timestamp now

    ext = Extractor(**KADA_EXTRACTOR_CONFIG)

    start = DummyOperator(task_id="start")

    with TaskGroup("taskgroup_1", tooltip="extract tableau and upload") as extract_upload:
        task_1 = PythonOperator(
            task_id="extract_tableau",
            python_callable=ext.run,
            op_kwargs={"start_hwm": start_hwm, "end_hwm": end_hwm},
            provide_context=True,
        )

        task_2 = PythonOperator(
            task_id="upload_extracts",
            python_callable=upload,
            op_kwargs={},
            provide_context=True,
        )

        # To be implemented by the customer.
        # Timestamp needs to be saved for next run
        task_3 = DummyOperator(task_id='save_hwm')

    end = DummyOperator(task_id='end')

    start >> extract_upload >> end
...