About Collectors
Pre-requisites
Python 3.6 - 3.10
Tableau Server version 2019.3 and above.
Enable the Tableau Metadata API for Tableau Server (via tsm maintenance metadata-services enable).
This requires a server restart if not already enabled.
Tableau API access
An API user (record the username and password) needs to be created to access the Tableau API.
The user cannot be an SSO user; this is a Tableau limitation, as SSO users cannot access the Tableau API.
The user needs either the Site Administrator Creator or the Server/Site Administrator role. Roles depend on both the licensing model and the server version; see https://help.tableau.com/current/server/en-us/users_site_roles.htm
Site Administrator Creator is only available on the Role Based Licensing Model.
Server/Site Administrator is available on both the Role Based and Core Based Licensing Models.
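To verify the API user before installing the collector, you can attempt a REST API sign-in. A minimal sketch, assuming the tableauserverclient package (not required by the collector itself) and placeholder server URL and credentials:

```python
# Minimal sign-in check for the Tableau API user created above.
# Assumes: pip install tableauserverclient; placeholder URL/credentials.
import tableauserverclient as TSC

server = TSC.Server("https://your-tableau-server", use_server_version=True)
auth = TSC.TableauAuth("tabadmin", "your-password")  # the non-SSO API user

with server.auth.sign_in(auth):
    # If sign-in fails (SSO user, insufficient role, etc.) an error is raised here.
    print("Signed in OK, site id:", server.site_id)
```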
Tableau Repository access
Follow the instructions at https://help.tableau.com/current/server/en-us/perf_collect_server_repo.htm to create a user that can access the Tableau repository.
Note that the default Tableau repository user is called readonly.
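You can likewise verify the repository credentials with a direct PostgreSQL connection. A minimal sketch, assuming the psycopg2 package and the default port and database covered in the configuration section below:

```python
# Minimal connectivity check for the Tableau repository (a PostgreSQL database).
# Assumes: pip install psycopg2-binary; placeholder host/password.
import psycopg2

conn = psycopg2.connect(
    host="your-tableau-server",  # generally the server address without http/https
    port=8060,                   # default repository port
    dbname="workgroup",          # default repository database
    user="readonly",             # default repository user
    password="your-db-password",
)
with conn.cursor() as cur:
    cur.execute("SELECT 1")
    print("Repository connection OK")
conn.close()
```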
Access to the KADA Collector repository
The repository is currently hosted in Azure Blob. You will be given a SAS token to access the repository. Reach out to KADA Support (support@kada.ai) if you do not have access.
Download the tableau whl (e.g. kada_collectors_extractors_tableau-#.#.#-py3-none-any.whl)
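The wheel can be fetched with any HTTP client by appending the SAS token to the blob URL. A minimal sketch, assuming the requests package; the storage account, container, and wheel version shown are placeholders:

```python
# Hypothetical download of the collector wheel using the SAS token from KADA.
# The storage account, container, and wheel version below are placeholders.
import requests

blob_url = ("https://youraccount.blob.core.windows.net/yourcontainer/"
            "kada_collectors_extractors_tableau-2.0.0-py3-none-any.whl")
sas_token = "your-sas-token"

resp = requests.get(f"{blob_url}?{sas_token}", timeout=120)
resp.raise_for_status()

with open("kada_collectors_extractors_tableau-2.0.0-py3-none-any.whl", "wb") as f:
    f.write(resp.content)
```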
Step 1: Create the Source in K
Create a Tableau source in K
Go to Settings, select Sources, and click Add Source.
Select the "Load from File" option.
Give the source a Name - e.g. Tableau Production
Add the Host name for the Tableau server
Click Finish Setup
After the source is created, continue to Step 2.
Step 2: Getting Access to the Landing Directory
Go and edit the source created in Step 1 and record the Storage Location. This is the landing directory where the collector's output files will be uploaded.
Install the Collector
It is recommended to use a Python environment such as pyenv or pipenv if you do not intend to install this package at the system level.
Some Python packages also depend on OS-level packages, so you may need to install additional OS packages if the install below fails.
Run the following command to install the collector:

```bash
pip install kada_collectors_extractors_tableau-2.0.0-py3-none-any.whl
```
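To confirm the install, check that the extractor imports cleanly from the same environment (this is the same import the runner script below uses):

```python
# Quick post-install sanity check: the import fails if the wheel (or one of
# its dependencies) did not install correctly.
from kada_collectors.extractors.tableau import Extractor

print(Extractor)
```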
Configure the Collector
The collector uses a configuration JSON file to manage connection details and credentials. Create the configuration file below and place it in a directory accessible to the collector.
kada_tableau_extractor_config.json
{ "server_address": "", "username": "", "password": "", "sites": [], "db_host": "", "db_username": "readonly", "db_password": "", "db_port": 8060, "db_name": "workgroup", "meta_only": false, "retries": 5, "dry_run": false, "output_path": "/tmp/output", "mask": true, "mapping": {} }
| FIELD | FIELD TYPE | DESCRIPTION | EXAMPLE |
|---|---|---|---|
| server_address | string | Tableau server address, inclusive of http/https | |
| username | string | Username to log into the Tableau API | "tabadmin" |
| password | string | Password to log into the Tableau API | |
| sites | list<string> | List of specific sites to extract; if left as [], all sites are extracted | [] |
| db_host | string | Generally the same as server_address, less the http/https | "10.1.19.15" |
| db_username | string | By default the Tableau repository user is readonly; you should not need to change this unless you actively manage the database | "readonly" |
| db_password | string | Password for the repository database user | |
| db_port | integer | Default is 8060 unless your Tableau is configured differently | 8060 |
| db_name | string | Default repository database is workgroup | "workgroup" |
| meta_only | boolean | Set to true to extract metadata only; otherwise leave as false | false |
| retries | integer | Number of times the extractor should retry the API in case of intermittent failures; default is 5 | 5 |
| dry_run | boolean | A dry run produces the mapping.json file, which is used to populate the mapping field below. It is recommended you do a dry run first to see what databases are available to map (see the dry-run sketch under Run the Collector) | true |
| output_path | string | Absolute path to the output location where files are to be written | "/tmp/output" |
| mask | boolean | Whether to enable masking | true |
| mapping | json | Populate with the mapping.json output, where each data source name is mapped to an onboarded K host | {"somehost.adw": "analytics.adw"}, where analytics.adw is the database onboarded in K |
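Before running the collector, you can sanity-check that the file parses as JSON and contains every field from the table above. A minimal sketch using only the standard library; the file path is an assumption, so adjust as needed:

```python
# Validate the configuration file before the first run.
import json

REQUIRED_FIELDS = {
    "server_address", "username", "password", "sites", "db_host",
    "db_username", "db_password", "db_port", "db_name", "meta_only",
    "retries", "dry_run", "output_path", "mask", "mapping",
}

with open("kada_tableau_extractor_config.json") as f:  # adjust path as needed
    config = json.load(f)

missing = REQUIRED_FIELDS - config.keys()
if missing:
    raise SystemExit(f"Config is missing fields: {sorted(missing)}")
print("Config contains all expected fields.")
```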
Run the Collector
Run the following script to execute the collector.
kada_tableau_extractor.py
```python
import os
import argparse

from kada_collectors.extractors.utils import load_config, get_hwm, publish_hwm, get_generic_logger
from kada_collectors.extractors.tableau import Extractor

# Set to use the root logger; you can change the context accordingly or define your own logger
get_generic_logger('root')

_type = 'tableau'
dirname = os.path.dirname(__file__)
filename = os.path.join(dirname, 'kada_{}_extractor_config.json'.format(_type))

parser = argparse.ArgumentParser(description='KADA Tableau Extractor.')
parser.add_argument('--config', '-c', dest='config', default=filename,
                    help='Location of the configuration json, default is the config json in the same directory as the script.')
args = parser.parse_args()

start_hwm, end_hwm = get_hwm(_type)

ext = Extractor(**load_config(args.config))
ext.test_connection()
ext.run(**{"start_hwm": start_hwm, "end_hwm": end_hwm})

publish_hwm(_type, end_hwm)
```
This can be executed anywhere that has the wheel installed.
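Since the mapping field is easiest to populate from a dry run, a first run typically looks like the sketch below. Where mapping.json is written is an assumption here (the configured output_path); check your run's output for its actual location.

```python
# Sketch of the recommended dry-run-first workflow: produce mapping.json,
# inspect it, then copy it into the "mapping" field of the config.
import json
import os

from kada_collectors.extractors.utils import load_config, get_hwm
from kada_collectors.extractors.tableau import Extractor

# load_config returns the keyword arguments from the JSON file as a dict.
config = load_config("kada_tableau_extractor_config.json")
config["dry_run"] = True  # a dry run only produces the mapping output

start_hwm, end_hwm = get_hwm("tableau")
Extractor(**config).run(start_hwm=start_hwm, end_hwm=end_hwm)

# Assumption: mapping.json lands in the configured output_path.
with open(os.path.join(config["output_path"], "mapping.json")) as f:
    print(json.dumps(json.load(f), indent=2))
```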
Outputs from the Collector
Extracts to load to K
A set of files (metadata, logs, roles, etc.) will be generated. These files will appear in the output_path directory set in the configuration.
High Water Mark File
A high water mark file called tableau_hwm.txt is created in the same directory as the execution.
The High Water Mark file is used to mark when the last run was executed.
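If you need to inspect or reset the high water mark between runs, the file is plain text. A minimal sketch (the exact timestamp format inside the file is not documented here, so treat the contents as opaque):

```python
# Inspect the high water mark left by the previous run. The file lives in
# the directory the collector was executed from.
from pathlib import Path

hwm_file = Path("tableau_hwm.txt")
if hwm_file.exists():
    print("Last run high water mark:", hwm_file.read_text().strip())
else:
    print("No previous run recorded yet.")
```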
Upload the Extracts to K
Once the collector has generated the extract files, upload them to the Storage Location recorded in Step 2.
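If your Storage Location is an Azure Blob container (as in the Airflow example below), the upload can be scripted with the azure-storage-blob package. A minimal sketch; the account URL, container, and landing prefix are placeholders:

```python
# Hypothetical upload of the generated extracts to the K landing directory.
# Assumes: pip install azure-storage-blob; placeholder connection details.
import os

from azure.storage.blob import ContainerClient

container = ContainerClient(
    account_url="https://youraccount.blob.core.windows.net",  # placeholder
    container_name="yourcontainer",                           # placeholder
    credential="your-sas-token",
)

output = "/tmp/output"  # the output_path from your configuration
for filename in os.listdir(output):
    if filename.endswith(".csv"):
        with open(os.path.join(output, filename), "rb") as data:
            container.upload_blob(f"lz/tableau/landing/{filename}", data, overwrite=True)
```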
Example orchestrating the Collector using Airflow
The following example shows how you can orchestrate the Tableau collector using Airflow and push the files to K hosted on Azure. The code is not expected to be used as-is, but rather as a template for your own code.
```python
# built-in
import os

# Installed
from airflow.operators.python_operator import PythonOperator
from airflow.models.dag import DAG
from airflow.operators.dummy import DummyOperator
from airflow.utils.dates import days_ago
from airflow.utils.task_group import TaskGroup
from plugins.utils.azure_blob_storage import AzureBlobStorage

from kada_collectors.extractors.utils import load_config, get_hwm, publish_hwm, get_generic_logger
from kada_collectors.extractors.tableau import Extractor

# To be configured by the customer.
# Note: variables may change if using a different object store.
KADA_SAS_TOKEN = os.getenv("KADA_SAS_TOKEN")
KADA_CONTAINER = ""
KADA_STORAGE_ACCOUNT = ""
KADA_LANDING_PATH = "lz/tableau/landing"
KADA_EXTRACTOR_CONFIG = {
    "server_address": "http://tabserver",
    "username": "user",
    "password": "password",
    "sites": [],
    "db_host": "tabserver",
    "db_username": "repo_user",
    "db_password": "repo_password",
    "db_port": 8060,
    "db_name": "workgroup",
    "meta_only": False,
    "retries": 5,
    "dry_run": False,
    "output_path": "/set/to/output/path",
    "mask": True,
    "mapping": {}
}

# To be implemented by the customer.
# Upload to your landing zone storage.
def upload():
    output = KADA_EXTRACTOR_CONFIG['output_path']
    for filename in os.listdir(output):
        if filename.endswith('.csv'):
            file_to_upload_path = os.path.join(output, filename)
            AzureBlobStorage.upload_file_sas_token(
                client=KADA_SAS_TOKEN,
                storage_account=KADA_STORAGE_ACCOUNT,
                container=KADA_CONTAINER,
                blob=f'{KADA_LANDING_PATH}/{filename}',
                local_path=file_to_upload_path
            )

with DAG(dag_id="taskgroup_example", start_date=days_ago(1)) as dag:

    # To be implemented by the customer.
    # Retrieve the timestamp from the prior run
    start_hwm = 'YYYY-MM-DD HH:mm:SS'
    end_hwm = 'YYYY-MM-DD HH:mm:SS'  # timestamp now

    ext = Extractor(**KADA_EXTRACTOR_CONFIG)

    start = DummyOperator(task_id="start")

    with TaskGroup("taskgroup_1", tooltip="extract tableau and upload") as extract_upload:
        task_1 = PythonOperator(
            task_id="extract_tableau",
            python_callable=ext.run,
            op_kwargs={"start_hwm": start_hwm, "end_hwm": end_hwm},
            provide_context=True,
        )

        task_2 = PythonOperator(
            task_id="upload_extracts",
            python_callable=upload,
            op_kwargs={},
            provide_context=True,
        )

        # To be implemented by the customer.
        # Timestamp needs to be saved for next run
        task_3 = DummyOperator(task_id='save_hwm')

    end = DummyOperator(task_id='end')

    start >> extract_upload >> end
```
Advanced Usage
If you wish to maintain your own high water mark files elsewhere, you can use the script in the section above as a guide on how to call the extractor. The configuration file is simply the keyword arguments in JSON format.
If you are handling the runner's external arguments yourself, you'll need to consider the following notes on the run method: https://kadaai.atlassian.net/wiki/spaces/DAT/pages/1894318152/Notes+v2.0.0#The-run-method
```python
from kada_collectors.extractors.tableau import Extractor

kwargs = {}  # construct your keyword arguments however you choose (see the configuration table above)
hwm_kwargs = {"start_hwm": "2023-01-01 00:00:00", "end_hwm": "2023-01-02 00:00:00"}  # your hwm values (placeholder timestamps shown)

ext = Extractor(**kwargs)
ext.run(**hwm_kwargs)
```