...

  • Python 3.6+

  • Tableau Server Version 2019.3 and above (the Tableau Metadata API requires 2019.3+).

  • Enable the Tableau Metadata API for Tableau Server

  • Record your Tableau server host

  • Create an API user for the Tableau Metadata API.

    • Record the Credentials (Username & Password)

    • The user must be a Site Administrator Creator, Server Administrator, or Site Administrator

  • Record the Tableau Postgres Database host

  • Create a DB user for the Tableau Postgres Database

    • Record the Credentials (Username & Password)

    • The user needs read access to the Tableau repository database; the default readonly user is typically used (see the connectivity check after this list)
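
As a quick pre-check, the sketch below verifies the repository credentials before you install anything. It assumes the psycopg2 package and the default port/database shown in the configuration section further down; the collector manages its own database access internally.

Code Block
languagepy
# Minimal sketch: verify the Tableau Postgres repository credentials.
# psycopg2 is an assumption here; the collector manages its own DB access.
import psycopg2

conn = psycopg2.connect(
    host="10.1.19.15",           # your Tableau Postgres Database host
    port=8060,                   # default Tableau repository port
    dbname="workgroup",          # default Tableau repository database
    user="readonly",             # default read-only repository user
    password="<YOUR PASSWORD>",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT 1")
    print("Repository connection OK:", cur.fetchone())
conn.close()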


...

    • Enabling the Metadata API requires a server restart if it is not already enabled

  • Tableau API access

    • An API user (record the username and password) needs to be created to access the Tableau API; a sign-in check is sketched after this list.

    • The user cannot be an SSO user; this is a Tableau limitation, as SSO users cannot access the Tableau API.

    • The user needs the Site Administrator Creator or Server/Site Administrator role. Roles depend on both the licensing model and the server version; see https://help.tableau.com/current/server/en-us/users_site_roles.htm

      • Site Administrator Creator is only available with the Role Based licensing model

      • Server/Site Administrator is available with both the Role Based and Core Based licensing models

  • Tableau Server access

  • Access to the KADA Collector repository

    • The repository is currently hosted in Azure Blob. You will be given a SAS token to access the repository.

    • Download the Tableau whl file (e.g. kada_collectors_extractors_tableau-2.0.0-py3-none-any.whl)
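
As referenced above, a minimal sketch to confirm the API user can sign in. It assumes the tableauserverclient package, which is not necessarily what the collector uses internally.

Code Block
languagepy
# Minimal sketch: confirm the API user can sign in to the Tableau API.
# tableauserverclient is an assumption; the collector manages its own API access.
import tableauserverclient as TSC

auth = TSC.TableauAuth("<YOUR ADMIN USER>", "<YOUR PASSWORD>")
server = TSC.Server("https://10.1.19.15", use_server_version=True)

with server.auth.sign_in(auth):
    print("Signed in OK; site id:", server.site_id)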

Install the Collector

It is recommended to use a Python environment manager such as pyenv or pipenv if you do not intend to install this package at the system level.

Some Python packages also depend on OS-level packages, so you may need to install additional OS packages if the install below fails.

Run the following command to install the collector

Code Block
pip install pipenv
pipenv install kada_collectors_extractors_tableau-2.0.0-py3-none-any.whl
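
As a quick sanity check (a minimal sketch, not part of the collector), confirm the package imports in the environment you installed into, e.g. via pipenv run python:

Code Block
languagepy
# Quick check that the collector package installed correctly
# (run inside the pipenv environment, e.g. via `pipenv run python`).
from kada_collectors.extractors.tableau import Extractor

print(Extractor)  # should print the Extractor class without an ImportError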

Configure the Collector

The collector uses a JSON configuration file to manage connection details and credentials. Create the configuration file below and place it in a directory accessible to the collector.

kada_tableau_extractor_config.json

Code Block
languagejson
{
    "server_address": "",
    "username": "",
    "password": "",
    "sites": [],
    "db_host": "",
    "db_username": "readonly",
    "db_password": "",
    "db_port": 8060,
    "db_name": "workgroup",
    "meta_only": false,
    "retries": 5,
    "dry_run": false,
    "output_path": "/tmp/output",
    "mask": true,
    "mapping": {}
}

| FIELD | FIELD TYPE | DESCRIPTION | EXAMPLE |
|---|---|---|---|
| server_address | string | Tableau server address inclusive of http/https | https://10.1.19.15 |
| username | string | Username to log into the Tableau API | “tabadmin” |
| password | string | Password to log into the Tableau API | |
| sites | list<string> | List of specific sites to extract; if left as [], all sites are extracted | [] |
| db_host | string | Generally the same as server_address, less the http/https | “10.1.19.15” |
| db_username | string | The default Tableau database user is readonly; you should not need to change this unless you actively manage the database | “readonly” |
| db_password | string | Password for the database user | |
| db_port | integer | Default is 8060 unless your Tableau is configured differently | 8060 |
| db_name | string | The default database is workgroup | “workgroup” |
| meta_only | boolean | Set to true to extract metadata only; otherwise leave as false | false |
| retries | integer | Number of times the extractor retries the API in case of intermittent failures; default is 5 | 5 |
| dry_run | boolean | A dry run produces the mapping.json file used to populate the mapping field below. It is recommended to do a dry run first to see which databases are available to map (a workflow sketch follows the mapping example below) | true |
| output_path | string | Absolute path to the output location where files are written | “/tmp/output” |
| mask | boolean | Whether to enable masking | true |
| mapping | json | Populate with the mapping.json output, where each data source name is mapped to an onboarded K host | See below |

Where analytics.adw is the onboarded database in K:

Code Block
languagejson
{
"somehost.adw": "analytics.adw"
}
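
As referenced in the table above, a sketch of the dry-run workflow: run the collector once with dry_run set to true, then fold the generated mapping.json into the config. The mapping.json location under output_path is an assumption; check where your run writes it.

Code Block
languagepy
import json

# Read the mapping.json produced by a dry run (location is an assumption;
# check your configured output_path for the generated file).
with open("/tmp/output/mapping.json") as f:
    mapping = json.load(f)

# Fold the mapping into the collector config, e.g. {"somehost.adw": "analytics.adw"},
# and switch off dry_run so the next run performs the full extraction.
with open("kada_tableau_extractor_config.json") as f:
    config = json.load(f)

config["mapping"] = mapping
config["dry_run"] = False

with open("kada_tableau_extractor_config.json", "w") as f:
    json.dump(config, f, indent=4)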

Run the Collector

Run the following command to run the collector, where kada_tableau_extractor.py contains the script below (--config is optional and defaults to the config JSON in the same directory as the script):

python kada_tableau_extractor.py --config <PATH TO kada_tableau_extractor_config.json>

Code Block
languagepy
import os
import argparse
from kada_collectors.extractors.utils import load_config, get_hwm, publish_hwm, get_generic_logger
from kada_collectors.extractors.tableau import Extractor

get_generic_logger('root') # Set to use the root logger, you can change the context accordingly or define your own logger

_type = 'tableau'
dirname = os.path.dirname(__file__)
filename = os.path.join(dirname, 'kada_{}_extractor_config.json'.format(_type))

parser = argparse.ArgumentParser(description='KADA Tableau Extractor.')
parser.add_argument('--config', '-c', dest='config', default=filename, help='Location of the configuration json, default is the config json in the same directory as the script.')
args = parser.parse_args()

start_hwm, end_hwm = get_hwm(_type)

ext = Extractor(**load_config(args.config))
ext.test_connection()
ext.run(**{"start_hwm": start_hwm, "end_hwm": end_hwm})

publish_hwm(_type, end_hwm)

This can be executed anywhere that has the wheel installed.

It reads and writes a high water mark file called tableau_hwm.txt in the same directory as the execution, and produces files according to the configuration JSON.

It creates a set of files in the output directory you set; these need to be uploaded to K (see the listing sketch below).
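
For example, you can list the generated files before pushing them (a sketch; the .csv filter mirrors the Airflow upload example further down):

Code Block
languagepy
import os

# List the extracts produced in the configured output_path
output = "/tmp/output"  # your configured output_path
for filename in sorted(os.listdir(output)):
    if filename.endswith(".csv"):
        print(filename)  # files to be uploaded to K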

Push the files to the KADA Landing Directory

Create the Tableau source in K.

Use the “Load from File” option

...

  • Give the source a name - e.g. Tableau Production

  • Add the host name for the Tableau server

  • Click Finish Setup

  • After the source is created, go to edit the source.

...

  • Record the Storage Location

...

Push the files that are generated from the collector to the Landing Directory (the Storage Location recorded above).

...

...

Example orchestrating the Collector using Airflow

The following example shows how you can orchestrate the Tableau collector using Airflow and push the files to K hosted on Azure.

Code Block
languagepy
# built-in
import os

# Installed
from airflow.operators.python_operator import PythonOperator
from airflow.models.dag import DAG
from airflow.operators.dummy import DummyOperator
from airflow.utils.dates import days_ago
from airflow.utils.task_group import TaskGroup

from plugins.utils.azure_blob_storage import AzureBlobStorage

from kada_collectors.extractors.utils import load_config, get_hwm, publish_hwm, get_generic_logger
from kada_collectors.extractors.tableau import Extractor

# To be configed by the customer.
# Note variables may change if using a different object store.
KADA_SAS_TOKEN = os.getenv("KADA_SAS_TOKEN")
KADA_CONTAINER = ""
KADA_STORAGE_ACCOUNT = ""
KADA_LANDING_PATH = "lz/tableau/landing"
KADA_EXTRACTOR_CONFIG = {
    "server_address": "http://tabserver",
    "username": "user",
    "password": "password",
    "sites": [],
    "db_host": "tabserver",
    "db_username": "repo_user",
    "db_password": "repo_password",
    "db_port": 8060,
    "db_name": "workgroup",
    "meta_only": False,
    "retries": 5,
    "dry_run": False,
    "output_path": "/set/to/output/path",
    "mask": True,
    "mapping": {}
}

# To be implemented by the customer. 
# Upload to your landing zone storage.
def upload():
  output = KADA_EXTRACTOR_CONFIG['output_path']
  for filename in os.listdir(output):
      if filename.endswith('.csv'):
        file_to_upload_path = os.path.join(output, filename)

        AzureBlobStorage.upload_file_sas_token(
            client=KADA_SAS_TOKEN,
            storage_account=KADA_STORAGE_ACCOUNT,
            container=KADA_CONTAINER, 
            blob=f'{KADA_LANDING_PATH}/{filename}', 
            local_path=file_to_upload_path
        )

with DAG(dag_id="taskgroup_example", start_date=days_ago(1)) as dag:
  
    # To be implemented by the customer.
    # Retrieve the timestamp from the prior run
    start_hwm = 'YYYY-MM-DD HH:mm:SS'
    end_hwm = 'YYYY-MM-DD HH:mm:SS' # timestamp now
    
    ext = Extractor(**KADA_EXTRACTOR_CONFIG)
    
    start = DummyOperator(task_id="start")

    with TaskGroup("taskgroup_1", tooltip="extract tableau and upload") as extract_upload:
        task_1 = PythonOperator(
            task_id="extract_tableau",
            python_callable=ext.run, 
            op_kwargs={"start_hwm": start_hwm, "end_hwm": end_hwm},
            provide_context=True,
        )
        
        task_2 = PythonOperator(
            task_id="upload_extracts",
            python_callable=upload, 
            op_kwargs={},
            provide_context=True,
        )

        # To be implemented by the customer. 
        # Timestamp needs to be saved for next run
        task_3 = DummyOperator(task_id='save_hwm')

        # Run the extract before the upload, then save the high water mark
        task_1 >> task_2 >> task_3

    end = DummyOperator(task_id='end')

    start >> extract_upload >> end

...

Advanced Usage

If you wish to maintain your own high water mark files elsewhere, you can use the above section’s script as a guide on how to call the extractor. The configuration file is simply the keyword arguments in JSON format. A sketch of maintaining your own high water mark follows the code below.

If you are handling the runner’s external arguments yourself, you’ll need to consider the following for the run method: https://kadaai.atlassian.net/wiki/spaces/DAT/pages/1894318152/Notes+v2.0.0#The-run-method

Code Block
languagepy
from kada_collectors.extractors.tableau import Extractor

kwargs = {}  # Populate with the configuration keyword arguments (see the configuration table above)
hwm_kwargs = {"start_hwm": "YYYY-MM-DD HH:mm:SS", "end_hwm": "YYYY-MM-DD HH:mm:SS"}  # The hwm values

ext = Extractor(**kwargs)
ext.run(**hwm_kwargs)
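
For example, a sketch of maintaining the high water mark yourself rather than via tableau_hwm.txt. The storage path and JSON layout here are hypothetical; the timestamp format follows the Airflow example above.

Code Block
languagepy
import datetime
import json

from kada_collectors.extractors.tableau import Extractor

HWM_PATH = "/secure/location/tableau_hwm.json"  # hypothetical location

def read_hwm(path):
    """Return the last saved high water mark, or a backfill start if none exists."""
    try:
        with open(path) as f:
            return json.load(f)["hwm"]
    except FileNotFoundError:
        return "2000-01-01 00:00:00"  # initial backfill start (assumption)

start_hwm = read_hwm(HWM_PATH)
end_hwm = datetime.datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S")

config_kwargs = {}  # populate with the configuration keyword arguments shown above
ext = Extractor(**config_kwargs)
ext.run(**{"start_hwm": start_hwm, "end_hwm": end_hwm})

# Persist the new high water mark for the next run
with open(HWM_PATH, "w") as f:
    json.dump({"hwm": end_hwm}, f)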