
Setup: Deploying K

Applicable to customer on-premise/cloud deployments

 

Setup Infrastructure

Kubernetes deployment

  1. Platform:
    Set up a Kubernetes cluster on any cloud provider (AWS, Azure, Google Cloud) or an on-premise solution (e.g. OpenShift)

  2. Environment Sizing for Deployment of KADA into your infrastructure:

    1. Small Environments ( <1 Million objects)

      1. 4 nodes, each with 4 CPU and 16GB memory; PV Storage Class: SSD disk, any IOPS

    2. Large Environments (more than 1 Million objects)

      1. 5 nodes

        1. 4 nodes, each with 4 CPU and 16GB memory; PV Storage Class: SSD disk, any IOPS

        2. 1 node with 8 CPU and 32GB memory; PV Storage Class: SSD disk with a minimum of 1100 IOPS.

    3. For very complex environments (10M+ objects) or large volumes of historical data, infrastructure requirements scale out according to data volume.

  3. Storage:

    1. Set up an object store such as AWS S3, Azure Blob Storage, etc.

      1. Setup: s3 bucket setup

    2. A minimum of 200GB storage, to be mounted as Persistent Volumes in the Kubernetes cluster. PV Class definitions need to be configured to meet the minimum IOPS requirements above.

    3. Where the organisation defines its own PV definitions (e.g. OpenShift), ensure the Reclaim Policy is set to Retain (see the sketch after this list). This is important to ensure no data is lost during a prolonged outage at the Kubernetes layer.

  4. Networking:

    1. Firewall rules may be required to enable access to HTTPS (443)

    2. You may choose to use your own Kubernetes ingress services or use the one provided by KADA's configuration scripts.
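A minimal sketch of checking and setting the Retain reclaim policy with kubectl, assuming the default context points at the target cluster and using a hypothetical Persistent Volume name (kada-postgres-pv) for illustration:

# List Persistent Volumes and their current reclaim policy
kubectl get pv

# Patch an existing PV so its data is retained even if the claim is deleted
kubectl patch pv kada-postgres-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'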

Docker deployment (not recommended for production environments)

  1. This setup requires a single machine with a minimum specification of 16 CPU, 64GB memory and 200GB storage

  2. Install docker: Install

  3. Install docker compose: Install
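The Install links above cover the official instructions; as a minimal sketch for a Debian/Ubuntu host (assuming Docker's convenience script is acceptable in your environment):

# Install Docker Engine via Docker's convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Install the Docker Compose plugin
sudo apt-get update && sudo apt-get install -y docker-compose-plugin

# Verify both installations
docker --version
docker compose version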

Configuring Access to KADA Image Repository

The KADA installation will require access to the KADA Image repository.

The Kubernetes or Docker environment will need internet access to this repository to install the platform.

If your environment is air-gapped, please advise the KADA Team and we will arrange an alternative method for loading images into your environment.

KADA will provide customers with a unique kada-client-[unique_customer_code]-key.json to access the repository.

Kubernetes deployment

To set up the access key in your Kubernetes environment, run the following.

kubectl create secret docker-registry kada-image-credentials \
  --docker-server=https://asia.gcr.io \
  --docker-username=_json_key \
  --docker-email=kada-client-[unique_customer_code]@kada-external.iam.gserviceaccount.com \
  --docker-password="$(cat kada-client-[unique_customer_code]-key.json)"

kubectl patch serviceaccount <REPLACE WITH THE SERVICE ACCOUNT NAME OR default> \
  -p "{\"imagePullSecrets\": [{\"name\": \"kada-image-credentials\"}]}"

# Run the following to test connectivity
docker pull busybox:1.28
docker pull asia.gcr.io/kada-external/postgres:1.7.0-pg11

Docker deployment

To set up the access key in your Docker environment, run the following.

docker login -u _json_key --password-stdin https://asia.gcr.io < /tmp/kada-client-[code]-key.json

# Run the following to test connectivity
docker pull busybox:1.28
docker pull asia.gcr.io/kada-external/postgres:1.7.0-pg11

KADA Platform Installation

  1. KADA is packaged as a set of configuration files.

  2. Download the latest package.

Kubernetes deployments

  1. In keycloak/k8s/keycloak-kada-realm.yaml, replace DOMAIN_URL with the base URL of your installation, e.g. https://example.com

  2. Platform credentials for internal services can be updated from their default values

    1. Edit postgres/k8s/credentials.yaml to set your own password

      POSTGRES_PASS=
    2. Edit keycloak/k8s/keycloak-credentials.yaml to set your own password

  3. Generate CA Certificates

    1. Generate CA Certs based on the domain name of the host.
      Once generated, run the following command to upload the certs into Kubernetes (covered in the sketch after this list).

    2. If you are using your own Kubernetes ingress service, the service needs to map the ports as per cortex/k8s/ingress-service.yaml. Make sure the certs have been added to your ingress service.

    3. If you are using the KADA ingress service, update cortex/k8s/ingress-service.yaml and set the following

  4. Deploy the Kubernetes config to start the platform (a sketch of the typical commands follows this list)

    1. Upload config

    2. Check environment is up

    3. Deploy ingress-service (if not using your own)
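The exact commands ship with the KADA configuration package; as a minimal sketch, assuming the package layout referenced above and a hypothetical TLS secret name (kada-cortex-tls), the steps map to kubectl roughly as follows:

# Upload the generated CA certs (step 3.1)
kubectl create secret tls kada-cortex-tls --cert=cortex.crt --key=cortex.key

# Upload the platform config (step 4.1)
kubectl apply -f postgres/k8s/ -f keycloak/k8s/ -f cortex/k8s/

# Check the environment is up (step 4.2)
kubectl get pods

# Deploy the KADA ingress service if not using your own (step 4.3)
kubectl apply -f cortex/k8s/ingress-service.yaml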

Docker deployment

  1. Edit kada_docker_compose.env and set the following values

  2. In conf/kada-realm.json, replace DOMAIN_URL with the base URL of your installation, e.g. https://example.com

  3. Generate CA Certs based on the domain name of the host. In conf/, rename your generated CA Certificates to cortex.crt and cortex.key and replace the existing files.

  4. Deploy the environment (see the sketch below)
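The deploy command ships with the package; as a minimal sketch, assuming the packaged compose file is named kada_docker_compose.yml (a hypothetical name based on the env file above):

# Start the KADA platform in the background
docker compose --env-file kada_docker_compose.env -f kada_docker_compose.yml up -d

# Confirm the containers are running
docker compose -f kada_docker_compose.yml ps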

KADA Platform Configuration

  1. Platform Settings
    On the bottom left of the screen, click the GEAR icon and select Platform Settings.

    Then set up the following properties depending on your deployment setup

  2. Integrating sources to KADA
    KADA needs to be configured for each source that you want to integrate. Setup can be configured via the KADA front end. See How to: Onboard a new source

  3. KADA Platform Initial load

    1. Set up the following Platform Setting values for the initial load

    2. KADA provides a built-in Batch manager for triggering the loading of sources.

    3. See How to: Onboard a new source | 4. Manually Triggering Source loads

    4. Once the sources have been loaded, manually trigger the following platform jobs. See How to: Manually run a data load from a source | Manually triggering a Platform job
      1. GATHER_METRICS_AND_STATS
      2. POST_PROCESS_QUERIES
      3. DAILY

  4. Schedule sources to load.
    KADA provides a scheduler to periodically load the sources you have configured.
    Set up the following Platform Setting value to enable the scheduler to run.

    Each Source can now be scheduled to run. See How to: Onboard a new source | 3. Scheduling a Source

Upgrading KADA

KADA generally releases new updates each month. See our Release versions page for the latest available version.

To check your version, see How to: Check the version of K platform

If a new version is available, perform the following steps to upgrade your platform, then follow any manual steps outlined in the release notes.

Kubernetes deployments
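The authoritative upgrade commands accompany each release package; as a minimal sketch, assuming the release is delivered as an updated set of Kubernetes config files in the same layout as the install package:

# Re-apply the updated configuration from the new release package
kubectl apply -f postgres/k8s/ -f keycloak/k8s/ -f cortex/k8s/

# Watch the pods roll over to the new image versions
kubectl get pods -w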

Docker deployments
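Similarly for Docker, a minimal sketch assuming the packaged compose file is named kada_docker_compose.yml (a hypothetical name):

# Pull the new image versions and recreate the containers
docker compose -f kada_docker_compose.yml pull
docker compose --env-file kada_docker_compose.env -f kada_docker_compose.yml up -d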

KADA Integrations

1. Updating Kubernetes Configs for Source Onboarding

Some sources in KADA require additional configuration to establish connectivity. This section details the additional configuration steps per integration source.

Queries to extract from a source may need to be altered for a customer's deployment. These can be edited in the cerebrum-extract-scripts.yaml file. Each extract script is prefixed with the relevant vendor source name.

After editing any of the config yaml files, upload the edited yaml file and restart the cerebrum services for the new configurations to take effect.
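As a minimal sketch, assuming the edited file is cerebrum-extract-scripts.yaml and that the cerebrum services run as a Kubernetes deployment named cerebrum (a hypothetical name for illustration):

# Upload the edited config
kubectl apply -f cerebrum-extract-scripts.yaml

# Restart the cerebrum services so the new configuration takes effect
kubectl rollout restart deployment cerebrum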

1.1. Teradata

Teradata uses ODBC, so an update is required to the cerebrum-odbc-ini.yaml file.

Update the DBCName to your Teradata server. If there are multiple sources, create a new DSN entry for each one, making sure to use the same format as the Kada Teradata Extractor example already populated in the file. Do not change the Driver path.
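An illustrative sketch of a DSN entry only; the authoritative format is the Kada Teradata Extractor example already in cerebrum-odbc-ini.yaml, and the DSN name and host below are hypothetical:

[Teradata_Sales]
; Copy the remaining keys, including the Driver path, unchanged from the packaged example
DBCName=teradata-prod.example.com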

Permissions:

  1. Read access to the DBC and PDCRINFO schemas.

  2. Specifically these tables:

    1. PDCRINFO.DBQLogTbl (otherwise DBC.DBQLogTbl)

    2. PDCRINFO.DBQLSqlTbl (otherwise DBC.DBQLSqlTbl)

    3. DBC.TABLESV

    4. DBC.INDICESV

    5. DBC.ALL_RI_CHILDRENV

1.2. SQLServer 2012+

SQLServer uses ODBC, so an update is required to the cerebrum-odbc-ini.yaml file.

Update the Server and Port according to your SQLServer. If there are multiple sources, create a new DSN entry for each one, making sure to use the same format as the Kada SQLServer Extractor example already populated in the file. Do not change the Driver or TDS_Version values.
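As with Teradata, an illustrative sketch only; the DSN name, host and port below are hypothetical, and the Driver and TDS_Version keys should be copied unchanged from the packaged example:

[SQLServer_Finance]
; Copy Driver and TDS_Version unchanged from the packaged example
Server=sqlserver-prod.example.com
Port=1433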

Permissions

  1. Read access to the information_schema per database

  2. Permission to create extended events

  3. Permission to read extended events log file.

Log Capture

SQLServer Extended Events need to be set up to capture query log data.

Here is a template for KADA’s extended events. Note that this will require some tuning depending on the level of activity and the types of queries occurring in your SQLServer environment.

 

1.3. Oracle 11g+, Oracle Cloud and Oracle Analytics

Requires an Oracle wallet and the following items to be updated in the cerebrum-oci.yaml file.

  1. cwallet.sso → Binary Text

  2. ewallet.p12 → Binary Text

  3. tnsnames.ora → Text

NB: if sqlnet.ora is updated, it must have DIRECTORY="/etc/oracle_config"

To generate the binary or the text replacement, run the relevant command for each file to print the output to the console (a sketch follows below).

You can use the output to replace the specific data/binaryData in each section of the cerebrum-oci.yaml file.

Alternatively, if you have all the files, add each file as a --from-file argument to generate the whole config file again.
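A minimal sketch, assuming the wallet files are in the current directory and using a hypothetical ConfigMap name (cerebrum-oci) for illustration:

# Base64-encode a binary file (cwallet.sso, ewallet.p12) for a binaryData entry
base64 -w 0 cwallet.sso

# Plain text files (tnsnames.ora) can be pasted directly into a data entry
cat tnsnames.ora

# Alternatively, regenerate the whole config from the files
kubectl create configmap cerebrum-oci \
  --from-file=cwallet.sso --from-file=ewallet.p12 --from-file=tnsnames.ora \
  --dry-run=client -o yaml > cerebrum-oci.yaml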

Permissions

  1. Read access to the SYS.DBA_* tables

  2. Specifically the following tables:

    1. SYS.DBA_PROCEDURES

    2. SYS.DBA_VIEWS

    3. SYS.DBA_MVIEWS

    4. SYS.DBA_CONSTRAINTS

    5. SYS.DBA_CONS_COLUMNS

    6. SYS.DBA_TAB_COLUMNS

    7. SYS.dba_hist_active_sess_history

    8. SYS.dba_hist_snapshot

    9. SYS.dba_users

    10. SYS.dba_hist_sqltext

1.4. Snowflake

No additional configuration is required. Snowflake uses a python native driver.

Permissions

  1. Read access to the SNOWFLAKE.ACCOUNT_USAGE schema

  2. User must have access to role: ACCOUNTADMIN

    1. Alternatively, grant access to another role. See Account Usage | Snowflake Documentation

1.5. Tableau

Permissions

  1. A read user for the workgroup Tableau postgres database

  2. Tableau enabled for Metadata API

    1. Get Started

  3. Tableau Server enabled for repository access

    1. Collect Data with the Tableau Server Repository

  4. Create a Tableau Server user with one of the following access roles: Site Administrator Creator or Server Administrator

1.6 Informatica (coming soon)

Permissions

Read access to all Informatica repository tables.

1.7 DBT (coming soon)

Permissions

Read access to a location that contains the manifest.json and catalog.json files for each dbt project.