...

  1. Read access to the DBC and PDCRINFO schemas.

  2. Specifically these tables:

    1. PDCRINFO.DBQLogTbl (otherwise DBC.DBQLogTbl)

    2. PDCRINFO.DBQLSqlTbl (otherwise DBC.DBQLSqlTbl)

    3. DBC.TABLESV

    4. DBC.INDICESV

    5. DBC.ALL_RI_CHILDRENV
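The grants above can be applied with statements along these lines. This is a sketch only: the account name kada_user is a placeholder, and your site may prefer role-based grants.

```sql
-- Hypothetical example: grant the extractor account read access
-- to the query log and dictionary tables/views listed above.
GRANT SELECT ON PDCRINFO.DBQLogTbl TO kada_user;
GRANT SELECT ON PDCRINFO.DBQLSqlTbl TO kada_user;
GRANT SELECT ON DBC.TABLESV TO kada_user;
GRANT SELECT ON DBC.INDICESV TO kada_user;
GRANT SELECT ON DBC.ALL_RI_CHILDRENV TO kada_user;
```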

1.2. SQLServer 2012+

SQLServer access uses ODBC, so the cerebrum-odbc-ini.yaml file needs to be updated.

Code Block
apiVersion: v1
data:
  odbc.ini: |-
    [Kada Teradata Extractor]
    Description = Teradata Connection
    Driver      = /opt/teradata/client/ODBC_64/lib/tdataodbc_sb64.so
    DBCName     = <UPDATE WITH YOUR TERADATA SERVER DNS, e.g. ec2-54-66-46-3.ap-southeast-2.compute.amazonaws.com>

    [Kada SQLServer Extractor]
    Description = SQLServer Connection
    Driver      = FreeTDS
    Server      = sql1 <UPDATE WITH YOUR SERVER DNS>
    Port        = 1433
    TDS_Version = 8.0
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: cerebrum-odbc-ini

Update Server and Port to match your SQLServer instance. If there are multiple sources, create a new DSN entry for each one, using the same format as the Kada SQLServer Extractor example. Do not change the Driver or TDS_Version values.
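For example, with two SQLServer sources the DSN entries might look like the following. The second entry's name and both server addresses are placeholders; use whatever DSN names your extractor configuration references.

```ini
[Kada SQLServer Extractor]
Description = SQLServer Connection
Driver      = FreeTDS
Server      = sql-prod-1.example.com
Port        = 1433
TDS_Version = 8.0

[Kada SQLServer Extractor 2]
Description = SQLServer Connection
Driver      = FreeTDS
Server      = sql-prod-2.example.com
Port        = 1433
TDS_Version = 8.0
```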

1.3. Oracle Cloud and Oracle Analytics

Requires an Oracle wallet and the following items to be updated in the cerebrum-oic.yaml file:

  1. cwallet.sso → Binary Text

  2. ewallet.p12 → Binary Text

  3. tnsnames.ora → Text

NB: if sqlnet.ora is updated, it must have DIRECTORY="/etc/oracle_config"

Code Block
apiVersion: v1
binaryData:
  cwallet.sso: <USERs CWALLET>
  ewallet.p12: <USER EWALLET>
data:
  sqlnet.ora: |-
    WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="/etc/oracle_config")))
    SSL_SERVER_DN_MATCH=yes
  tnsnames.ora: ""
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: cerebrum-oic

To generate the binary or text replacement, run the following command; the output is printed to the console:

Code Block
kubectl create configmap test --from-file=<path to file> -o yaml --dry-run

You can use the output to replace the corresponding data/binaryData entries in each section of the cerebrum-oic.yaml file.

Alternatively, if you have all the files, pass each one as a --from-file argument to regenerate the whole ConfigMap:

Code Block
kubectl create configmap cerebrum-oic --from-file=../image/conf/cwallet.sso --from-file=../image/conf/ewallet.p12 --from-file=../image/conf/sqlnet.ora --from-file=../image/conf/tnsnames.ora  -o yaml --dry-run > cerebrum-oic.yaml

Permissions

  1. Read access to the SYS.DBA_* tables

  2. Specifically the following tables:

    1. SYS.DBA_PROCEDURES

    2. SYS.DBA_VIEWS

    3. SYS.DBA_MVIEWS

    4. SYS.DBA_CONSTRAINTS

    5. SYS.DBA_CONS_COLUMNS

    6. SYS.DBA_TAB_COLUMNS

    7. SYS.DBA_HIST_ACTIVE_SESS_HISTORY

    8. SYS.DBA_HIST_SNAPSHOT

    9. SYS.DBA_USERS

    10. SYS.DBA_HIST_SQLTEXT
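A sketch of the corresponding grants, assuming a hypothetical extractor account named kada_user. Granting SELECT_CATALOG_ROLE is an alternative that covers the DBA_* views if granting them individually is impractical.

```sql
-- Hypothetical example: grant read access on the dictionary
-- views listed above to the extractor account.
GRANT SELECT ON SYS.DBA_PROCEDURES TO kada_user;
GRANT SELECT ON SYS.DBA_VIEWS TO kada_user;
GRANT SELECT ON SYS.DBA_MVIEWS TO kada_user;
GRANT SELECT ON SYS.DBA_CONSTRAINTS TO kada_user;
GRANT SELECT ON SYS.DBA_CONS_COLUMNS TO kada_user;
GRANT SELECT ON SYS.DBA_TAB_COLUMNS TO kada_user;
GRANT SELECT ON SYS.DBA_HIST_ACTIVE_SESS_HISTORY TO kada_user;
GRANT SELECT ON SYS.DBA_HIST_SNAPSHOT TO kada_user;
GRANT SELECT ON SYS.DBA_USERS TO kada_user;
GRANT SELECT ON SYS.DBA_HIST_SQLTEXT TO kada_user;
-- Alternatively, one role grant covers the DBA_* views:
-- GRANT SELECT_CATALOG_ROLE TO kada_user;
```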

1.4. Snowflake

No additional configuration is required. Snowflake uses a native Python driver.

Permissions

  1. Read access to the SNOWFLAKE.ACCOUNT_USAGE schema

  2. User must have access to role: ACCOUNTADMIN

    1. Alternatively, grant access to another role: https://docs.snowflake.com/en/sql-reference/account-usage.html#enabling-account-usage-for-other-roles
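If ACCOUNTADMIN cannot be used directly, the linked Snowflake documentation describes granting the ACCOUNT_USAGE views to another role. For example (the role name KADA_ROLE is a placeholder):

```sql
-- Run as ACCOUNTADMIN: expose the shared SNOWFLAKE database
-- (including ACCOUNT_USAGE) to a custom role.
GRANT IMPORTED PRIVILEGES ON DATABASE SNOWFLAKE TO ROLE KADA_ROLE;
```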

1.5. Tableau

Permissions

  1. A read user for the Tableau workgroup PostgreSQL database

  2. Tableau enabled for Metadata API

    1. https://help.tableau.com/current/api/metadata_api/en-us/docs/meta_api_start.html#enable-the-tableau-metadata-api-for-tableau-server

  3. Tableau Server enabled for repository access

    1. https://help.tableau.com/current/server/en-us/perf_collect_server_repo.htm

  4. Create a Tableau Server user with one of the following access roles: Site Administrator Creator or Server Administrator

1.6. SQLServer 2012+

SQLServer Extended Events need to be set up to capture query log data.

Template for KADA’s Extended Events session. Note that this will require some tuning depending on the level of activity and the types of queries occurring in your SQLServer environment.

Code Block
--Query To Create Extended Events Session
CREATE EVENT SESSION [KADA] ON SERVER ADD EVENT sqlserver.sp_statement_completed (
	ACTION(package0.collect_system_time, package0.event_sequence, sqlos.task_time, sqlserver.client_app_name, sqlserver.client_hostname, sqlserver.database_id, sqlserver.database_name, sqlserver.nt_username, sqlserver.query_hash, sqlserver.server_instance_name, sqlserver.server_principal_name, sqlserver.server_principal_sid, sqlserver.session_id, sqlserver.session_nt_username, sqlserver.transaction_id, sqlserver.username) WHERE (
		(
			[statement] LIKE '%CREATE %'
			OR [statement] LIKE '%DROP %'
			OR [statement] LIKE '%MERGE %'
			OR [statement] LIKE '%FROM %'
			)
		AND [sqlserver].[server_principal_name] <> N'USERS_TO_EXCLUDE'
		AND [sqlserver].[is_system] = (0)
		AND NOT [statement] LIKE 'Insert into % Values %'
		AND [sqlserver].[Query_hash] <> (0)
		)
	), ADD EVENT sqlserver.sql_statement_completed (
	SET collect_statement = (1) ACTION(package0.collect_system_time, package0.event_sequence, sqlos.task_time, sqlserver.client_app_name, sqlserver.client_hostname, sqlserver.database_id, sqlserver.database_name, sqlserver.nt_username, sqlserver.query_hash, sqlserver.server_instance_name, sqlserver.server_principal_name, sqlserver.session_id, sqlserver.session_nt_username, sqlserver.transaction_id, sqlserver.username) WHERE (
		(
			[statement] LIKE '%CREATE %'
			OR [statement] LIKE '%DROP %'
			OR [statement] LIKE '%MERGE %'
			OR [statement] LIKE '%FROM %'
			)
		AND [sqlserver].[server_principal_name] <> N'USERS_TO_EXCLUDE'
		AND [sqlserver].[is_system] = (0)
		AND NOT [statement] LIKE 'Insert into % Values %'
		AND [sqlserver].[Query_hash] <> (0)
		)
	) ADD TARGET package0.event_file (SET filename = N'G:\extended events\Extendedevents.xel', max_file_size = (20), max_rollover_files = (100))
	WITH (MAX_MEMORY = 4096 KB, EVENT_RETENTION_MODE = ALLOW_MULTIPLE_EVENT_LOSS, MAX_DISPATCH_LATENCY = 30 SECONDS, MAX_EVENT_SIZE = 0 KB, MEMORY_PARTITION_MODE = NONE, TRACK_CAUSALITY = ON, STARTUP_STATE = ON)
GO

-- Check if the session is dropping events and see other data about the session
-- https://sqlperformance.com/2019/10/extended-events/understanding-event-loss-with-extended-events
SELECT
   s.name, 
   s.total_regular_buffers,
   s.regular_buffer_size,
   s.total_large_buffers,
   s.large_buffer_size,
   s.dropped_event_count,
   s.dropped_buffer_count,
   s.largest_event_dropped_size
FROM sys.dm_xe_sessions AS s;

-- Also check log growth rate. Apply filters to remove noise.
-- some filters:
-- [sqlserver].[server_principal_name] = N'name of principal'
-- [sqlserver].[is_system] = (0)
-- [sqlserver].[client_app_name] = N'name of app'

-- NB grouping by XML is not supported in SQLServer. Need to use an external tool for this.

Kubernetes setup

Update cerebrum-odbc-ini.yaml and set the server:

Code Block
[Kada SQLServer Extractor]
Description = SQLServer Connection
Driver      = FreeTDS
Server      = <SET TO YOUR SERVER ADDRESS>
Port        = 1433
TDS_Version = 8.0


Permissions

  1. Read access to the information_schema per database

  2. Permission to create extended events

  3. Permission to read the extended events log file.
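These permissions can be granted with statements along these lines. The login name kada_user is a placeholder, and the exact grants may vary by environment.

```sql
-- Hypothetical example for a login named kada_user.
-- Create and manage Extended Events sessions:
GRANT ALTER ANY EVENT SESSION TO kada_user;
-- Read session state and the .xel target files via
-- sys.dm_xe_sessions / sys.fn_xe_file_target_read_file:
GRANT VIEW SERVER STATE TO kada_user;
-- Per database, membership in db_datareader (or VIEW DEFINITION)
-- is one way to expose the information_schema metadata.
```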



1.7. Informatica (coming soon)

...