BigQuery (via Collector method) - v3.0.0
About Collectors
Pre-requisites
Collector Server Minimum Requirements
BigQuery Requirements
Access to BigQuery
Step 1: Establish BigQuery Access
This step is performed by the Google Cloud Admin
Create a Service Account in the Google Cloud console (IAM & Admin → Service Accounts)
Give the Service Account a name (e.g. KADA BQ Integration)
Select the Projects that include the BigQuery instance(s) that you want to catalog
Click Save
Create a Service Account key
Click on the Service Account
Select the Keys tab. Click on Create new key
Select the JSON option. After clicking ‘CREATE’, the JSON file will automatically download to your device. Provide this file to the user(s) who will complete the next steps.
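If you prefer the command line, the same key can be created with the gcloud CLI; the service account email below is a placeholder for the account created above:

```bash
# Download a JSON key for the service account (email is a placeholder)
gcloud iam service-accounts keys create kada-bq-key.json \
  --iam-account=kada-bq-integration@your-project.iam.gserviceaccount.com
```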
Add permission grants on the Service Account by going to the IAM page (IAM & Admin → IAM)
Click on ADD
Add the Service Account to the ‘New principals’ field.
Grant the following roles to this principal:
BigQuery Job User
BigQuery Metadata Viewer
BigQuery Read Session User
BigQuery Resource Viewer
Click SAVE
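Equivalently, the role grants can be applied with the gcloud CLI. The project id and service account email below are placeholders; the role ids correspond to the four roles listed above:

```bash
PROJECT_ID=your-project
SA_EMAIL=kada-bq-integration@${PROJECT_ID}.iam.gserviceaccount.com

# Bind each of the four BigQuery roles to the service account
for ROLE in roles/bigquery.jobUser roles/bigquery.metadataViewer \
            roles/bigquery.readSessionUser roles/bigquery.resourceViewer; do
  gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
    --member="serviceAccount:${SA_EMAIL}" \
    --role="${ROLE}"
done
```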
Step 2: Create the Source in K
Create a BigQuery source in K
Go to Settings, select Sources and click Add Source
Select the “Load from File system” option
Give the source a Name - e.g. BigQuery Production
Add the Host name for the BigQuery Server
Click Finish Setup
Step 3: Get Access to the Source Landing Directory
Step 4: Install the Collector
It is recommended to use a Python environment such as pyenv or pipenv if you are not intending to install this package at the system level.
Some Python packages also have dependencies on OS-level packages, so you may be required to install additional OS packages if the install below fails.
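For example, a minimal setup using Python's built-in venv module (pyenv or pipenv work equally well; the environment name is arbitrary):

```bash
# Create and activate an isolated environment for the collector
python3 -m venv kada-collector-env
source kada-collector-env/bin/activate
pip install --upgrade pip
```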
You can download the Latest Core Library and whl via Platform Settings → Sources → Download Collectors
Run the following command to install the collector
pip install kada_collectors_extractors_<version>-none-any.whl
You will also need to install the common library kada_collectors_lib for this collector to function properly.
pip install kada_collectors_lib-<version>-none-any.whl
Under the covers this uses the BigQuery Client API and may have OS dependencies; see https://cloud.google.com/bigquery/docs/reference/libraries for details.
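Before moving on, you can sanity-check that the key file from Step 1 works against the BigQuery Client API with a few lines of Python; the key path and project id below are placeholders:

```python
from google.cloud import bigquery
from google.oauth2 import service_account

# Load the service account key downloaded in Step 1 (placeholder path)
credentials = service_account.Credentials.from_service_account_file("kada-bq-key.json")
client = bigquery.Client(credentials=credentials, project="kada-data")

# A trivial query confirms the BigQuery Job User role is in place
print(list(client.query("SELECT 1 AS ok").result()))
```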
Step 5: Configure the Collector
The collector requires a set of parameters to connect to and extract metadata from BigQuery.
FIELD | FIELD TYPE | DESCRIPTION | EXAMPLE |
---|---|---|---|
regions | list<string> | List of valid regions to inspect against for data, see https://cloud.google.com/bigquery/docs/locations for the list of valid regions | “us” |
projects | list<string> | List of project ids to inspect across the regions specified | “kada-data” |
host | string | This is the host that was onboarded in K for BigQuery | “bigquery” |
json_credentials | JSON | See permissions section on how to download the credentials json to assign to this value | See key file example below |
output_path | string | Absolute path to the output location where files are to be written | “/tmp/output” |
mask | boolean | To enable masking or not | true |
compress | boolean | To gzip the output or not | true |

Example json_credentials value (the key file downloaded in Step 1):

```json
{
    "type": "service_account",
    "project_id": "kada-data",
    "private_key_id": "",
    "private_key": "",
    "client_email": "kada.iam.gserviceaccount.com",
    "client_id": "1234",
    "auth_uri": "https://accounts.google.com/o/oauth2/auth",
    "token_uri": "https://oauth2.googleapis.com/token",
    "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
    "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/kada.iam.gserviceaccount.com"
}
```
These parameters can be added directly into the run method, or you can pass the parameters in via a JSON file.
KADA provides an out-of-the-box script that reads a configuration JSON file and runs the extractor. Below is the configuration file.
kada_bigquery_extractor_config.json
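As a guide, a populated configuration file might look like the following. All values are illustrative, and the json_credentials object is the full key file content from Step 1 (truncated here to two fields for brevity):

```json
{
    "regions": ["us"],
    "projects": ["kada-data"],
    "host": "bigquery",
    "json_credentials": {
        "type": "service_account",
        "project_id": "kada-data"
    },
    "output_path": "/tmp/output",
    "mask": true,
    "compress": true
}
```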
Step 6: Run the Collector
The following code is an example of how to run the extractor. You may need to uplift this code to meet any code standards at your organisation.
This can be executed in any Python environment where the whl has been installed. It will read and produce a high water mark file called bigquery_hwm.txt in the same directory as the execution, and produce output files according to the configuration JSON.
This is the wrapper script: kada_bigquery_extractor.py
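In outline, the wrapper does the following; the import path and class name here are assumptions for illustration, so defer to the kada_bigquery_extractor.py shipped with the package:

```python
import json

# Hypothetical import path: check the shipped wrapper script for the real one
from kada_collectors.extractors.bigquery import Extractor

# The configuration file is simply the run keyword arguments in JSON format
with open("kada_bigquery_extractor_config.json") as f:
    config = json.load(f)

extractor = Extractor(**config)
extractor.run()          # writes the extract files to output_path
extractor.publish_hwm()  # writes bigquery_hwm.txt next to the execution
```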
Advanced options:
If you wish to maintain your own high water mark files elsewhere, you can use the above section's script as a guide on how to call the extractor. The configuration file is simply the keyword arguments in JSON format. Refer to Collector Integration General Notes | Storing HWM in another location for more information.
If you are handling external arguments of the runner yourself, you will need to consider additional items for the run method. Refer to Collector Integration General Notes | The run method for more information. The run method takes the following arguments:
regions: The list of regions specified by user to extract
projects: The list of projects specified by user to extract
host: The host value onboarded in K
json_credentials: The json credentials for connection to BQ
sql: The list of SQL queries that will be executed by the program
output_path: full or relative path to where the outputs should go
mask: To mask the META/DATABASE_LOG files or not
compress: To gzip output files or not
Step 7: Check the Collector Outputs
K Extracts
A set of files (e.g. metadata, databaselog, linkages, events, etc.) will be generated. These files will appear in the output_path directory you set in the configuration details.
High Water Mark File
A high water mark file called bigquery_hwm.txt is created in the same directory as the execution. This file is only produced if you call the publish_hwm method.
If you prefer file-managed HWM, you can edit the location of the HWM by following the instructions in Collector Integration General Notes | Storing High Water Marks (HWM)
Step 8: Push the Extracts to K
Once the files have been validated, you can push the files to the K landing directory.
You can use Azure Storage Explorer if you want to initially do this manually. You can push the files using Python as well (see the Airflow example below).
Example: Using Airflow to orchestrate the Extract and Push to K
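The sketch below shows the general pattern only: one task runs the extractor, a second pushes the outputs to the K landing directory. The Airflow scaffolding is standard; the extractor import path, config and output locations, connection string and container name are all assumptions to replace with your own values (the Azure Storage Explorer mention above suggests the landing directory is Azure blob storage; adjust if yours differs).

```python
import glob
import json
import os
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from azure.storage.blob import BlobServiceClient


def extract_bigquery():
    # Hypothetical import path: defer to the shipped wrapper script
    from kada_collectors.extractors.bigquery import Extractor

    with open("/opt/kada/kada_bigquery_extractor_config.json") as f:
        config = json.load(f)
    extractor = Extractor(**config)
    extractor.run()
    extractor.publish_hwm()


def push_to_k():
    # Placeholder connection string and container for the K landing directory
    client = BlobServiceClient.from_connection_string("<K_LANDING_CONNECTION_STRING>")
    for path in glob.glob("/tmp/output/*"):
        blob = client.get_blob_client(container="landing", blob=os.path.basename(path))
        with open(path, "rb") as data:
            blob.upload_blob(data, overwrite=True)


with DAG(
    dag_id="kada_bigquery_extract_push",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_bigquery)
    push = PythonOperator(task_id="push_to_k", python_callable=push_to_k)
    extract >> push
```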