About Collectors
...
Pre-requisites
Python 3.6 - 3.10
Access to the KADA Collector repository that contains the Hevo whl
The repository is currently hosted in KADA’s Azure Blob Storage. You will be given a SAS token to access the repository. Reach out to KADA Support (support@kada.ai) if you do not have access.
Download the Hevo whl (e.g. kada_collectors_extractors_hevo-#.#.#-py3-none-any.whl)
Access to K landing directory
...
Collector Server Minimum Requirements
Hevo Requirements
Access to Hevo
...
Step 1: Create Hevo API key/Secret
Info
This step is performed by the Hevo Admin. Hevo documentation for creating an API Key is here: https://api-docs.hevodata.com/reference/building-your-first-api
Login to Hevo
Click on your Avatar in the top right hand corner and select Account in the drop down menu
Select API Keys in the side panel and click Generate a New API Key.
Copy the Access Key and Secret Key
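Before configuring the collector, you can confirm the key pair authenticates with a simple call against the Hevo API. The following is a minimal sketch, assuming Hevo's public v2.0 REST API with HTTP basic authentication (Access Key as username, Secret Key as password); the region prefix and the /pipelines endpoint path are illustrative, not prescribed by this guide.
Code Block
# Minimal sketch: verify a Hevo Access Key / Secret Key pair.
# Assumptions: Hevo public v2.0 REST API over HTTP basic auth;
# the region prefix ("us") and /pipelines endpoint are illustrative.
import requests

HEVO_REGION = "us"                  # your Hevo region prefix
API_KEY = "your-access-key"         # from the Hevo API Keys page
API_SECRET = "your-secret-key"

url = f"https://{HEVO_REGION}.hevodata.com/api/public/v2.0/pipelines"
resp = requests.get(url, auth=(API_KEY, API_SECRET), timeout=20)

# 200 means the key pair authenticates; 401 means it does not.
print(resp.status_code)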
...
Step 2: Create the Source in K
Go to Settings, Select Sources and click Add Source
Select Hevo as the Source Type
Select “Load from File system” option
Give the source a Name - e.g. Hevo Production
Add the Host name for the Hevo Server
Click Finish Setup
...
Step 3: Getting Access to the Source Landing Directory
...
Some Python packages also have dependencies on OS-level packages, so you may be required to install additional OS packages if the install below fails.
You can download the latest Core Library and whl via Platform Settings → Sources → Download Collectors
...
Run the following command to install the collector
Code Block
pip install kada_collectors_extractors_hevo-<version>-py3-none-any.whl
You will also need to install the common library kada_collectors_lib-1.0.1 for this collector to function properly.
Code Block
pip install kada_collectors_lib-1.0.1-py3-none-any.whl
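To verify the install, you can import the extractor in the target Python environment; both module paths below are the ones used by the wrapper script later in this guide.
Code Block
# Quick install check: both imports should succeed without error.
from kada_collectors.extractors.hevo import Extractor
from kada_collectors.extractors.utils import load_config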
...
Step 5: Configure the Collector
The collector requires a set of parameters to connect to and extract metadata from Hevo.
FIELD | FIELD TYPE | DESCRIPTION | EXAMPLE
---|---|---|---
api_key | string | API Key for Hevo |
api_secret | string | Secret for the API Key |
region | string | The region prefix for your Hevo account | “us”
mapping | JSON | Mapping file of data source names against the onboarded host and database name in K | Assuming there is a “myDSN” data source name, map it to host “myhost” and database “mydatabase” onboarded in K; snowflake type references are handled automatically
timeout | integer | Timeout in seconds allowed against the Hevo APIs | 20
output_path | string | Absolute path to the output location where files are to be written | “/tmp/output”
mask | boolean | To enable masking or not | true
compress | boolean | To gzip the output or not | true
These parameters can be added directly into the run or you can pass the parameters in via a JSON file.
KADA provides an out of the box script that reads a configuration JSON file and runs the extractor. Below is the configuration file.
kada_hevo_extractor_config.json
Code Block
{
    "api_key": "",
    "api_secret": "",
    "region": "",
    "output_path": "/tmp/output",
    "mask": true,
    "timeout": 20,
    "mapping": {
        "myDSN": {
            "host": "myhost",
            "database": "mydatabase"
        }
    },
    "compress": true
}
...
Step 6: Run the Collector
...
This can be executed in any Python environment where the whl has been installed. It will read and produce a high water mark file called hevo_hwm.txt in the same directory as the execution, and produce files according to the configuration JSON.
This is the wrapper script: kada_hevo_extractor.py
Code Block
import os
import argparse
from kada_collectors.extractors.utils import load_config, get_hwm, publish_hwm, get_generic_logger
from kada_collectors.extractors.hevo import Extractor

get_generic_logger('root') # Set to use the root logger, you can change the context accordingly or define your own logger

_type = 'hevo'
dirname = os.path.dirname(__file__)
filename = os.path.join(dirname, 'kada_{}_extractor_config.json'.format(_type))

parser = argparse.ArgumentParser(description='KADA Hevo Extractor.')
parser.add_argument('--config', '-c', dest='config', default=filename, help='Location of the configuration json, default is the config json in the same directory as the script.')
parser.add_argument('--name', '-n', dest='name', default=_type, help='Name of the collector instance.')
args = parser.parse_args()

start_hwm, end_hwm = get_hwm(args.name)
ext = Extractor(**load_config(args.config))
ext.test_connection()
ext.run(**{"start_hwm": start_hwm, "end_hwm": end_hwm})
publish_hwm(_type, end_hwm)
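To run the wrapper, place the configuration JSON in the same directory as the script (the default) and execute python kada_hevo_extractor.py, or point it at another location with python kada_hevo_extractor.py --config /path/to/kada_hevo_extractor_config.json.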
...
Code Block
from kada_collectors.extractors.hevo import Extractor

kwargs = {my args} # However you choose to construct your args
hwm_kwrgs = {"start_hwm": "end_hwm": } # The hwm values

ext = Extractor(**kwargs)
ext.run(**hwm_kwrgs)
...
Code Block
class Extractor(api_key: str = None, api_secret: str = None, region: str = None, \
                mapping: dict = {}, timeout: int = 10, output_path: str = './output', \
                mask: bool = False, compress: bool = False) -> None
...
api_key: The API Key for the registered application for access to Hevo APIs
api_secret: The API secret for the registered application for access to Hevo APIs
region: The region prefix for your Hevo account
mapping: Dict of DSNs to database and host names
timeout: Timeout for the API call
output_path: full or relative path to where the outputs should go
mask: To mask the META/DATABASE_LOG files or not
compress: To gzip output files or not
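Putting the signature and parameters together, a direct call without the wrapper script might look like the following sketch. All credential values, the mapping, and the hwm timestamps are placeholders; in practice the hwm values should come from get_hwm, and their exact format follows whatever that helper returns in your environment.
Code Block
from kada_collectors.extractors.hevo import Extractor

# Placeholder values: substitute your own credentials, mapping and paths.
ext = Extractor(
    api_key="your-access-key",
    api_secret="your-secret-key",
    region="us",
    mapping={"myDSN": {"host": "myhost", "database": "mydatabase"}},
    timeout=20,
    output_path="/tmp/output",
    mask=True,
    compress=True,
)
ext.test_connection()
# hwm values shown are illustrative; use get_hwm in practice
ext.run(start_hwm="2024-01-01 00:00:00", end_hwm="2024-01-02 00:00:00")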
...
A high water mark file called hevo_hwm.txt is created in the same directory as the execution. This file is only produced if you call the publish_hwm method.
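If you manage the run yourself, the same helpers used by the wrapper script handle the high water mark file; the sketch below mirrors how the wrapper calls them.
Code Block
from kada_collectors.extractors.utils import get_hwm, publish_hwm

# Read the high water mark for this collector name (as the wrapper does)
start_hwm, end_hwm = get_hwm('hevo')

# ... run the extractor here ...

# Writes hevo_hwm.txt so the next run picks up from end_hwm
publish_hwm('hevo', end_hwm)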
...