Athena (via Collector method) - v3.0.0
About Collectors
Pre-Requisites
Collector Server Minimum Requirements
Athena Requirements
Access to Athena
Step 1: Establish Athena Access
It is advised that you create a new Role and a separate s3 bucket for the service user provided to KADA, with a policy that allows the access described below. See Identity and access management in Athena - Amazon Athena.
The service user/account/role will require permissions to do the following
Execute queries against Athena with access to the INFORMATION_SCHEMA, in particular the following tables:
information_schema.views
information_schema.tables
information_schema.columns
Executing queries in Athena requires an s3 bucket to temporarily store results.
The policy must also allow Read, Write and List access to objects within that bucket; correspondingly, the bucket must have a policy that allows the service user/account/role to do the same.
Call the following Athena APIs:
list_databases
list_table_metadata
list_query_executions
list_work_groups
batch_get_query_executions
start_query_execution
get_query_execution
The service user/account/role will need permissions to access all workgroups in order to extract all data. If you omit workgroups, their information will not be extracted and you may not see the complete picture in K.
See IAM policies for accessing workgroups - Amazon Athena on how to add policy entries for fine grained control at the workgroup level. Note that the extractor runs queries on Athena; if you do choose to restrict workgroup access, ensure that query based actions (e.g. StartQueryExecution) are allowed for the workgroup the service user/account/role is associated with.
Note that user usage will be associated at the workgroup level rather than with individual users; these workgroups are published as users in K in the form “athena_workgroup_<name>”.
The following is an example Role Policy that allows Athena access with least privilege actions. This example allows the ACCOUNT ARN to assume the role; note the variable ATHENA RESULTS BUCKET NAME. You may also choose to assign the policy directly to a new user and use that user without assuming roles. If you do wish to assume a role, please note down the role ARN to be used when onboarding/extracting.
AWSTemplateFormatVersion: "2010-09-09"
Description: 'AWS IAM Role - Athena and Cloudtrail Access to KADA'
Resources:
  KadaAthenaRole:
    Type: "AWS::IAM::Role"
    Properties:
      RoleName: "KadaAthenaRole"
      MaxSessionDuration: 43200
      Path: "/"
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: "Allow"
            Principal:
              AWS: "[ACCOUNT ARN]"
            Action: "sts:AssumeRole"
  KadaAthenaPolicy:
    Type: 'AWS::IAM::Policy'
    Properties:
      PolicyName: root
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action:
              - athena:BatchGetQueryExecution
              - athena:GetQueryExecution
              - athena:GetQueryResults
              - athena:GetQueryResultsStream
              - athena:ListQueryExecutions
              - athena:StartQueryExecution
              - athena:ListWorkGroups
              - athena:ListDataCatalogs
              - athena:ListDatabases
              - athena:ListTableMetadata
            Resource: '*'
          - Effect: Allow
            Action:
              - s3:GetBucketLocation
              - s3:GetObject
              - s3:ListBucket
              - s3:ListBucketMultipartUploads
              - s3:ListMultipartUploadParts
              - s3:AbortMultipartUpload
              - s3:PutObject
              - s3:PutBucketPublicAccessBlock
              - s3:DeleteObject
            Resource:
              - arn:aws:s3:::[ATHENA RESULTS BUCKET NAME]
              # object-level ARN needed for GetObject/PutObject/DeleteObject on query results
              - arn:aws:s3:::[ATHENA RESULTS BUCKET NAME]/*
      Roles:
        - !Ref KadaAthenaRole
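If you do assume the role, a minimal boto3 sketch for verifying access is shown below; it assumes the KadaAthenaRole created above and uses placeholder values for the account ID and region.
import boto3

# Assume the KADA role created above; the role ARN and session name are placeholders.
sts = boto3.client('sts')
assumed = sts.assume_role(
    RoleArn='arn:aws:iam::[ACCOUNT ID]:role/KadaAthenaRole',
    RoleSessionName='kada-athena-access-check'
)
credentials = assumed['Credentials']

# Create an Athena client with the temporary credentials and confirm the
# role can list the work groups it needs to extract from.
athena = boto3.client(
    'athena',
    region_name='ap-southeast-2',  # replace with your region
    aws_access_key_id=credentials['AccessKeyId'],
    aws_secret_access_key=credentials['SecretAccessKey'],
    aws_session_token=credentials['SessionToken']
)
print(athena.list_work_groups())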
Step 2: Create the Source in K
Create an Athena source in K
Go to Settings, Select Sources and click Add Source
Select “Load from File system” option
Give the source a Name - e.g. Athena Production
Add the Host name for the Athena Server
Click Finish Setup
Step 3: Getting Access to the Source Landing Directory
Step 4: Install the Collector
It is recommended to use a python environment such as pyenv or pipenv if you are not intending to install this package at the system level.
Some python packages also have dependencies on OS level packages, so you may need to install additional OS packages if the install below fails.
You can download the Latest Core Library and Athena whl via Platform Settings → Sources → Download Collectors
Run the following command to install the collector
pip install kada_collectors_extractors_<version>-none-any.whl
You will also need to install the common library kada_collectors_lib for this collector to function properly.
pip install kada_collectors_lib-<version>-none-any.whl
Under the covers this uses boto3 and may have OS dependencies; see Quickstart - Boto3 1.35.63 documentation.
Step 5: Configure the Collector
The collector requires a set of parameters to connect to and extract metadata from Athena
FIELD | FIELD TYPE | DESCRIPTION | EXAMPLE |
---|---|---|---|
key | string | Key for the AWS user | “xcvsdsdfsdf” |
secret | string | Secret for the AWS user | “sgsdfdsfg” |
server | string | This is the host that was onboarded in K for Athena | “athena.cloud” |
bucket | string | Bucket location to temporarily store Athena query results. The extractor will use the service user to execute queries and store results in this bucket location; it should be the full path starting with s3:// | “s3://mybucket/myathenaresults” |
catalogs | list<string> | List of catalogs to extract from Athena; in most cases this is only AwsDataCatalog unless you have self managed catalogs. | [“AwsDataCatalog”] |
region | string | Set the region for AWS for where Athena exists | ap-southeast-2 |
role | string | If your access requires role assumption, place the full arn value here, otherwise leave it blank | “” |
output_path | string | Absolute path to the output location where files are to be written | “/tmp/output” |
mask | boolean | To enable masking or not | true |
compress | boolean | To gzip the output or not | true |
These parameters can be passed directly into the run method, or you can pass them in via a JSON file. The following is an example you can use; it is referenced in the example run code below.
kada_athena_extractor_config.json
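The following is a minimal sketch of the configuration file, populated with the placeholder values from the table above; replace them with values for your environment.
{
    "key": "xcvsdsdfsdf",
    "secret": "sgsdfdsfg",
    "server": "athena.cloud",
    "bucket": "s3://mybucket/myathenaresults",
    "catalogs": ["AwsDataCatalog"],
    "region": "ap-southeast-2",
    "role": "",
    "output_path": "/tmp/output",
    "mask": true,
    "compress": true
}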
Step 6: Run the Collector
The following code is an example of how to run the extractor. You may need to uplift this code to meet any code standards at your organisation.
This can be executed in any python environment where the whl has been installed. It will produce and read a high water mark file called athena_hwm.txt from the same directory as the execution and produce output files according to the configuration JSON.
This is the wrapper script: kada_athena_extractor.py
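A simplified sketch of such a wrapper is shown below. The import paths and the load_config, get_hwm and publish_hwm helpers are assumptions based on the common kada_collectors_lib pattern; treat the kada_athena_extractor.py shipped with the collector as the authoritative version.
import argparse

# NOTE: the exact import paths below are assumptions; confirm them against the
# kada_athena_extractor.py script shipped with the collector package.
from kada_collectors.extractors.utils import load_config, get_hwm, publish_hwm
from kada_collectors.extractors.athena import Extractor

parser = argparse.ArgumentParser(description='KADA Athena Extractor')
parser.add_argument('--config', '-c', default='kada_athena_extractor_config.json',
                    help='Path to the configuration JSON described in Step 5')
args = parser.parse_args()

# Read the last high water mark (athena_hwm.txt) and work out the extraction window.
start_hwm, end_hwm = get_hwm('athena')

# Instantiate the extractor with the keyword arguments from the config file and run it.
ext = Extractor(**load_config(args.config))
ext.run(**{'start_hwm': start_hwm, 'end_hwm': end_hwm})

# Persist the new high water mark so the next run only extracts new activity.
publish_hwm('athena', end_hwm)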
Advanced options:
If you wish to maintain your own high water mark files elsewhere, you can use the above section’s script as a guide on how to call the extractor. The configuration file is simply the keyword arguments in JSON format. Refer to this document for more information: Collector Integration General Notes | Storing HWM in another location
If you are handling external arguments of the runner yourself, you’ll need to consider additional items for the run method; see the sketch after the argument list below. Refer to this document for more information: Collector Integration General Notes | The run method
key: AWS Access Key.
secret: AWS Secret.
region: Region.
server: Athena host that was onboarded on K.
role: AWS Role ARN if required to assume a role, otherwise leave it blank.
bucket: s3 bucket used to temporarily store results, in the form s3://xxx.
catalogs: list of catalogs from Athena to extract; by default this is just AwsDataCatalog.
output_path: full or relative path to where the outputs should go.
compress: to gzip output files or not.
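As a rough sketch of this advanced pattern, assuming the same Extractor class as above, you can manage the high water mark yourself and pass the keyword arguments directly; load_my_hwm and save_my_hwm are hypothetical helpers you would provide.
from datetime import datetime, timezone

from kada_collectors.extractors.athena import Extractor  # assumed import path, as above

# Load the previous high water mark from your own store (database, object store, etc.).
start_hwm = load_my_hwm()                                 # hypothetical helper
end_hwm = datetime.now(timezone.utc).isoformat()

ext = Extractor(
    key='xcvsdsdfsdf',                                    # AWS access key
    secret='sgsdfdsfg',                                   # AWS secret
    region='ap-southeast-2',
    server='athena.cloud',
    role='',                                              # role ARN if assuming a role
    bucket='s3://mybucket/myathenaresults',
    catalogs=['AwsDataCatalog'],
    output_path='/tmp/output',
    compress=True,
)
ext.run(start_hwm=start_hwm, end_hwm=end_hwm)

# Save the new high water mark back to your own store instead of athena_hwm.txt.
save_my_hwm(end_hwm)                                      # hypothetical helper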
Step 7: Check the Collector Outputs
K Extracts
A set of files (e.g. metadata, databaselog, linkages, events etc.) will be generated. These files will appear in the output_path directory you set in the configuration details.
High Water Mark File
A high water mark file called athena_hwm.txt is created in the same directory as the execution. This file is only produced if you call the publish_hwm method.
If you prefer file managed hwm, you can edit the location of the hwm by following these instructions: Collector Integration General Notes | Storing High Water Marks (HWM)
Step 8: Push the Extracts to K
Once the files have been validated, you can push the files to the K landing directory.
You can use Azure Storage Explorer if you want to initially do this manually. You can also push the files using python (see the Airflow example below).
Example: Using Airflow to orchestrate the Extract and Push to K
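A heavily simplified sketch of the pattern is shown below. It assumes the Step 6 wrapper is importable as a callable and that push_to_k_landing is a function you provide (for example, an upload of the files in output_path to the K landing directory); neither name comes from the collector package.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

# Both callables are placeholders: run_athena_extractor would wrap the Step 6 script,
# and push_to_k_landing would copy the files in output_path to the K landing directory.
from my_company.kada_jobs import run_athena_extractor, push_to_k_landing

default_args = {'retries': 1, 'retry_delay': timedelta(minutes=5)}

with DAG(
    dag_id='kada_athena_extract_and_push',
    start_date=datetime(2024, 1, 1),
    schedule_interval='@daily',
    catchup=False,
    default_args=default_args,
) as dag:
    extract = PythonOperator(task_id='extract_athena', python_callable=run_athena_extractor)
    push = PythonOperator(task_id='push_to_k', python_callable=push_to_k_landing)

    extract >> push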