
Hevo

This page walks through the setup of Hevo in K using the direct connect method.

Integration details

Scope                    Included    Comments
Metadata                 YES
Lineage                  YES
Usage                    NO          Usage is currently not captured for pipeline runs
Sensitive Data Scanner   N/A

Known limitations

  • Not all sources and destinations are currently covered. A future improvement is planned to cover sources & destinations generically.

  • Sources currently implemented:

    • AZURE_SQL

    • BIGQUERY

    • FTP

    • JIRA_CLOUD

    • MS_SQL

    • RESTAPI

  • Destinations currently supported:

    • SNOWFLAKE


Step 1) Create an API Key
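K connects to Hevo using an API key pair created in your Hevo account. If you want to confirm the key pair works before configuring K, the sketch below lists your Hevo pipelines using Python. The endpoint path and Basic-authentication scheme are assumptions about Hevo's public REST API and should be verified against Hevo's API documentation; the region host and credentials are placeholders.

# Minimal sketch to sanity-check a Hevo API key pair before adding the source to K.
# The endpoint and auth scheme are assumptions; verify against Hevo's API documentation.
import requests

HEVO_HOST = "https://us.hevodata.com"          # placeholder: use your Hevo region host
ACCESS_KEY = "<your-hevo-access-key>"          # created in the Hevo console
SECRET_KEY = "<your-hevo-secret-key>"

response = requests.get(
    f"{HEVO_HOST}/api/public/v2.0/pipelines",  # assumed public API endpoint
    auth=(ACCESS_KEY, SECRET_KEY),             # Basic auth with the key pair
    timeout=30,
)
response.raise_for_status()                    # a 2xx response means the key pair is accepted
print(response.json())

A successful response confirms the key pair can read pipeline metadata, which is the kind of access the K source load needs.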


Step 2) Add Hevo as a New Source

  • Select Platform Settings in the sidebar

  • In the pop-out side panel, under Integrations, click on Sources

  • Click Add Source and select Hevo (the connection details are sketched below)
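
When Hevo is selected, K will prompt for the connection details for your Hevo account. The exact fields depend on your version of K; purely as an illustration, they map to the Hevo region host and the API key pair created in Step 1 (all names and values below are hypothetical placeholders, not K's actual onboarding schema).

# Hypothetical illustration only: field names are not K's actual onboarding schema.
hevo_source_details = {
    "name": "hevo-production",                   # display name for the source in K
    "host": "us.hevodata.com",                    # placeholder: your Hevo region host
    "api_access_key": "<your-hevo-access-key>",   # from Step 1
    "api_secret_key": "<your-hevo-secret-key>",   # from Step 1
}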


Step 3) Schedule Hevo source load

  • Select Platform Settings in the sidebar

  • In the pop-out side panel, under Integrations, click on Sources

  • Locate your new Hevo source and click on the Schedule Settings (clock) icon to set the schedule

Note that a schedule change for a source can take up to 15 minutes to propagate.


Step 4) Manually run an ad hoc load to test Hevo setup

  • Next to your new Source, click on the Run manual load icon

  • Confirm how you want the source to be loaded

  • After the source load is triggered, a pop-up bar will appear that takes you to the Monitor tab in the Batch Manager page. This is the page you will usually visit to view the progress of source loads.


A manual source load will also require a manual run of the following jobs to load all metrics and indexes with the manually loaded metadata:

  • DAILY

  • GATHER_METRICS_AND_STATS

These can be found in the Batch Manager page.