K is a Data Knowledge platform enabling data discovery, knowledge management and data governance for all data users.

This page provides a brief introduction to K and highlights some of its key features that make it unique.


Introduction to K

K is a Data Knowledge platform for discovering, profiling and understanding how data products (data sets, analyses, reports, etc.) are used across an Enterprise.

K focuses on identifying and storing how users work with data, leveraging this information to enable data producers to improve their products, data owners to take accountability for the proper use of their data, and hidden knowledge to be scaled to all data workers. The product vision is to become the central platform for all Enterprise data users to easily discover, understand and govern the use of data.


K Services

Extractors
This service connects to data sources and tools and extracts and loads their metadata and logs. The extractors can also be deployed as a collector service for on-premise sources when using the K SaaS offering, if direct access between the on-premise source and the SaaS offering is not available.

Profiler
This service identifies and profiles data assets and their usage. A set of proprietary algorithms automatically matches and analyses data assets over their lifecycle.

Identity
This service integrates with the Enterprise Identity Management service to provide single sign-on.

Search
This service provides fast, accurate and contextual search for all assets within K.

Applications
This service provides access to dedicated applications built to solve specific data problems, e.g. migration assessment, impact assessment.

Interfaces

API
This interface is used by applications and services to interact with and access data managed by K.

Web Portal
This interface is used by end users (e.g. data managers, analysts) to access K and its services.

Notifications
This interface is used to engage end users via push notifications, e.g. email.
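As an illustration of how an application might call the API interface described above, the sketch below builds an authenticated request for a single data asset. The host, endpoint path and token are hypothetical, not K's documented API; consult the KADA API documentation for the real routes.

```python
# Illustrative only: BASE_URL, the /assets/{id} route and the token
# are invented for this sketch and are not K's documented API.
import urllib.request

BASE_URL = "https://k.example.com/api"  # hypothetical host


def build_asset_request(asset_id: str, token: str) -> urllib.request.Request:
    """Build (but do not send) an authenticated GET for one data asset."""
    return urllib.request.Request(
        f"{BASE_URL}/assets/{asset_id}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/json",
        },
    )


req = build_asset_request("sales.orders", "TOKEN")
print(req.full_url)  # https://k.example.com/api/assets/sales.orders
```

In a real integration the request would be sent with `urllib.request.urlopen(req)` (or an HTTP client of your choice) and the JSON response parsed.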

Stores

Metadata
The metadata store holds the details of, and relationships between, data assets, reports, users, teams and other objects within the data ecosystem.

Timeseries
The timeseries store records each data asset, person or content item and its lifecycle over time.

Index
Each object in the data ecosystem is added to a search index to enable the contextual search service.
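To make the role of the search index concrete, here is a toy inverted index over ecosystem objects. K's actual Search service is proprietary and far more sophisticated (ranking, context, trust signals); the object ids and descriptions below are invented.

```python
# Toy inverted index: map each token to the set of object ids that
# contain it, then answer multi-token queries with AND semantics.
from collections import defaultdict


def build_index(objects: dict[str, str]) -> dict[str, set[str]]:
    """Map each lowercased token to the object ids containing it."""
    index: dict[str, set[str]] = defaultdict(set)
    for obj_id, text in objects.items():
        for token in text.lower().split():
            index[token].add(obj_id)
    return index


def search(index: dict[str, set[str]], query: str) -> set[str]:
    """Return ids whose text contains every token in the query."""
    tokens = query.lower().split()
    if not tokens:
        return set()
    results = set(index.get(tokens[0], set()))
    for token in tokens[1:]:
        results &= index.get(token, set())
    return results


objects = {
    "table:sales.orders": "daily sales orders fact table",
    "report:revenue": "monthly revenue report from sales orders",
}
idx = build_index(objects)
print(sorted(search(idx, "sales orders")))
```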

Inputs

Data Sources
Data sources (e.g. Teradata, Hadoop, Snowflake, SQL Server) where data is stored and used by Enterprise data teams. K has integrators for many on-premise and cloud data sources and can also ingest custom data sources through the K ingestion framework.

Data Tools
Reporting and analytics applications (e.g. Tableau, Power BI) used by Enterprise data teams to create, manage and distribute content. K has integrators for common data tools and can also ingest custom data tools through the K ingestion framework.

Identity / SSO
Identity provider and user management sources (e.g. LDAP, SAML, OpenID Connect) that provide single sign-on and user and team data.

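For custom sources, the ingestion framework mentioned above consumes metadata extracts landed as files. The field names and file layout in this sketch are invented for illustration; the real K ingestion framework defines its own formats, so consult the KADA documentation before building an extract.

```python
# Sketch of landing a metadata payload for a custom data source.
# The JSON shape ("source", "tables", "columns") is hypothetical and
# NOT the real K ingestion format; it only illustrates the flow.
import json
import tempfile
from pathlib import Path


def write_landing_file(landing_dir: Path, source: str, tables: list[dict]) -> Path:
    """Write one JSON metadata file for an extract into the landing location."""
    payload = {"source": source, "tables": tables}
    out = landing_dir / f"{source}_metadata.json"
    out.write_text(json.dumps(payload, indent=2))
    return out


landing = Path(tempfile.mkdtemp())
path = write_landing_file(
    landing,
    "legacy_crm",
    [{"name": "customers", "columns": ["id", "email", "created_at"]}],
)
print(path.name)  # legacy_crm_metadata.json
```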

There are two options for deploying the K platform: 1) on your cloud; or 2) using the KADA hosted platform.

The following document covers deploying the K platform in your cloud. Contact KADA for more details about the SaaS option.

Your cloud

K is deployed in your cloud using a Kubernetes service.

Typical Kubernetes services used to deploy K include OpenShift, AWS Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS) and Google Kubernetes Engine (GKE).

The following diagram outlines how K is deployed in a typical Enterprise environment.

Kubernetes Service

Nodes
K is deployed across a number of nodes (minimum of 4 nodes). Each node requires a minimum of 4 vCPU and 16 GB of memory. Contact the KADA team to work through the right sizing for your data ecosystem.

Example specifications for cloud services include:

  • AWS Elastic Kubernetes Service (EKS): m5.xlarge

  • Azure Kubernetes Service (AKS): D4as_v4

  • OpenShift OCP v3/v4

Object store
A location for landing files from data sources and data tools before processing by K. This location must be accessible by the Kubernetes Service.

The typical size for the Object store is 200 GB but may need to expand depending on your data retention needs.
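The node figures above imply the following back-of-the-envelope minimum cluster footprint (an actual sizing should still be confirmed with the KADA team):

```python
# Minimum cluster footprint implied by the stated figures:
# at least 4 nodes, each with 4 vCPU and 16 GB of memory.
MIN_NODES = 4
VCPU_PER_NODE = 4
MEM_GB_PER_NODE = 16

total_vcpu = MIN_NODES * VCPU_PER_NODE      # 16 vCPU
total_mem_gb = MIN_NODES * MEM_GB_PER_NODE  # 64 GB

print(f"Minimum cluster: {total_vcpu} vCPU, {total_mem_gb} GB memory")
```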

Other Components

KADA Repository
Your image registry connects to the KADA repository hosted externally (internet access required) to deploy and update the K platform. This approach enables you to quickly and easily download K updates.

Considerations

There are several considerations to check before setting up K in your Kubernetes environment.

Policies:

  • The Kubernetes service must have access to the Object Store.

  • Where the Kubernetes service is a cloud provider's managed service (e.g. AWS, GCP, Azure), cloud policies may need to be created to grant the service the right read/write permissions.

  • Please consult your Cloud Provider's documentation.
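As one example of such a cloud policy, on AWS the read/write permissions would typically be expressed as an IAM policy attached to the cluster's role. The sketch below shows the shape of such a policy as a Python dict; the bucket name is hypothetical, and the exact actions and resources should follow your cloud provider's documentation.

```python
# Shape of an AWS IAM policy granting a Kubernetes service role
# read/write access to the object-store landing bucket.
# The bucket name is hypothetical; adapt actions/resources to your setup.
import json

LANDING_BUCKET = "k-landing-bucket"  # hypothetical

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": [f"arn:aws:s3:::{LANDING_BUCKET}"],
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": [f"arn:aws:s3:::{LANDING_BUCKET}/*"],
        },
    ],
}

print(json.dumps(policy, indent=2))
```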

Internet Access:

The K platform itself does NOT need internet access; only your image registry requires internet access to pull platform updates from the KADA repository.


Data Discovery

K is your metadata copilot for finding the right data to use for any use case.

K's unified Search Experience enables business and technical users in your organisation to find the right data asset across databases, reporting platforms, ML feature stores, ETL tools and more.

To help optimise search results, K calculates a Trust Score that improves search relevancy and ensures the most suitable, up-to-date and relevant data asset is promoted.

Users can create Filters using all of the metadata collected to customise their search experience. Saved filters that save a user time can easily be shared with colleagues.

K's best-in-class automated Lineage Maps make it easy to navigate the data ecosystem and understand the upstream dependencies or downstream impacts of any data asset. Users can also personalise their map with custom filters, highlight trusted paths and drill through to knowledge.
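The downstream-impact idea behind a lineage map can be sketched as a graph walk. K builds these maps automatically with its own algorithms; the asset names and edges below are invented for illustration.

```python
# Toy lineage graph: edges point from an upstream asset to its
# downstream consumers. A breadth-first walk yields every asset
# impacted by a change to the starting asset.
from collections import deque

EDGES = {
    "raw.orders": ["staging.orders"],
    "staging.orders": ["mart.sales", "mart.returns"],
    "mart.sales": ["report.revenue"],
}


def downstream(asset: str) -> set[str]:
    """Return every asset reachable downstream of `asset`."""
    seen: set[str] = set()
    queue = deque(EDGES.get(asset, []))
    while queue:
        node = queue.popleft()
        if node not in seen:
            seen.add(node)
            queue.extend(EDGES.get(node, []))
    return seen


print(sorted(downstream("raw.orders")))
```

Walking the reversed edge map in the same way would give upstream dependencies instead of downstream impacts.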


Knowledge Management

K uses automated intelligence and smart engagement to build meaningful data profiles by analysing how your data is created and used.

From one Data Profile Page you can directly access all information specific to that data asset, including data issues, data quality, data lineage and more.

K automatically suggests glossary terms that can be linked to a Data Asset and can Auto Generate Descriptions through K.ai.

Crowdsourcing and collaboration are made easy through smart features that link your current workflow tools or Collaboration Channels (e.g. MS Teams, Slack or Discord). This blend of automation and crowdsourcing takes the boring manual work out of building your data catalog.

K knows who the top users of each data asset are. Through K, you can directly Ask Questions to Top Users that are most likely to know the answer. Similarly, when a decision, note, or change is added, K automatically notifies any recent users of the report, so they are kept in the loop.

Key Usage Metrics also help you understand how a data asset is used and where it is in its lifecycle.


Data Governance

K can help data governance teams fast-track the process to understand their data ecosystem and design the appropriate governance framework. Key features in K that help data governance teams include:

  • Detect PII - K has inbuilt Personally Identifiable Information scanners that can help quickly identify potential PII located in your data tables and columns.

  • Automate Data Tagging - Remove the manual nature of tagging data and automate the process through business rules to minimise data governance gaps.

  • Dashboards and Insights - Tailored dashboards can help target and simplify governance effort. Data Owners and Stewards can view key governance and data quality metrics for the data assets they own.

  • Data Change Timeline - When needed, data governance managers can drill into the historical data change timeline to investigate what happened or when data decisions were made.

  • Centralise DQ Results - Use K to capture DQ results from tools like Great Expectations and dbt. Alert the right people to DQ failures and leverage workflow tools like JIRA to address data problems.

  • Notify impacted stakeholders - Automate data governance change management by using K to identify and inform impacted stakeholders of any relevant changes implemented by the data governance team (e.g. updates to PII data policy).

  • Bulk Functions - Speed up the process to update data properties, link to collections, add tags and create lists through a range of K Bulk Functions.
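To illustrate the kind of scanning the Detect PII feature performs, here is a minimal regex-based check over sample column values. K's actual detection logic is proprietary and more thorough; the patterns below are simplistic and for illustration only.

```python
# Minimal regex-based PII scan over sample column values.
# These patterns are illustrative only and will miss many real-world
# formats; K's inbuilt scanners are proprietary and more thorough.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[- ]?\d{3}[- ]?\d{4}\b"),
}


def scan_values(values: list[str]) -> set[str]:
    """Return the PII categories detected in a sample of column values."""
    found: set[str] = set()
    for value in values:
        for label, pattern in PII_PATTERNS.items():
            if pattern.search(value):
                found.add(label)
    return found


sample = ["alice@example.com", "555-867-5309", "widget-42"]
print(sorted(scan_values(sample)))  # ['email', 'phone']
```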