Informatica (via Collector method) - v3.0.0
This collector is for Informatica versions prior to Informatica Intelligent Cloud Services (IICS)
About Collectors
Pre-requisites
Collector Server Minimum Requirements
Informatica Requirements
Informatica 9.1+ with repository hosted in Oracle.
Access to Informatica Repository (see section below)
Establish Informatica Repository Access
Create an Oracle user with read access to all tables in the Informatica repository database.
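A minimal sketch of what this might look like (the kada_read user name and the <password> placeholder are illustrative; substitute your repository owner for 'INF'):

```sql
-- Illustrative sketch: create a read-only Oracle user for the Informatica repository.
-- "kada_read" is a hypothetical name; replace <password> and the owner 'INF'.
CREATE USER kada_read IDENTIFIED BY <password>;
GRANT CREATE SESSION TO kada_read;

-- Generate SELECT grants for every table owned by the repository owner:
SELECT 'GRANT SELECT ON ' || owner || '.' || table_name || ' TO kada_read;'
FROM all_tables
WHERE owner = 'INF';
```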
Establish Informatica Server Access
Create a user that has read access to the Informatica Server.
Step 1: Create the Source in K
Create an Informatica source in K
Go to Settings, select Sources and click Add Source
Select the “Load from File” option
Give the source a Name - e.g. Informatica Production
Add the Host name for the Informatica Server
Click Finish Setup
Step 2: Getting Access to the Source Landing Directory
Step 3: Install the Collector
It is recommended to use a Python environment manager such as pyenv or pipenv if you do not intend to install this package at the system level.
Some Python packages also depend on OS-level packages, so you may need to install additional OS packages if the install below fails.
You can download the Latest Core Library and whl via Platform Settings → Sources → Download Collectors
Run the following command to install the collector
pip install kada_collectors_extractors_<version>-none-any.whl
You will also need to install the common library kada_collectors_lib for this collector to function properly.
pip install kada_collectors_lib-<version>-none-any.whl
You may also require an ODBC package for the OS, as well as an Oracle client library package if you do not already have one; see Oracle Instant Client - Free tools and libraries for connecting to Oracle Database | Oracle Australia
Step 4: Generate runtime mappings
In your environment you may be using runtime overrides for parameters in your Informatica jobs. KADA uses the runtime overrides to resolve lineage for parameter-driven jobs.
Use the script below to generate infacmd commands to extract session logs in XML format.
Replace any < > placeholders with values for your Informatica environment.
select
'call infacmd.bat isp getsessionlog -dn <INFORMATICA_DOMAIN> -hp <HOST>:<PORT> -un <SERVER USERNAME> -pd <SERVER PASSWORD> -is <SERVERNAME> -rs <REPO NAME> -ru <REPO USERNAME> -rp <REPO PASSWORD> -fm xml -fn ' || ws.subject_area || ' -wf ' || ws.workflow_name || ' -ss ' || CASE WHEN hierarchy_structure is null then ws.instance_name ELSE '"' || substr(hierarchy_structure, 2) || '"' END || ' -lo <C:\\output\\path\\for\\logs\\>' || ws.workflow_id || '_' || ws.task_id || '_' || ws.instance_id as cmd
from (
SELECT ti.instance_name,
ti.task_id,
ti.version_number,
wws.instance_id,
wf.workflow_id,
wf.workflow_name,
wf.workflow_comments,
wf.server_name,
wf.subject_area,
hierarchy_structure,
path
FROM (
select path
, TO_NUMBER(substr(path, 2, instr(path,'/',1, 2)-2)) as workflow_id
, TO_NUMBER(substr(path, -instr(reverse(path),'/', 1, 2)+1, instr(reverse(path),'/', 1, 2)-2)) as task_id
, hierarchy_structure
, instance_id
from (SELECT DISTINCT '/' || temp1.task_id AS path
, temp1.task_name AS hierarchy_structure
, 0 as instance_id
FROM opb_task temp1, opb_subject temp2
WHERE temp1.subject_id = temp2.subj_id
AND temp1.task_type = 71 -- workflows
UNION ALL
SELECT DISTINCT temp1.path
, temp1.task_name AS hierarchy_structure
, instance_id
FROM (SELECT opb_task_inst.workflow_id, opb_task_inst.task_id, opb_task_inst.instance_id, LEVEL depth,
SYS_CONNECT_BY_PATH(opb_task_inst.workflow_id ,'/') || '/' || opb_task_inst.task_id || '/' path,
SYS_CONNECT_BY_PATH(opb_task_inst.instance_name ,'/') task_name
FROM opb_task_inst
WHERE opb_task_inst.task_type IN (68,70)
START WITH workflow_id IN (select distinct w.workflow_id
from rep_workflows w
join rep_task_inst ti on w.workflow_id = ti.workflow_id
where ti.task_type_name = 'Worklet'
and w.subject_area not in ('<SUBJECT_AREAS_TO_EXCLUDE>')
)
CONNECT BY PRIOR opb_task_inst.task_id = opb_task_inst.workflow_id
) temp1,
opb_task temp2,
opb_subject temp3
WHERE temp2.subject_id = temp3.subj_id
AND temp2.task_id = SUBSTR(temp1.path,2, INSTR(temp1.path,'/', 1, 2) -2 )
ORDER BY path ASC )
where instance_id <> 0
) wws
JOIN rep_task_inst ti on ti.task_id = wws.task_id and ti.task_type = 68
JOIN REP_WORKFLOWS wf on wws.workflow_id = wf.workflow_id
UNION
SELECT ti.instance_name,
ti.task_id,
ti.version_number,
ti.instance_id,
wf.workflow_id,
wf.workflow_name,
wf.workflow_comments,
wf.server_name,
wf.subject_area,
'' as hierarchy_structure,
'' as path
FROM REP_WORKFLOWS wf
JOIN rep_task_inst ti on ti.workflow_id = wf.workflow_id and ti.task_type = 68
where wf.subject_area not in ('<SUBJECT_AREAS_TO_EXCLUDE>')
) ws
join (select distinct workflow_id as workflow_id from rep_wflow_run) active_wflows on ws.workflow_id = active_wflows.workflow_id
The commands can be combined in a bat script like the example below to dump out the latest log per session.
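For example, the generated commands might be collected into a bat file along these lines (the install path is illustrative; each call line is a row returned by the SQL above):

```bat
@echo off
REM Illustrative sketch: run the infacmd commands generated by the SQL above.
REM Adjust the path to your Informatica server bin directory.
cd <INFORMATICA_INSTALL_DIR>\server\bin
call infacmd.bat isp getsessionlog -dn <INFORMATICA_DOMAIN> -hp <HOST>:<PORT> -un <SERVER USERNAME> -pd <SERVER PASSWORD> -is <SERVERNAME> -rs <REPO NAME> -ru <REPO USERNAME> -rp <REPO PASSWORD> -fm xml -fn <FOLDER> -wf <WORKFLOW> -ss <SESSION> -lo C:\output\path\for\logs\<WORKFLOW_ID>_<TASK_ID>_<INSTANCE_ID>
REM ... one call line per workflow session returned by the query
```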
Use kada_informatica_runtime_parser.py to generate a runtime_session_overrides.json, which will be used by the Informatica extractor.
kada_informatica_runtime_parser.py
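As a rough sketch, invoking the parser might look like the following (the runtime_parser entry point and module path are assumptions; check what the installed whl actually exposes):

```python
# Assumed module path and entry point; verify against the installed package.
from kada_collectors.extractors.informatica import runtime_parser

# input_path: directory containing the XML session logs dumped above
# output_path: where runtime_session_overrides.json should be written
#              (point the extractor's input_path at this location)
runtime_parser(**{"input_path": "/tmp/logs", "output_path": "/tmp/input"})
```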
Step 5: Configure the Collector
The collector requires a set of parameters to connect to and extract metadata from Informatica.
| FIELD | FIELD TYPE | DESCRIPTION | EXAMPLE |
|---|---|---|---|
| username | string | Username to log into Oracle | “myuser” |
| password | string | Password to log into Oracle | |
| dsn | string | Data Source Name for Oracle; this can take forms such as <tnsname> | “preprod” |
| repo_owner | string | The owner of all the tables required by the extractor | “inf” |
| oracle_client_path | string | Full path to the location of the Oracle Client libraries | “/tmp/drivers/lib/oracleinstantclient_11_9” |
| cached | boolean | If set to true, it will prevent re-extracting data | false |
| input_path | string | Absolute path to the input location where the input files (session log XMLs and runtime_session_overrides.json) are stored | “/tmp/input” |
| output_path | string | Absolute path to the output location where files are to be written | “/tmp/output” |
| mask | boolean | Whether to enable masking | true |
| compress | boolean | Whether to gzip the output | true |
KADA provides an out-of-the-box script that reads a configuration JSON file and runs the extractor. Below is the configuration file.
kada_informatica_extractor_config.json
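A sketch of the configuration file, populated with the example values from the table above (substitute your own connection details):

```json
{
    "username": "myuser",
    "password": "",
    "dsn": "preprod",
    "repo_owner": "inf",
    "oracle_client_path": "/tmp/drivers/lib/oracleinstantclient_11_9",
    "cached": false,
    "input_path": "/tmp/input",
    "output_path": "/tmp/output",
    "mask": true,
    "compress": true
}
```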
Step 6: Run the Collector
The following code is an example of how to run the extractor. You may need to uplift this code to meet any code standards at your organisation.
This can be executed in any python environment where the whl has been installed.
This is the wrapper script: kada_informatica_extractor.py
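A minimal sketch of such a wrapper, assuming the common library exposes load_config, get_hwm, publish_hwm and get_generic_logger helpers and an Extractor class in an informatica module (verify these names against the installed whl):

```python
import os
import argparse

# Assumed helper and module names from the kada_collectors common library;
# verify them against the installed whl before use.
from kada_collectors.extractors.utils import load_config, get_hwm, publish_hwm, get_generic_logger
from kada_collectors.extractors.informatica import Extractor

get_generic_logger('root')  # use the root logger, or configure your own

_type = 'informatica'
dirname = os.path.dirname(__file__)
filename = os.path.join(dirname, 'kada_{}_extractor_config.json'.format(_type))

parser = argparse.ArgumentParser(description='KADA Informatica Extractor.')
parser.add_argument('--config', '-c', dest='config', default=filename,
                    help='Location of the configuration json; defaults to the config json in the same directory as this script.')
args = parser.parse_args()

# get_hwm reads the current high water mark; publish_hwm writes it back
# (this is what produces informatica_hwm.txt, see Step 7).
start_hwm, end_hwm = get_hwm(_type)

ext = Extractor(**load_config(args.config))
ext.run(**{"start_hwm": start_hwm, "end_hwm": end_hwm})

publish_hwm(_type, end_hwm)
```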
Advanced options:
If you wish to maintain your own high water mark files elsewhere, you can use the above section’s script as a guide on how to call the extractor. The configuration file is simply the keyword arguments in JSON format.
If you are handling the runner’s external arguments yourself, you’ll need to consider the following for the run method; see Collector Integration General Notes | Extractor run method:
username: username to sign into server
password: password to sign into server
dsn: server address
repo_owner: Oracle table owner
oracle_client_path: library path for the Oracle Instant Client
cached: Set to prevent re-extracting data
input_path: full or relative path to the directory containing the input files
output_path: full or relative path to where the outputs should go
compress: To gzip output files or not
The runtime parser can also be called in isolation (see the sketch in Step 4) with the following arguments:
input_path: full or relative path to the directory containing the input files
output_path: full or relative path to where the outputs should go
To edit the internal SQL being run refer to Collector Integration General Notes | Adding Custom SQL
Step 7: Check the Collector Outputs
K Extracts
A set of files (e.g. metadata, databaselog, linkages, events) will be generated. These files will appear in the output_path directory you set in the configuration details.
High Water Mark File
A high water mark file called informatica_hwm.txt is created in the same directory as the execution, and files are produced according to the configuration JSON. This file is only produced if you call the publish_hwm method.
If you prefer a file-managed HWM, you can edit the location of the HWM file by following these instructions: Collector Integration General Notes
Step 8: Push the Extracts to K
Once the files have been validated, you can push the files to the K landing directory.
You can use Azure Storage Explorer if you want to do this manually at first. You can also push the files using Python (see the Airflow example below).
Example: Using Airflow to orchestrate the Extract and Push to K
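A sketch of what such a DAG might look like, using a pair of PythonOperator tasks to run the extractor and then upload the outputs with azure-storage-blob (the K_LANDING_SAS_URL environment variable, paths, and DAG settings are all illustrative assumptions):

```python
import os
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

OUTPUT_PATH = "/tmp/output"  # must match output_path in the extractor config


def run_extractor():
    # Assumed helper and module names; verify against the installed whl.
    from kada_collectors.extractors.utils import load_config, get_hwm, publish_hwm
    from kada_collectors.extractors.informatica import Extractor

    start_hwm, end_hwm = get_hwm("informatica")
    ext = Extractor(**load_config("kada_informatica_extractor_config.json"))
    ext.run(start_hwm=start_hwm, end_hwm=end_hwm)
    publish_hwm("informatica", end_hwm)


def push_to_k():
    # Requires the azure-storage-blob package.
    from azure.storage.blob import ContainerClient

    # Hypothetical env var holding a SAS URL for the K landing container.
    container = ContainerClient.from_container_url(os.environ["K_LANDING_SAS_URL"])
    for name in os.listdir(OUTPUT_PATH):
        with open(os.path.join(OUTPUT_PATH, name), "rb") as fh:
            container.upload_blob(name=name, data=fh, overwrite=True)


with DAG(
    dag_id="kada_informatica_extract_and_push",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; use schedule_interval on older versions
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="run_extractor", python_callable=run_extractor)
    push = PythonOperator(task_id="push_to_k", python_callable=push_to_k)
    extract >> push
```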