How to sync a Hyperscale job
How to import a job from Continuous Compliance Engine
The POST /import endpoint is useful when you have a database masking job set up on a Continuous Compliance Engine and need to use the same masking inventory in a Hyperscale job. You can export the masking job details from a Continuous Compliance Engine and import them into the Hyperscale Compliance Orchestrator using the following steps.
1. Export the masking job from the Delphix Continuous Compliance Engine that needs to be imported on the Hyperscale Engine for dataset preparation. For more information about exporting a job, refer to Export the job.
2. After the job is exported, make a request on the Hyperscale Engine with the new /import API endpoint to upload the response blob, along with the optional data_info_settings for the source and target dataset. The following is an example of the request blob:
{
  "exportResponseMetadata": {
    "exportHost": "1.1.1.1",
    "exportDate": "Tue Sep 13 12:55:31 UTC 2022",
    "requestedObjectList": [
      {
        "objectIdentifier": {
          "id": 3
        },
        "objectType": "MASKING_JOB",
        "revisionHash": "2873bd283bd"
      }
    ],
    "exportedObjectList": [
      {
        "objectIdentifier": {
          "id": 2
        },
        "objectType": "SOURCE_DATABASE_CONNECTOR",
        "revisionHash": "8723bd8273b"
      },
      {
        "objectIdentifier": {
          "id": 4
        },
        "objectType": "DATABASE_CONNECTOR",
        "revisionHash": "273db2738vd"
      },
      {
        "objectIdentifier": {
          "id": 4
        },
        "objectType": "DATABASE_RULESET",
        "revisionHash": "f8c0997c804c"
      }
    ]
  },
  "blob": "983nd0239nd923ndf023nfd2p3nd923dn239dn293fn293fnb2",
  "signature": "923nd023nd02",
  "publicKey": "f203fn23fn203[fn230[f",
  "data_info_settings": [
    {
      "prop_key": "unload_split",
      "prop_value": "2",
      "apply_to": "SOURCE"
    },
    {
      "prop_key": "stream_size",
      "prop_value": "65536",
      "apply_to": "TARGET"
    }
  ]
}
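For illustration, the following is a minimal Python sketch of this upload using the requests library. The host, API version path, and Authorization header format are placeholder assumptions for your deployment, not confirmed values; consult the Hyperscale API documentation for the exact details.

import json
import requests

# Placeholders: substitute your orchestrator host, API version, and API key.
BASE_URL = "https://<hyperscale-host>/api/v3.0.0"
HEADERS = {
    "Authorization": "apk <your-api-key>",  # assumed auth header format
    "Content-Type": "application/json",
    # "passphrase": "<bundle-passphrase>",  # only for passphrase-protected bundles
}

# The exported bundle, i.e. the request blob shown above, saved to a file.
with open("export_bundle.json") as f:
    payload = json.load(f)

# verify=False is common for appliances with self-signed certificates;
# configure proper certificate verification in production.
resp = requests.post(f"{BASE_URL}/import", headers=HEADERS, json=payload, verify=False)
resp.raise_for_status()
print(resp.json())  # e.g. {"data_set_id": <id>}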
3. The Hyperscale Engine will then process the required data objects from the sync bundle and prepare the connector and data objects required for Hyperscale job creation.
4. The Hyperscale Engine will return a data object identifier that can be used as-is (after updating the passwords of the associated connectors) to create a Hyperscale job, or updated first if changes are needed before configuring a job. The following is an example of the response:
{
  "data_set_id": id
}
After a successful import, you must provide the passwords for the connectors manually. To do so, perform the following steps:
1. Get the newly created dataset using GET /data-sets/{dataSetId} to find the id of the newly created connector-info.
2. Copy the connector-info id and call GET /connector-info/{connectorInfoId}, then copy the response.
3. Use PUT /connector-info/{connectorInfoId} and, in the body, paste the GET response and add a password field with the password value in both the source and target sections to update the connector passwords.
If the bundle is passphrase protected, the same passphrase must be provided in the API header as "passphrase" while importing the bundle. For more information about how to export a passphrase-encrypted bundle, refer to the Export the object section.
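A minimal Python sketch of these three calls, reusing BASE_URL and HEADERS from the earlier sketch. The connector_id field name and the source/target body structure are assumptions inferred from the steps above; check the actual GET responses on your engine.

import requests

data_set_id = 1  # placeholder: the id returned by POST /import

# Step 1: fetch the dataset to find the newly created connector-info id.
data_set = requests.get(f"{BASE_URL}/data-sets/{data_set_id}",
                        headers=HEADERS, verify=False).json()
connector_id = data_set["connector_id"]  # assumed field name

# Step 2: fetch the connector-info body.
info = requests.get(f"{BASE_URL}/connector-info/{connector_id}",
                    headers=HEADERS, verify=False).json()

# Step 3: add the password fields to source and target, then PUT the body back.
info["source"]["password"] = "<source-password>"  # assumed structure
info["target"]["password"] = "<target-password>"  # assumed structure
resp = requests.put(f"{BASE_URL}/connector-info/{connector_id}",
                    headers=HEADERS, json=info, verify=False)
resp.raise_for_status()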
How to re-import a job from Continuous Compliance Engine
To update an existing dataset on the Hyperscale Compliance Orchestrator with a refreshed ruleset from the Continuous Compliance Engine, use the PUT /import/{dataSetId} endpoint. Export the masking job with the refreshed ruleset from the Delphix Continuous Compliance Engine and re-import the exported bundle into the Hyperscale Compliance Orchestrator, providing the existing dataSetId.
The data_info_settings provided in this update request will only apply to objects (in the export bundle) that are not already present in the existing dataset on the Hyperscale Orchestrator.
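A minimal Python sketch of the re-import call, reusing BASE_URL and HEADERS from the first sketch:

import json
import requests

existing_data_set_id = 1  # placeholder: id of the dataset created by the original import

# The re-exported bundle containing the refreshed ruleset.
with open("refreshed_bundle.json") as f:
    payload = json.load(f)

resp = requests.put(f"{BASE_URL}/import/{existing_data_set_id}",
                    headers=HEADERS, json=payload, verify=False)
resp.raise_for_status()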
Script to automatically import/re-import a job from Continuous Compliance Engine
Hyperscale provides a utility script to automate the steps of syncing a masking job's inventory from the Continuous Compliance Engine into the connector and dataset info of the Hyperscale Compliance Orchestrator. This utility script is bundled with the release tar file and can be found at <deployment_directory>/tools/import-scripts/.
How to sync global settings from a Delphix Continuous Compliance Engine
The POST /sync-compliance-engines endpoint is useful when you have global objects set up on a Continuous Compliance Engine and need to use the same global objects, such as algorithms, in a Hyperscale job. You can export the details of the global objects from a Continuous Compliance Engine and import them into the Hyperscale Compliance Orchestrator using the following steps.
1. Export the global settings from the Delphix Continuous Compliance Engine that need to be imported on the Hyperscale Clustered Continuous Compliance Engines. For more information about exporting global settings, refer to Syncing all Global Objects.
2. Once the bundle is exported, make a request on the Hyperscale Engine with the new /sync-compliance-engines endpoint to upload the response blob along with a list of Hyperscale Clustered Continuous Compliance Engines. For more information, refer to the /sync-compliance-engines API page. The following is an example of the request blob:
{
  "exportResponseMetadata": {
    "exportHost": "1.1.1.1",
    "exportDate": "Tue Sep 13 12:55:31 UTC 2022",
    "requestedObjectList": [
      {
        "objectIdentifier": {
          "id": "global"
        },
        "objectType": "GLOBAL_OBJECT",
        "revisionHash": "897weqwj76"
      }
    ],
    "exportedObjectList": [
      {
        "objectIdentifier": {
          "id": 12
        },
        "objectType": "PROFILE_EXPRESSION",
        "revisionHash": "7dc67asch8a"
      },
      {
        "objectIdentifier": {
          "id": "BIOMETRIC"
        },
        "objectType": "DOMAIN",
        "revisionHash": "7edb8ewbd8w"
      },
      {
        "objectIdentifier": {
          "algorithmName": "dlpx-core:Email SL"
        },
        "objectType": "USER_ALGORITHM",
        "revisionHash": "87h823d23d23"
      }
    ]
  },
  "blob": "39fdn23d9834fn3948f348fbw3pd9234nf9p4hf89",
  "signature": "7823hd823bd8",
  "publicKey": "892d3un293dn2p39db8283",
  "compliance_engine_ids": [
    1,
    2
  ]
}
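A minimal Python sketch of this upload, again reusing BASE_URL and HEADERS from the first sketch. The compliance_engine_ids inside the payload are presumably the ids of engines already registered with the orchestrator (an assumption based on the request blob above).

import json
import requests

# The exported global-objects bundle, i.e. the request blob shown above.
with open("global_objects_bundle.json") as f:
    payload = json.load(f)

resp = requests.post(f"{BASE_URL}/sync-compliance-engines",
                     headers=HEADERS, json=payload, verify=False)
resp.raise_for_status()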
After import, if the Hyperscale Clustered Continuous Compliance Engines already have the same objects with the same id or properties, those objects will be overwritten.
If the bundle is passphrase protected, the same passphrase must be provided in the header as "passphrase" while importing the bundle. For more information about how to export a passphrase-encrypted bundle, refer to the Export the object section.
Limitations
The Hyperscale Job Sync feature has the following limitations:
- The default maximum supported size for syncing a document or request is 50 MB. You can customize this by mounting a custom nginx.conf under the volumes of the proxy service in the docker-compose.yaml file and specifying client_max_body_size with the new value (see the sketch after this list). For more information, refer to Custom Configuration.
- Pre- and post-script import from the Continuous Compliance Engine to the Hyperscale Engine is not supported.
- Import of masking jobs based on Kerberos or custom JDBC driver connectors is not supported.
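A hypothetical fragment of such a custom nginx.conf; the 100m value is illustrative, and only the client_max_body_size directive is the point here. The rest of the stock proxy configuration should be retained.

http {
    # Raise the default 50 MB limit on sync request bodies (illustrative value).
    client_max_body_size 100m;

    # ...rest of the stock proxy configuration...
}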