Installation and setup (Docker Compose)
Docker Compose deployments of Hyperscale Compliance will be deprecated in January 2026. All versions will continue to be supported until January 2026; however, Kubernetes is now the primary deployment method for Hyperscale Compliance. You can find more information on this Delphix Community page.
Note: The original deprecation date of January 2025 has been postponed to January 2026. This extension provides additional time to transition to an alternative deployment method.
This section describes the steps you must perform to install the Hyperscale Compliance Orchestrator.
Hyperscale Compliance installation
Prerequisites
Ensure the following requirements are met before installing the Hyperscale Compliance Orchestrator.
Download the Hyperscale tar file (delphix-hyperscale-masking-x.0.0.tar.gz) from download.delphix.com, where x.0.0 should be changed to the version of Hyperscale being installed.
Create a user that has permission to install Docker and Docker Compose.
Install Docker on the VM. The minimum supported Docker version is 20.10.7.
Install Docker Compose on the VM. The minimum supported Docker Compose version is 1.29.2.
Check that Docker and Docker Compose are installed by running the following commands:
docker-compose -v
The above command displays output similar to the following:
docker-compose version 1.29.2, build 5becea4c
docker -v
The above command displays output similar to the following:
Docker version 20.10.7, build 3967b7d
(Only required for Oracle load service) Download and install the Linux-based Oracle Instant Client on the machine where the Hyperscale Compliance Orchestrator will be installed. The client must include instantclient-basic (Oracle shared libraries) along with instantclient-tools, which contains Oracle's SQL*Loader client. Both packages, instantclient-basic and instantclient-tools, must be unzipped into the same directory. A group ownership ID of 50 with a permission mode of 550, or a user ID of 65436 with a permission mode of 500, must be set recursively on the directory where the Oracle Instant Client binaries/libraries will be installed. This is required so that the Hyperscale Compliance Orchestrator can read and execute from the directory (see the example after these prerequisites).
Oracle Load does not support Object Identifiers (OIDs).
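For example, here is a minimal sketch of preparing the Instant Client directory. The zip file names and the /opt/oracle/instantclient_21_5 path are placeholders for illustration (the same path is used in the Oracle docker-compose example later in this section); adjust them to the client version you download.
# Unzip both Instant Client packages into the same directory (example file names and path)
sudo mkdir -p /opt/oracle
sudo unzip instantclient-basic-linux.x64-21.5.0.0.0dbru.zip -d /opt/oracle
sudo unzip instantclient-tools-linux.x64-21.5.0.0.0dbru.zip -d /opt/oracle

# Option 1: group ownership ID 50 with permission mode 550, applied recursively
sudo chgrp -R 50 /opt/oracle/instantclient_21_5
sudo chmod -R 550 /opt/oracle/instantclient_21_5

# Option 2 (alternative): user ID 65436 with permission mode 500, applied recursively
# sudo chown -R 65436 /opt/oracle/instantclient_21_5
# sudo chmod -R 500 /opt/oracle/instantclient_21_5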
Procedure
Perform the following procedure to install the Hyperscale Compliance Orchestrator.
Unpack the Hyperscale tar file (where x.0.0 should be changed to the version of Hyperscale being installed).
tar -xzf delphix-hyperscale-masking-x.0.0.tar.gz
Upon unpacking, you will find the Docker image tar files categorized as below:
Universal images common for all connectors.
controller-service.tar
masking-service.tar
proxy.tar
Oracle (required only for Oracle data source masking)
unload-service.tar
load-service.tar
MSSQL (required only for MS SQL data source masking)
mssql-unload-service.tar
mssql-load-service.tar
File Connector (required only for Delimited and Parquet File masking)
file-connector-unload-service.tar
file-connector-load-service.tar
MongoDB (required only for MongoDB database masking)
mongo-unload-service.tar
mongo-load-service.tar
Each deployment set consists of five images (three universal images and two images specific to the dataset type). Load the required images into Docker as shown below:
For Oracle data source masking:
docker load --input unload-service.tar
docker load --input load-service.tar
docker load --input controller-service.tar
docker load --input masking-service.tar
docker load --input proxy.tar
For MS SQL data source masking:
docker load --input mssql-unload-service.tar
docker load --input mssql-load-service.tar
docker load --input controller-service.tar
docker load --input masking-service.tar
docker load --input proxy.tar
For Delimited and Parquet Files masking using file connector:
docker load --input file-connector-unload-service.tar
docker load --input file-connector-load-service.tar
docker load --input controller-service.tar
docker load --input masking-service.tar
docker load --input proxy.tar
For MongoDB data source masking:
docker load --input mongo-unload-service.tar
docker load --input mongo-load-service.tar
docker load --input controller-service.tar
docker load --input masking-service.tar
docker load --input proxy.tar
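After loading, you can optionally confirm that all required images are present. The command below uses standard Docker CLI tooling; the grep filter is only a convenience:
# List the loaded Hyperscale images to confirm the load succeeded
docker image ls | grep delphix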
Create an NFS shared mount that will act as a Staging Area on the Hyperscale Compliance Orchestrator host where the Hyperscale Compliance Orchestrator will perform read/write/execute operations.
Create a ‘Staging Area’ directory, for example /mnt/provision. Note that you do not need to create a mount-name directory as was required in previous releases. The shared path from the NFS server is the same as the value defined for the NFS_STORAGE_EXPORT_PATH configuration property. The user(s) within each of the Docker containers that are part of the Hyperscale Compliance Orchestrator, and the appliance OS user(s) in the Continuous Compliance Engine(s), all have user ID 65436 and group ownership ID 50. Therefore, the provision directory requires the following permissions, based on the UID/GID of the OS user, so that the Hyperscale Compliance Orchestrator and the Continuous Compliance Engine(s) can perform read/write/execute operations on the staging area (see the example following this list):
If the Hyperscale Compliance OS user has a UID of 65436, then the directory provision must have a UID of 65436 and permission mode 700.
If the Hyperscale Compliance OS user has a GID of 50 and does not have a UID of 65436, then the directory provision must have a GID of 50 and permission mode 770.
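For example, a minimal sketch of creating the staging directory and applying one of the two permission options above, using the /mnt/provision example path from this step:
# Create the staging area directory (example path)
sudo mkdir -p /mnt/provision

# If the Hyperscale Compliance OS user has UID 65436
sudo chown 65436 /mnt/provision
sudo chmod 700 /mnt/provision

# Otherwise, if the OS user has GID 50 (and not UID 65436)
# sudo chgrp 50 /mnt/provision
# sudo chmod 770 /mnt/provision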
Mount the NFS shared directory on the staging area directory (/mnt/provision). This NFS shared storage can be created and mounted in two ways, as detailed in the NFS Server Installation section. Based on the umask value of the user performing the mount, the permissions of the staging area directory may be altered after the NFS share has been mounted. In such cases, the permissions (770 or 700, whichever applies based on point 3a) must be applied again to the staging area directory, as shown in the example below.
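For example, a minimal sketch of mounting the share and re-applying the permissions; the NFS server host and export path are placeholders:
# Mount the NFS export onto the staging area directory (placeholders for server and export path)
sudo mount -t nfs4 <nfs_server_host>:<nfs_export_path> /mnt/provision

# Re-apply the permissions if the mount altered them (700/UID 65436 or 770/GID 50, whichever applies)
sudo chmod 700 /mnt/provision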
After unpacking the tar, you will find the following sample docker-compose files: docker-compose-sample.yaml, docker-compose-oracle-sample.yaml, docker-compose-mssql-sample.yaml, docker-compose-file-connector-sample.yaml, and docker-compose-mongo-sample.yaml. These sample files can be used to create a docker-compose.yaml file based on the connector you want to use. Configure the following Docker container volume bindings by editing the docker-compose.yaml:
For each of the Docker containers, except the 'proxy' container, add a volume entry under the 'volumes' section binding the staging area path (from 3(a); /mnt/hyperscale is used in the examples below) to the Hyperscale Compliance Orchestrator container path (/etc/hyperscale).
(Only required for Oracle load service) For the load-service Docker container, add a volume entry under the 'volumes' section that binds the host directory where both Oracle Instant Client packages were unzipped to the container path /usr/lib/instantclient.
(Only required for File Connector unload service) For the File Connector unload-service, the source NFS location has to be mounted to the container as a Docker volume so that it can access the source files. The path mounted on the container is passed during the creation of the source connector-info.
# Mount your source NFS location onto your Hyperscale Engine server
sudo mount [-t nfs4] <source_nfs_endpoint>:<source_nfs_location> <nfs_mount_on_host>
# Later mount <nfs_mount_on_host> as a docker volume to the file-connector unload-service container
# (in docker-compose.yaml, created using docker-compose-file-connector-sample.yaml)
unload-service:
  image: delphix-file-connector-unload-service-app:<HYPERSCALE VERSION>
  ...
  volumes:
    ...
    # Source files should be made available within the unload-service container file system
    # The paths within the container should be configured in the source section of connector-info [with type=FS]
    - <nfs_mount_on_host>:<source_files_mount_passed_during_connector_info_creation_path_in_container>
(Only required for File Connector load service) For the File Connector load-service, the target NFS location has to be mounted to the container as a Docker volume so that it can access the target location where masked files will be placed. The path mounted on the container is passed during the creation of the target connector-info.
# Mount your target NFS location onto your Hyperscale Engine server
sudo mount [-t nfs4] <target_nfs_endpoint>:<target_nfs_location> <target_nfs_mount_on_host>
# Later mount <target_nfs_mount_on_host> as a docker volume to the file-connector load-service container
# (in docker-compose.yaml, created using docker-compose-file-connector-sample.yaml)
load-service:
  image: delphix-file-connector-load-service-app:${VERSION}
  ...
  volumes:
    ...
    # Target location should be made available within the load-service container file system
    # The paths within the container should be configured in the target section of connector-info [with type=FS]
    - <target_nfs_mount_on_host>:<target_location_passed_during_connector_info_creation_in_container>
(Only required for File Connector when using the staging push feature) If using the staging push feature, you will need to provide source files on the NFS mount point that is mapped to the staging area. Alternatively, you can also provide format files along with the source files. If providing format files, set the environment variable USER_PROVIDED_FORMAT_FILE to true; by default, the pyarrow writer will create the format file. In the example below, /source/files/path and the staging area must share the same mount point.
unload-service:
  image: delphix-file-connector-unload-service-app:${VERSION}
  ...
  volumes:
    ...
    # Source location should be made available within the unload-service container file system
    # The paths within the container should be configured in the source section of connector-info [with type=FS]
    - /source/files/path:/etc/hyperscale/source
load-service:
  image: delphix-file-connector-load-service-app:${VERSION}
  ...
  volumes:
    ...
    # Target location should be made available within the load-service container file system
    # The paths within the container should be configured in the target section of connector-info [with type=FS]
    - /target/files/path:/etc/hyperscale/target
(Optional) Some data (for example, logs, configuration files, etc.) generated inside the Docker containers may be useful for debugging errors or exceptions while running hyperscale jobs, so it can be beneficial to persist it outside the containers. The following data can be persisted outside the Docker containers:
The logs generated for each service, i.e. the unload, controller, masking, and load services.
The sqlldr utility logs and control files at the /opt/sqlldr location in the load-service container.
The file-upload folder at /etc/hyperscale/uploads in the controller-service container.
If you would like to persist the above data on your host, set up volume bindings in the respective service in the docker-compose.yaml file that map locations inside the Docker containers to locations on the host, as indicated in the examples below. The host locations must again have a group ownership ID of 50 with a permission mode of 770, or a user ID of 65436 with a permission mode of 700, for the same reasons as highlighted in step 3a.
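For example, a minimal sketch of preparing host directories for the log bindings used in the compose examples below; the /home/hyperscale_user/logs paths are simply the example paths from those files:
# Create host directories for the persisted service logs (example paths from the compose samples)
mkdir -p /home/hyperscale_user/logs/{controller_service,unload_service,masking_service,load_service}
mkdir -p /home/hyperscale_user/logs/load_service/sqlldr

# Apply the same ownership/permission rules as the staging area (UID 65436 / mode 700 shown here)
sudo chown -R 65436 /home/hyperscale_user/logs
sudo chmod -R 700 /home/hyperscale_user/logs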
Here are examples of the docker-compose.yaml file for Oracle, MS SQL, MongoDB data sources, Delimited file masking, and Parquet file masking:
For Oracle data source masking:
version: "3.7"
services:
  controller-service:
    image: delphix-controller-service-app:${VERSION}
    healthcheck:
      test: 'curl --fail --silent http://localhost:8080/actuator/health | grep UP || exit 1'
      interval: 30s
      timeout: 25s
      retries: 3
      start_period: 30s
    depends_on:
      - unload-service
      - masking-service
      - load-service
    init: true
    networks:
      - hyperscale-net
    restart: unless-stopped
    volumes:
      - hyperscale-controller-data:/data
      # The orchestrator VM paths (left side of colon) used here are examples. Configure the respective mount paths.
      - /home/hyperscale_user/logs/controller_service:/opt/delphix/logs
      - /mnt/hyperscale:/etc/hyperscale
    environment:
      - API_KEY_CREATE=${API_KEY_CREATE:-false}
      - EXECUTION_STATUS_POLL_DURATION=${EXECUTION_STATUS_POLL_DURATION:-12000}
      - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=${LOG_LEVEL_CONTROLLER_SERVICE:-INFO}
      - API_VERSION_COMPATIBILITY_STRICT_CHECK=${API_VERSION_COMPATIBILITY_STRICT_CHECK:-false}
      - LOAD_SERVICE_REQUIREPOSTLOAD=${LOAD_SERVICE_REQUIRE_POST_LOAD:-true}
      - SKIP_UNLOAD_SPLIT_COUNT_VALIDATION=${SKIP_UNLOAD_SPLIT_COUNT_VALIDATION:-false}
      - SKIP_LOAD_SPLIT_COUNT_VALIDATION=${SKIP_LOAD_SPLIT_COUNT_VALIDATION:-false}
      - CANCEL_STATUS_POLL_DURATION=${CANCEL_STATUS_POLL_DURATION:-60000}
      - NFS_STORAGE_HOST=${NFS_STORAGE_HOST:-}
      - NFS_STORAGE_EXPORT_PATH=${NFS_STORAGE_EXPORT_PATH:-}
      - NFS_STORAGE_MOUNT_TYPE=${NFS_STORAGE_MOUNT_TYPE:-}
      - NFS_STORAGE_MOUNT_OPTION=${NFS_STORAGE_MOUNT_OPTION:-}
      - APPLICATION_NAME=${APPLICATION_NAME:-hs-staging-area}
  unload-service:
    image: delphix-unload-service-app:${VERSION}
    init: true
    environment:
      - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=${LOG_LEVEL_UNLOAD_SERVICE:-INFO}
      - UNLOAD_FETCH_ROWS=${UNLOAD_FETCH_ROWS:-10000}
    networks:
      - hyperscale-net
    restart: unless-stopped
    volumes:
      - hyperscale-unload-data:/data
      # The orchestrator VM paths (left side of colon) used here are examples. Configure the respective mount paths.
      - /mnt/hyperscale:/etc/hyperscale
      - /home/hyperscale_user/logs/unload_service:/opt/delphix/logs
  masking-service:
    image: delphix-masking-service-app:${VERSION}
    init: true
    networks:
      - hyperscale-net
    restart: unless-stopped
    volumes:
      - hyperscale-masking-data:/data
      # The orchestrator VM paths (left side of colon) used here are examples. Configure the respective mount paths.
      - /mnt/hyperscale:/etc/hyperscale
      - /home/hyperscale_user/logs/masking_service:/opt/delphix/logs
    environment:
      - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=${LOG_LEVEL_MASKING_SERVICE:-INFO}
      - INTELLIGENT_LOADBALANCE_ENABLED=${INTELLIGENT_LOADBALANCE_ENABLED:-true}
  load-service:
    image: delphix-load-service-app:${VERSION}
    init: true
    environment:
      - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=${LOG_LEVEL_LOAD_SERVICE:-INFO}
      - SQLLDR_BLOB_CLOB_CHAR_LENGTH=${SQLLDR_BLOB_CLOB_CHAR_LENGTH:-20000}
    networks:
      - hyperscale-net
    restart: unless-stopped
    volumes:
      - hyperscale-load-data:/data
      # The orchestrator VM paths (left side of colon) used here are examples. Configure the respective mount paths.
      - /mnt/hyperscale:/etc/hyperscale
      - /opt/oracle/instantclient_21_5:/usr/lib/instantclient
      - /home/hyperscale_user/logs/load_service:/opt/delphix/logs
      - /home/hyperscale_user/logs/load_service/sqlldr:/opt/sqlldr/
  proxy:
    image: delphix-hyperscale-masking-proxy:${VERSION}
    init: true
    networks:
      - hyperscale-net
    ports:
      - "443:443"
    restart: unless-stopped
    depends_on:
      - controller-service
    #volumes:
      # Uncomment to bind mount /etc/config
      #- /nginx/config/path/on/host:/etc/config
networks:
  hyperscale-net:
volumes:
  hyperscale-load-data:
  hyperscale-unload-data:
  hyperscale-masking-data:
  hyperscale-controller-data:
For MS SQL data source masking:
version: "3.7"
services:
  controller-service:
    image: delphix-controller-service-app:${VERSION}
    healthcheck:
      test: 'curl --fail --silent http://localhost:8080/actuator/health | grep UP || exit 1'
      interval: 30s
      timeout: 25s
      retries: 3
      start_period: 30s
    depends_on:
      - unload-service
      - masking-service
      - load-service
    init: true
    networks:
      - hyperscale-net
    restart: unless-stopped
    volumes:
      - hyperscale-controller-data:/data
      # The orchestrator VM paths (left side of colon) used here are examples. Configure the respective mount paths.
      - /home/hyperscale_user/logs/controller_service:/opt/delphix/logs
      - /mnt/hyperscale:/etc/hyperscale
    environment:
      - API_KEY_CREATE=${API_KEY_CREATE:-false}
      - EXECUTION_STATUS_POLL_DURATION=${EXECUTION_STATUS_POLL_DURATION:-12000}
      - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=${LOG_LEVEL_CONTROLLER_SERVICE:-INFO}
      - API_VERSION_COMPATIBILITY_STRICT_CHECK=${API_VERSION_COMPATIBILITY_STRICT_CHECK:-false}
      - LOAD_SERVICE_REQUIREPOSTLOAD=${LOAD_SERVICE_REQUIRE_POST_LOAD:-true}
      - SKIP_UNLOAD_SPLIT_COUNT_VALIDATION=${SKIP_UNLOAD_SPLIT_COUNT_VALIDATION:-false}
      - SKIP_LOAD_SPLIT_COUNT_VALIDATION=${SKIP_LOAD_SPLIT_COUNT_VALIDATION:-false}
      - CANCEL_STATUS_POLL_DURATION=${CANCEL_STATUS_POLL_DURATION:-60000}
      - NFS_STORAGE_HOST=${NFS_STORAGE_HOST:-}
      - NFS_STORAGE_EXPORT_PATH=${NFS_STORAGE_EXPORT_PATH:-}
      - NFS_STORAGE_MOUNT_TYPE=${NFS_STORAGE_MOUNT_TYPE:-}
      - NFS_STORAGE_MOUNT_OPTION=${NFS_STORAGE_MOUNT_OPTION:-}
      - APPLICATION_NAME=${APPLICATION_NAME:-hs-staging-area}
  unload-service:
    image: delphix-mssql-unload-service-app:${VERSION}
    init: true
    environment:
      - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=${LOG_LEVEL_UNLOAD_SERVICE:-INFO}
      - UNLOAD_FETCH_ROWS=${UNLOAD_FETCH_ROWS:-10000}
      - SPARK_DATE_TIMESTAMP_FORMAT=${DATE_TIMESTAMP_FORMAT:-yyyy-MM-dd HH:mm:ss.SSSS}
    networks:
      - hyperscale-net
    restart: unless-stopped
    volumes:
      - hyperscale-unload-data:/data
      # The orchestrator VM paths (left side of colon) used here are examples. Configure the respective mount paths.
      - /mnt/hyperscale:/etc/hyperscale
      - /home/hyperscale_user/logs/unload_service:/opt/delphix/logs
  masking-service:
    image: delphix-masking-service-app:${VERSION}
    init: true
    networks:
      - hyperscale-net
    restart: unless-stopped
    volumes:
      - hyperscale-masking-data:/data
      # The orchestrator VM paths (left side of colon) used here are examples. Configure the respective mount paths.
      - /mnt/hyperscale:/etc/hyperscale
      - /home/hyperscale_user/logs/masking_service:/opt/delphix/logs
    environment:
      - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=${LOG_LEVEL_MASKING_SERVICE:-INFO}
      - INTELLIGENT_LOADBALANCE_ENABLED=${INTELLIGENT_LOADBALANCE_ENABLED:-true}
  load-service:
    image: delphix-mssql-load-service-app:${VERSION}
    init: true
    environment:
      - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=${LOG_LEVEL_LOAD_SERVICE:-INFO}
      - SQLLDR_BLOB_CLOB_CHAR_LENGTH=${SQLLDR_BLOB_CLOB_CHAR_LENGTH:-20000}
      - SPARK_DATE_TIMESTAMP_FORMAT=${DATE_TIMESTAMP_FORMAT:-yyyy-MM-dd HH:mm:ss.SSSS}
    networks:
      - hyperscale-net
    restart: unless-stopped
    volumes:
      - hyperscale-load-data:/data
      # The orchestrator VM paths (left side of colon) used here are examples. Configure the respective mount paths.
      - /mnt/hyperscale:/etc/hyperscale
      - /home/hyperscale_user/logs/load_service:/opt/delphix/logs
  proxy:
    image: delphix-hyperscale-masking-proxy:${VERSION}
    init: true
    networks:
      - hyperscale-net
    ports:
      - "443:443"
    restart: unless-stopped
    depends_on:
      - controller-service
    #volumes:
      # Uncomment to bind mount /etc/config
      #- /nginx/config/path/on/host:/etc/config
networks:
  hyperscale-net:
volumes:
  hyperscale-load-data:
  hyperscale-unload-data:
  hyperscale-masking-data:
  hyperscale-controller-data:
For File Connector masking (a sample file named docker-compose-file-connector-sample.yaml is available in the package):
# Copyright (c) 2021, 2024 by Delphix. All rights reserved.
version: "3.7"
services:
  controller-service:
    image: delphix-controller-service-app:${VERSION}
    healthcheck:
      test: 'curl --fail --silent http://localhost:8080/actuator/health | grep UP || exit 1'
      interval: 30s
      timeout: 25s
      retries: 3
      start_period: 30s
    depends_on:
      - unload-service
      - masking-service
      - load-service
    init: true
    networks:
      - hyperscale-net
    restart: unless-stopped
    volumes:
      - hyperscale-controller-data:/data
    environment:
      - API_KEY_CREATE=true
      - EXECUTION_STATUS_POLL_DURATION=120000
      - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=INFO
      - API_VERSION_COMPATIBILITY_STRICT_CHECK=false
      - LOAD_SERVICE_REQUIREPOSTLOAD=false
      - SKIP_UNLOAD_SPLIT_COUNT_VALIDATION=false
      - SKIP_LOAD_SPLIT_COUNT_VALIDATION=false
      - CANCEL_STATUS_POLL_DURATION=60000
      - SOURCE_KEY_FIELD_NAMES=unique_source_files_identifier
      # NFS storage configuration for staging area
      - NFS_STORAGE_HOST=${NFS_STORAGE_HOST:-}
      - NFS_STORAGE_EXPORT_PATH=${NFS_STORAGE_EXPORT_PATH:-}
      - NFS_STORAGE_MOUNT_TYPE=${NFS_STORAGE_MOUNT_TYPE:-}
      - NFS_STORAGE_MOUNT_OPTION=${NFS_STORAGE_MOUNT_OPTION:-}
      - APPLICATION_NAME=${APPLICATION_NAME:-hs-staging-area}
  unload-service:
    image: delphix-file-connector-unload-service-app:${VERSION}
    init: true
    networks:
      - hyperscale-net
    restart: unless-stopped
    volumes:
      - hyperscale-unload-data:/data
      # Staging area volume mount, here /mnt/parent_staging_area is used as an example
      - /mnt/parent_staging_area:/etc/hyperscale
      # Source files should be made available within the unload-service container file system
      # The paths within the container should be configured in the source section of connector-info [with type=FS]
      - /mnt/source_files:/mnt/source
      #- /mnt/source_files2:/mnt/source2
      # In case source is hadoop
      #- /path/to/keytab_file/hadoop.keytab:/app/hadoop.keytab
      #- /path/to/hadoop/core-site.xml:/app/hadoop/etc/hadoop/core-site.xml
      #- /path/to/etc/krb5.conf:/etc/krb5.conf
      # In case user want to provide external Hadoop client for pyarrow writer then they can provide
      #- /path/to/hadoop/client/hadoop:/app/hadoop
  masking-service:
    image: delphix-masking-service-app:${VERSION}
    init: true
    networks:
      - hyperscale-net
    restart: unless-stopped
    volumes:
      - hyperscale-masking-data:/data
      # Staging area volume mount, here /mnt/parent_staging_area is used as an example
      - /mnt/parent_staging_area:/etc/hyperscale
    environment:
      - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=INFO
      - INTELLIGENT_LOADBALANCE_ENABLED=true
  load-service:
    image: delphix-file-connector-load-service-app:${VERSION}
    init: true
    networks:
      - hyperscale-net
    restart: unless-stopped
    volumes:
      - hyperscale-load-data:/data
      # Staging area volume mount, here /mnt/parent_staging_area is used as an example
      - /mnt/parent_staging_area:/etc/hyperscale
      # Target location should be made available within the load-service container file system
      # The paths within the container should be configured in the target section of connector-info [with type=FS]
      - /mnt/target_files:/mnt/target
      #- /mnt/target_files2:/mnt/target2
      # In case target is hadoop
      #- /path/to/keytab_file/hadoop.keytab:/app/hadoop.keytab
      #- /path/to/hadoop/core-site.xml:/app/hadoop/etc/hadoop/core-site.xml
      #- /path/to/etc/krb5.conf:/etc/krb5.conf
      # In case user want to provide external Hadoop client for pyarrow writer then they can provide
      #- /path/to/hadoop/client/hadoop:/app/hadoop
  proxy:
    image: delphix-hyperscale-masking-proxy:${VERSION}
    init: true
    networks:
      - hyperscale-net
    ports:
      - "443:8443"
      - "80:8080"
    restart: unless-stopped
    depends_on:
      - controller-service
networks:
  hyperscale-net:
volumes:
  hyperscale-load-data:
  hyperscale-unload-data:
  hyperscale-masking-data:
  hyperscale-controller-data:
For MongoDB data source masking:
# Copyright (c) 2021, 2023 by Delphix. All rights reserved.
version: "3.7"
services:
  controller-service:
    build:
      context: controller-service
      args:
        - VERSION=${VERSION}
    image: delphix-controller-service-app:${VERSION}
    healthcheck:
      test: 'curl --fail --silent http://localhost:8080/actuator/health | grep UP || exit 1'
      interval: 30s
      timeout: 25s
      retries: 3
      start_period: 30s
    depends_on:
      - unload-service
      - masking-service
      - load-service
    init: true
    networks:
      - hyperscale-net
    restart: unless-stopped
    volumes:
      - hyperscale-controller-data:/data
      # The orchestrator VM paths (left side of colon) used here are examples. Configure the respective mount paths.
      - /home/hyperscale_user/logs/controller_service:/opt/delphix/logs
      - /mnt/hyperscale:/etc/hyperscale
    environment:
      - API_KEY_CREATE=${API_KEY_CREATE:-false}
      - EXECUTION_STATUS_POLL_DURATION=${EXECUTION_STATUS_POLL_DURATION:-120000}
      - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=${LOG_LEVEL_CONTROLLER_SERVICE:-INFO}
      - API_VERSION_COMPATIBILITY_STRICT_CHECK=${API_VERSION_COMPATIBILITY_STRICT_CHECK:-false}
      - LOAD_SERVICE_REQUIREPOSTLOAD=${LOAD_SERVICE_REQUIRE_POST_LOAD:-true}
      - SKIP_UNLOAD_SPLIT_COUNT_VALIDATION=${SKIP_UNLOAD_SPLIT_COUNT_VALIDATION:-false}
      - SKIP_LOAD_SPLIT_COUNT_VALIDATION=${SKIP_LOAD_SPLIT_COUNT_VALIDATION:-false}
      - CANCEL_STATUS_POLL_DURATION=${CANCEL_STATUS_POLL_DURATION:-60000}
      - NFS_STORAGE_HOST=${NFS_STORAGE_HOST:-}
      - NFS_STORAGE_EXPORT_PATH=${NFS_STORAGE_EXPORT_PATH:-}
      - NFS_STORAGE_MOUNT_TYPE=${NFS_STORAGE_MOUNT_TYPE:-}
      - NFS_STORAGE_MOUNT_OPTION=${NFS_STORAGE_MOUNT_OPTION:-}
      - APPLICATION_NAME=${APPLICATION_NAME:-hs-staging-area}
      - SOURCE_KEY_FIELD_NAMES=database_name,collection_name
      - VALIDATE_UNLOAD_ROW_COUNT_FOR_STATUS=${VALIDATE_UNLOAD_ROW_COUNT_FOR_STATUS:-false}
      - VALIDATE_MASKED_ROW_COUNT_FOR_STATUS=${VALIDATE_MASKED_ROW_COUNT_FOR_STATUS:-false}
      - VALIDATE_LOAD_ROW_COUNT_FOR_STATUS=${VALIDATE_LOAD_ROW_COUNT_FOR_STATUS:-false}
      - DISPLAY_BYTES_INFO_IN_STATUS=${DISPLAY_BYTES_INFO_IN_STATUS:-true}
      - DISPLAY_ROW_COUNT_IN_STATUS=${DISPLAY_ROW_COUNT_IN_STATUS:-false}
  unload-service:
    build:
      context: unload-service
      args:
        - VERSION=${VERSION}
    image: delphix-mongo-unload-service-app:${VERSION}
    init: true
    environment:
      - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=${LOG_LEVEL_UNLOAD_SERVICE:-INFO}
      - UNLOAD_FETCH_ROWS=${UNLOAD_FETCH_ROWS:-10000}
      - CONCURRENT_EXPORT_LIMIT=${CONCURRENT_EXPORT_LIMIT:-10}
      - HIKARI_MAX_LIFE_TIME=${UNLOAD_HIKARI_MAX_LIFE_TIME:-1800000}
      - HIKARI_KEEP_ALIVE_TIME=${UNLOAD_HIKARI_KEEP_ALIVE_TIME:-300000}
      - FILE_DELIMITER=${FILE_DELIMITER:-,}
      - FILE_ENCLOSURE=${FILE_ENCLOSURE:-"}
      - FILE_ESCAPE_ENCLOSURE=${FILE_ESCAPE_ENCLOSURE:-"}
    networks:
      - hyperscale-net
    restart: unless-stopped
    volumes:
      - hyperscale-unload-data:/data
      # The orchestrator VM paths (left side of colon) used here are examples. Configure the respective mount paths.
      - /mnt/hyperscale:/etc/hyperscale
      - /home/hyperscale_user/logs/unload_service:/opt/delphix/logs
  masking-service:
    build:
      context: masking-service
      args:
        - VERSION=${VERSION}
    image: delphix-masking-service-app:${VERSION}
    init: true
    networks:
      - hyperscale-net
    restart: unless-stopped
    volumes:
      - hyperscale-masking-data:/data
      # The orchestrator VM paths (left side of colon) used here are examples. Configure the respective mount paths.
      - /mnt/hyperscale:/etc/hyperscale
      - /home/hyperscale_user/logs/masking_service:/opt/delphix/logs
    environment:
      - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=${LOG_LEVEL_MASKING_SERVICE:-INFO}
      - INTELLIGENT_LOADBALANCE_ENABLED=${INTELLIGENT_LOADBALANCE_ENABLED:-true}
  load-service:
    build:
      context: load-service
      args:
        - VERSION=${VERSION}
    image: delphix-mongo-load-service-app:${VERSION}
    init: true
    environment:
      - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=${LOG_LEVEL_LOAD_SERVICE:-INFO}
      - SQLLDR_BLOB_CLOB_CHAR_LENGTH=${SQLLDR_BLOB_CLOB_CHAR_LENGTH:-20000}
      - HIKARI_MAX_LIFE_TIME=${LOAD_HIKARI_MAX_LIFE_TIME:-1800000}
      - HIKARI_KEEP_ALIVE_TIME=${LOAD_HIKARI_KEEP_ALIVE_TIME:-300000}
    networks:
      - hyperscale-net
    restart: unless-stopped
    volumes:
      - hyperscale-load-data:/data
      # The orchestrator VM paths (left side of colon) used here are examples. Configure the respective mount paths.
      - /mnt/hyperscale:/etc/hyperscale
      - /home/hyperscale_user/logs/load_service:/opt/delphix/logs
  proxy:
    build: nginx
    image: delphix-hyperscale-masking-proxy:${VERSION}
    init: true
    networks:
      - hyperscale-net
    ports:
      - "443:443"
      - "80:80"
    restart: unless-stopped
    depends_on:
      - controller-service
    #volumes:
      # Uncomment to bind mount /etc/config
      #- /nginx/config/path/on/host:/etc/config
networks:
  hyperscale-net:
volumes:
  hyperscale-load-data:
  hyperscale-unload-data:
  hyperscale-masking-data:
  hyperscale-controller-data:
(Optional) To modify the default Hyperscale configuration properties for the application, see Configuration Settings.
Run the application from the same location where you extracted the docker-compose.yaml file, using the docker-compose up -d command.
To check if the application is running, use the docker-compose ps command. The output of this command should show five containers up and running.
To access the application logs of a given container, run the docker logs -f <service_container_name> command. The service container name can be obtained from the output of the docker-compose ps command.
Run the sudo docker-compose down command to stop the application (if required).
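As a quick reference, here is a consolidated sketch of that lifecycle using the commands above:
# Start the Hyperscale Compliance services in the background
docker-compose up -d

# Verify that all five containers are up and running
docker-compose ps

# Follow the logs of a specific container (name taken from 'docker-compose ps')
docker logs -f <service_container_name>

# Stop the application when required
sudo docker-compose down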
Once the application starts, an API key will be generated that is required to authenticate with the Hyperscale Compliance Orchestrator. This key is found in the Docker container logs of the controller service. You can either look for the key in the controller service logs location that was set as a volume binding in the docker-compose.yaml file, or use the docker logs -f <service_container_name> command to retrieve the logs. The service container name can be obtained from the output of the docker-compose ps command. The command displays output similar to the following, where the string NEWLY GENERATED API KEY can be found in the log:
2022-05-18 12:24:10.981 INFO 7 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2022-05-18 12:24:10.982 INFO 7 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 9699 ms
NEWLY GENERATED API KEY: 1.89lPH1dHSJQwHuQvzawD99sf4SpBPXJADUmJS8v00VCF4V7rjtRFAftGWygFfsqM
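If you prefer to extract the key directly, a simple grep over the controller-service container logs works; the container name below is a placeholder to be replaced with the name reported by docker-compose ps:
# Pull the generated API key out of the controller-service logs (container name is a placeholder)
docker logs <controller_service_container_name> 2>&1 | grep "NEWLY GENERATED API KEY"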
To authenticate with the Hyperscale Compliance Orchestrator, you must use the API key and include the HTTP Authorization request header with the type apk: apk <API Key>. For more information, see the Authentication section under Accessing the Hyperscale Compliance API.
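For illustration only, a hedged sketch of such a request using curl; the host and endpoint path are placeholders, and the -k flag should only be added if your deployment uses a self-signed certificate:
# Example request with the API key passed in the Authorization header (host and endpoint are placeholders)
curl -k -H "Authorization: apk <API Key>" https://<hyperscale-compliance-host>/<api-endpoint>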
Continuous Compliance Engine Installation
Delphix Continuous Compliance Engine is a multi-user, browser-based web application that provides complete, secure, and scalable software for your sensitive data discovery, masking, and tokenization needs while meeting enterprise-class infrastructure requirements. For information about installing the Continuous Compliance Engine, see Continuous Compliance Engine Installation documentation.