
Installation and setup (Docker Compose)

Delphix has announced the deprecation of support for Docker Compose with Hyperscale version 17.0.0. The January 2024 release starts a 12-month deprecation period for all supported versions on Docker Compose. All prior and current product versions will continue to be supported on Docker Compose until January 2025. It is highly recommended that new Hyperscale installations be performed on Kubernetes.

This section describes the steps you must perform to install the Hyperscale Compliance Orchestrator.

Hyperscale Compliance installation

Prerequisites

Ensure the following requirements are met before installing the Hyperscale Compliance Orchestrator.

  • Download the Hyperscale tar file (delphix-hyperscale-masking-x.0.0.tar.gz) from download.delphix.com, where x.0.0 should be changed to the version of Hyperscale being installed.

  • Create a user that has permission to install Docker and Docker Compose. 

  • Install Docker on the VM. The minimum supported Docker version is 20.10.7.

  • Install Docker Compose on the VM. The minimum supported Docker Compose version is 1.29.2.

  • Check that Docker and Docker Compose are installed by running the following commands:

    CODE
    docker-compose -v
    # Displays output similar to: docker-compose version 1.29.2, build 5becea4c
    docker -v
    # Displays output similar to: Docker version 20.10.7, build 3967b7d

  • [Only Required for Oracle Load Service] Download and install a Linux-based Oracle instant client on the machine where the Hyperscale Compliance Orchestrator will be installed. The client must include instantclient-basic (Oracle shared libraries) and instantclient-tools (containing Oracle’s SQL*Loader client), and both packages must be unzipped into the same directory. A group ownership id of 50 with a permission mode of 550, or a user id of 65436 with a permission mode of 500, must be set recursively on the directory where the instant client binaries/libraries are installed. This is required for the Hyperscale Compliance Orchestrator to be able to read and execute from the directory.
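As a sketch of the ownership and permission requirement above (directory paths and archive names are illustrative; a scratch directory stands in for the real install location, and chgrp 50 or chown 65436 would normally require root):

```shell
# Illustrative sketch: prepare an instant client directory with the required
# permissions. A scratch directory stands in for the real location
# (e.g. /opt/oracle/instantclient_21_5); archive names are examples.
CLIENT_DIR="$(mktemp -d)/instantclient"
mkdir -p "$CLIENT_DIR"
# Unzip BOTH packages into the SAME directory (requires the downloaded zips):
#   unzip instantclient-basic-linux.x64-*.zip -d "$CLIENT_DIR"
#   unzip instantclient-tools-linux.x64-*.zip -d "$CLIENT_DIR"
# Option 1: group ownership id 50 with mode 550 (chgrp commented out here,
# since it needs root):
#   chgrp -R 50 "$CLIENT_DIR"
chmod -R 550 "$CLIENT_DIR"
# Option 2 (alternative): user id 65436 with mode 500:
#   chown -R 65436 "$CLIENT_DIR" && chmod -R 500 "$CLIENT_DIR"
stat -c '%a' "$CLIENT_DIR"
```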

Oracle Load does not support Object Identifiers (OIDs).

Procedure

Perform the following procedure to install the Hyperscale Compliance Orchestrator.

  1. Unpack the Hyperscale tar file (where x.0.0 should be changed to the version of Hyperscale being installed).

tar -xzf delphix-hyperscale-masking-x.0.0.tar.gz

  2. Upon unpacking, you will find the Docker image tar files categorized as follows:

  • Universal images common for all connectors.

    • controller-service.tar

    • masking-service.tar

    • proxy.tar

  • Oracle (required only for Oracle data source masking)

    • unload-service.tar

    • load-service.tar

  • MSSQL (required only for MS SQL data source masking)

    • mssql-unload-service.tar

    • mssql-load-service.tar

  • Delimited Files (required only for Delimited Files masking)

    • delimited-unload-service.tar

    • delimited-load-service.tar

  • MongoDB (required only for MongoDB database masking)

    • mongo-unload-service.tar

    • mongo-load-service.tar

  • Parquet files (required only for Parquet file masking)

    • parquet-unload-service.tar

    • parquet-load-service.tar

Each deployment set consists of 5 images (3 Universal images and 2 images specific to the dataset type). Proceed to load the required images into Docker as follows:

  • For Oracle data source masking:

    CODE
    docker load --input unload-service.tar
    docker load --input load-service.tar
    docker load --input controller-service.tar
    docker load --input masking-service.tar
    docker load --input proxy.tar
  • For MS SQL data source masking:

    CODE
    docker load --input mssql-unload-service.tar
    docker load --input mssql-load-service.tar
    docker load --input controller-service.tar
    docker load --input masking-service.tar
    docker load --input proxy.tar
  • For Delimited Files masking:

    CODE
    docker load --input delimited-unload-service.tar
    docker load --input delimited-load-service.tar
    docker load --input controller-service.tar
    docker load --input masking-service.tar
    docker load --input proxy.tar
  • For MongoDB data source masking:

    CODE
    docker load --input mongo-unload-service.tar
    docker load --input mongo-load-service.tar
    docker load --input controller-service.tar
    docker load --input masking-service.tar
    docker load --input proxy.tar
  • For Parquet masking:

    CODE
    docker load --input parquet-unload-service.tar
    docker load --input parquet-load-service.tar
    docker load --input controller-service.tar
    docker load --input masking-service.tar
    docker load --input proxy.tar
  3. Create an NFS shared mount that will act as a Staging Area on the Hyperscale Compliance Orchestrator host, where the Hyperscale Compliance Orchestrator will perform read/write/execute operations:

    a. Create a ‘Staging Area’ directory. For example: /mnt/hyperscale/staging_area. The users within each of the docker containers that are part of the Hyperscale Compliance Orchestrator, and the appliance OS users on the Continuous Compliance Engine(s), all have a user id of 65436 and/or a group ownership id of 50. The ‘staging_area’ directory, along with the directory one level above it (hyperscale), therefore requires the following permissions, based on the UID/GID of the OS user, so that the Hyperscale Compliance Orchestrator and the Continuous Compliance Engine(s) can perform read/write/execute operations on the staging area:

    b. If the Hyperscale Compliance OS user has a UID of 65436, then the ‘staging_area’ directory, along with the directory (hyperscale) one level above, must have a UID of 65436 and a permission mode of 700.

    c. If the Hyperscale Compliance OS user has a GID of 50 and does not have a UID of 65436, then the ‘staging_area’ directory, along with the directory (hyperscale) one level above, must have a GID of 50 and a permission mode of 770.

    d. Mount the NFS shared directory on the staging area directory (/mnt/hyperscale/staging_area). This NFS shared storage can be created and mounted in two ways, as detailed in the NFS Server Installation section. Depending on the umask value of the user performing the mount, the permissions of the staging area directory may be altered after the NFS share is mounted. In such cases, the permissions (770 or 700, whichever applies per the UID/GID conditions above) must be re-applied to the staging area directory.

The directory created in step 3a (‘staging_area’) will be provided as the mountName and the corresponding shared path from the NFS file server as the mountPath in the MountFileSystems API.
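The staging area setup in step 3 can be sketched as follows (a scratch directory stands in for the NFS-backed /mnt/hyperscale, since creating it and changing ownership would normally require root):

```shell
# Illustrative sketch of step 3: create the staging area with the required
# permissions. A scratch directory stands in for /mnt/hyperscale.
PARENT="$(mktemp -d)/hyperscale"
mkdir -p "$PARENT/staging_area"
# UID case: the OS user has UID 65436 -> both directories get mode 700
# (chown -R 65436 would additionally be needed when run as root).
chmod 700 "$PARENT" "$PARENT/staging_area"
# GID case (alternative): the OS user has GID 50 and not UID 65436 ->
#   chgrp -R 50 "$PARENT" && chmod 770 "$PARENT" "$PARENT/staging_area"
# Then mount the NFS share onto the staging area (see NFS Server Installation):
#   sudo mount -t nfs4 <nfs_server>:<shared_path> "$PARENT/staging_area"
# The mount can alter permissions depending on the mounting user's umask, so
# re-check and re-apply 700/770 afterwards if needed.
stat -c '%a' "$PARENT/staging_area"
```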

  4. After unpacking the tar, you will find the following sample docker-compose files: docker-compose-sample.yaml, docker-compose-oracle-sample.yaml, docker-compose-mssql-sample.yaml, docker-compose-delimitedfiles-sample.yaml, docker-compose-mongo-sample.yaml, and docker-compose-parquet-sample.yaml. These sample files can be used to create a docker-compose.yaml file based on the connector you want to use. Configure the following docker container volume bindings by editing docker-compose.yaml:

    1. For each of the docker containers, except the ‘proxy’ container, add a volume entry under the ‘volumes’ section binding the staging area path from step 3a (/mnt/hyperscale) to the Hyperscale Compliance Orchestrator container path (/etc/hyperscale).

    2. [Only Required for Oracle Load Service] For the load-service docker container, add a volume entry under the ‘volumes’ section that binds the host directory where both Oracle instant client packages were unzipped to the container path /usr/lib/instantclient.

    3. [Only Required for Delimited Unload Service] For the Delimited Files unload-service, the source NFS location has to be mounted to the container as a docker volume so that the service can access the source files. The path mounted on the container is passed during the creation of the source connector-info.

      CODE
      # Mount your source NFS location onto your Hyperscale Engine server
      sudo mount [-t nfs4] <source_nfs_endpoint>:<source_nfs_location> <nfs_mount_on_host>
      CODE
      # Later mount <nfs_mount_on_host> as a docker volume to the delimited unload-service container (in docker-compose.yaml, created using docker-compose-delimitedfiles-sample.yaml)
      unload-service:
        image: delphix-delimited-unload-service-app:<HYPERSCALE VERSION>
           ...
           volumes:
                ...
           # Source files should be made available within the unload-service container file system
           # The paths within the container should be configured in the source section of connector-info [with type=FS]
           - <nfs_mount_on_host>:<source_files_mount_passed_during_connector_info_creation_path_in_container>
    4. [Only Required for Delimited Load Service] For the Delimited Files load-service, the target NFS location has to be mounted to the container as a docker volume so that the service can access the target location where masked files will be placed. The path mounted on the container is passed during the creation of the target connector-info.

      CODE
      # Mount your target NFS location onto your Hyperscale Engine server
      sudo mount [-t nfs4] <target_nfs_endpoint>:<target_nfs_location> <target_nfs_mount_on_host>
      CODE
      # Later mount <target_nfs_mount_on_host> as a docker volume to the delimited load-service container (in docker-compose.yaml, created using docker-compose-delimitedfiles-sample.yaml)
      load-service:
        image: delphix-delimited-load-service-app:<HYPERSCALE VERSION>
           ...
           volumes:
                ...
           # Target location should be made available within the load-service container file system
           # The paths within the container should be configured in the target section of connector-info [with type=FS]
           - <target_nfs_mount_on_host>:<target_location_passed_during_connector_info_creation_in_container>
    5. [Optional] Some data generated inside the docker containers (for example, logs and configuration files) may be useful for debugging errors or exceptions while running Hyperscale jobs, so it may be beneficial to persist this data outside the docker containers. The following data can be persisted outside the docker containers:

      1. The logs generated for each service, i.e. the unload, controller, masking, and load services.

      2. The sqlldr utility logs and control files at the /opt/sqlldr location in the load-service container.

      3. The file-upload folder at /opt/delphix/uploads in the controller-service container.

If you would like to persist the above data on your host, you can do so by setting up volume bindings in the respective services in the docker-compose.yaml file, mapping locations inside the docker containers to locations on the host, as indicated below. The host locations must again have a group ownership id of 50 with a permission mode of 770, or a user id of 65436 with a permission mode of 700, for the same reasons as highlighted in step 3a.
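As a sketch, the host-side log directories might be pre-created like this (the base path is illustrative and mirrors the sample compose files; chgrp 50 or chown 65436 would normally require root, so only chmod is executed here):

```shell
# Illustrative sketch: pre-create host directories for persisted service logs
# and sqlldr output. A scratch base stands in for a real path such as
# /home/hyperscale_user/logs.
LOG_BASE="$(mktemp -d)/logs"
mkdir -p "$LOG_BASE/controller_service" "$LOG_BASE/unload_service" \
         "$LOG_BASE/masking_service" "$LOG_BASE/load_service/sqlldr"
# Group-ownership variant: mode 770 plus (as root) chgrp -R 50 "$LOG_BASE"
chmod -R 770 "$LOG_BASE"
# UID variant (alternative): chown -R 65436 "$LOG_BASE" with mode 700.
# These host paths then go on the left-hand side of the volume bindings,
# e.g.  - /home/hyperscale_user/logs/load_service/sqlldr:/opt/sqlldr/
ls "$LOG_BASE"
```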

Here are examples of the docker-compose.yaml file for Oracle, MS SQL, and MongoDB data sources, Delimited file masking, and Parquet file masking:

  • For Oracle data source masking:

    CODE
    version: "3.7"
    services:
      controller-service:
        image: delphix-controller-service-app:${VERSION}
        healthcheck:
          test: 'curl --fail --silent http://localhost:8080/actuator/health | grep UP || exit 1'
          interval: 30s
          timeout: 25s
          retries: 3
          start_period: 30s
        depends_on:
          - unload-service
          - masking-service
          - load-service
        init: true
        networks:
          - hyperscale-net
        restart: unless-stopped
        volumes:
          - hyperscale-controller-data:/data
          # The orchestrator VM paths(left side of colon) used here are examples. Configure the respective mount paths.
          - /home/hyperscale_user/logs/controller_service:/opt/delphix/logs
          - /mnt/hyperscale:/etc/hyperscale
        environment:
          - API_KEY_CREATE=${API_KEY_CREATE:-false}
          - EXECUTION_STATUS_POLL_DURATION=${EXECUTION_STATUS_POLL_DURATION:-12000}
          - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=${LOG_LEVEL_CONTROLLER_SERVICE:-INFO}
          - API_VERSION_COMPATIBILITY_STRICT_CHECK=${API_VERSION_COMPATIBILITY_STRICT_CHECK:-false}
          - LOAD_SERVICE_REQUIREPOSTLOAD=${LOAD_SERVICE_REQUIRE_POST_LOAD:-true}
          - SKIP_UNLOAD_SPLIT_COUNT_VALIDATION=${SKIP_UNLOAD_SPLIT_COUNT_VALIDATION:-false}
          - SKIP_LOAD_SPLIT_COUNT_VALIDATION=${SKIP_LOAD_SPLIT_COUNT_VALIDATION:-false}
          - CANCEL_STATUS_POLL_DURATION=${CANCEL_STATUS_POLL_DURATION:-60000}
      unload-service:
        image: delphix-unload-service-app:${VERSION}
        init: true
        environment:
          - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=${LOG_LEVEL_UNLOAD_SERVICE:-INFO}
          - UNLOAD_FETCH_ROWS=${UNLOAD_FETCH_ROWS:-10000}
        networks:
          - hyperscale-net
        restart: unless-stopped
        volumes:
          - hyperscale-unload-data:/data
          # The orchestrator VM paths(left side of colon) used here are examples. Configure the respective mount paths.
          - /mnt/hyperscale:/etc/hyperscale
          - /home/hyperscale_user/logs/unload_service:/opt/delphix/logs
      masking-service:
        image: delphix-masking-service-app:${VERSION}
        init: true
        networks:
          - hyperscale-net
        restart: unless-stopped
        volumes:
          - hyperscale-masking-data:/data
          # The orchestrator VM paths(left side of colon) used here are examples. Configure the respective mount paths.
          - /mnt/hyperscale:/etc/hyperscale
          - /home/hyperscale_user/logs/masking_service:/opt/delphix/logs
        environment:
          - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=${LOG_LEVEL_MASKING_SERVICE:-INFO}
          - INTELLIGENT_LOADBALANCE_ENABLED=${INTELLIGENT_LOADBALANCE_ENABLED:-true}
      load-service:
        image: delphix-load-service-app:${VERSION}
        init: true
        environment:
          - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=${LOG_LEVEL_LOAD_SERVICE:-INFO}
          - SQLLDR_BLOB_CLOB_CHAR_LENGTH=${SQLLDR_BLOB_CLOB_CHAR_LENGTH:-20000}
        networks:
          - hyperscale-net
        restart: unless-stopped
        volumes:
          - hyperscale-load-data:/data
          # The orchestrator VM paths(left side of colon) used here are examples. Configure the respective mount paths.
          - /mnt/hyperscale:/etc/hyperscale
          - /opt/oracle/instantclient_21_5:/usr/lib/instantclient
          - /home/hyperscale_user/logs/load_service:/opt/delphix/logs
          - /home/hyperscale_user/logs/load_service/sqlldr:/opt/sqlldr/
      proxy:
        image: delphix-hyperscale-masking-proxy:${VERSION}
        init: true
        networks:
          - hyperscale-net
        ports:
          - "443:443"
        restart: unless-stopped
        depends_on:
          - controller-service
          #volumes:
          # Uncomment to bind mount /etc/config
          #- /nginx/config/path/on/host:/etc/config
    networks:
      hyperscale-net:
    volumes:
      hyperscale-load-data:
      hyperscale-unload-data:
      hyperscale-masking-data:
      hyperscale-controller-data:
  • For MS SQL data source masking:

    CODE
    version: "3.7"
    services:
      controller-service:
        image: delphix-controller-service-app:${VERSION}
        healthcheck:
          test: 'curl --fail --silent http://localhost:8080/actuator/health | grep UP || exit 1'
          interval: 30s
          timeout: 25s
          retries: 3
          start_period: 30s
        depends_on:
          - unload-service
          - masking-service
          - load-service
        init: true
        networks:
          - hyperscale-net
        restart: unless-stopped
        volumes:
          - hyperscale-controller-data:/data
          # The orchestrator VM paths(left side of colon) used here are examples. Configure the respective mount paths.
          - /home/hyperscale_user/logs/controller_service:/opt/delphix/logs
          - /mnt/hyperscale:/etc/hyperscale
        environment:
          - API_KEY_CREATE=${API_KEY_CREATE:-false}
          - EXECUTION_STATUS_POLL_DURATION=${EXECUTION_STATUS_POLL_DURATION:-12000}
          - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=${LOG_LEVEL_CONTROLLER_SERVICE:-INFO}
          - API_VERSION_COMPATIBILITY_STRICT_CHECK=${API_VERSION_COMPATIBILITY_STRICT_CHECK:-false}
          - LOAD_SERVICE_REQUIREPOSTLOAD=${LOAD_SERVICE_REQUIRE_POST_LOAD:-true}
          - SKIP_UNLOAD_SPLIT_COUNT_VALIDATION=${SKIP_UNLOAD_SPLIT_COUNT_VALIDATION:-false}
          - SKIP_LOAD_SPLIT_COUNT_VALIDATION=${SKIP_LOAD_SPLIT_COUNT_VALIDATION:-false}
          - CANCEL_STATUS_POLL_DURATION=${CANCEL_STATUS_POLL_DURATION:-60000}
      unload-service:
        image: delphix-mssql-unload-service-app:${VERSION}
        init: true
        environment:
          - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=${LOG_LEVEL_UNLOAD_SERVICE:-INFO}
          - UNLOAD_FETCH_ROWS=${UNLOAD_FETCH_ROWS:-10000}
          - SPARK_DATE_TIMESTAMP_FORMAT=${DATE_TIMESTAMP_FORMAT:-yyyy-MM-dd HH:mm:ss.SSSS}   
        networks:
          - hyperscale-net
        restart: unless-stopped
        volumes:
          - hyperscale-unload-data:/data
          # The orchestrator VM paths(left side of colon) used here are examples. Configure the respective mount paths.
          - /mnt/hyperscale:/etc/hyperscale
          - /home/hyperscale_user/logs/unload_service:/opt/delphix/logs
      masking-service:
        image: delphix-masking-service-app:${VERSION}
        init: true
        networks:
          - hyperscale-net
        restart: unless-stopped
        volumes:
          - hyperscale-masking-data:/data
          # The orchestrator VM paths(left side of colon) used here are examples. Configure the respective mount paths.
          - /mnt/hyperscale:/etc/hyperscale
          - /home/hyperscale_user/logs/masking_service:/opt/delphix/logs
        environment:
          - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=${LOG_LEVEL_MASKING_SERVICE:-INFO}
          - INTELLIGENT_LOADBALANCE_ENABLED=${INTELLIGENT_LOADBALANCE_ENABLED:-true}
      load-service:
        image: delphix-mssql-load-service-app:${VERSION}
        init: true
        environment:
          - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=${LOG_LEVEL_LOAD_SERVICE:-INFO}
          - SQLLDR_BLOB_CLOB_CHAR_LENGTH=${SQLLDR_BLOB_CLOB_CHAR_LENGTH:-20000}
          - SPARK_DATE_TIMESTAMP_FORMAT=${DATE_TIMESTAMP_FORMAT:-yyyy-MM-dd HH:mm:ss.SSSS}
        networks:
          - hyperscale-net
        restart: unless-stopped
        volumes:
          - hyperscale-load-data:/data
          # The orchestrator VM paths(left side of colon) used here are examples. Configure the respective mount paths.
          - /mnt/hyperscale:/etc/hyperscale
          - /home/hyperscale_user/logs/load_service:/opt/delphix/logs
      proxy:
        image: delphix-hyperscale-masking-proxy:${VERSION}
        init: true
        networks:
          - hyperscale-net
        ports:
          - "443:443"
        restart: unless-stopped
        depends_on:
          - controller-service
          #volumes:
          # Uncomment to bind mount /etc/config
          #- /nginx/config/path/on/host:/etc/config
    networks:
      hyperscale-net:
    volumes:
      hyperscale-load-data:
      hyperscale-unload-data:
      hyperscale-masking-data:
      hyperscale-controller-data:
  • For Delimited Files masking – A sample file specific to the Delimited connector is available in the package called docker-compose-delimitedfiles-sample.yaml.

    CODE
    version: "3.7"
    services:
      controller-service:
        image: delphix-controller-service-app:<HYPERSCALE VERSION>
        healthcheck:
          test: 'curl --fail --silent http://localhost:8080/actuator/health | grep UP || exit 1'
          interval: 30s
          timeout: 25s
          retries: 3
          start_period: 30s
        depends_on:
          - unload-service
          - masking-service
          - load-service
        init: true
        networks:
          - hyperscale-net
        restart: unless-stopped
        volumes:
          - hyperscale-controller-data:/data
          # The orchestrator VM paths(left side of colon) used here are examples. Configure the respective mount paths.
          - /mnt/parent_staging_area:/etc/hyperscale
        environment:
          - API_KEY_CREATE=true
          - EXECUTION_STATUS_POLL_DURATION=120000
          - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=INFO
          - API_VERSION_COMPATIBILITY_STRICT_CHECK=false
          - LOAD_SERVICE_REQUIREPOSTLOAD=false
          - SKIP_UNLOAD_SPLIT_COUNT_VALIDATION=false
          - SKIP_LOAD_SPLIT_COUNT_VALIDATION=false
          - CANCEL_STATUS_POLL_DURATION=60000
          - SOURCE_KEY_FIELD_NAMES=unique_source_files_identifier
      unload-service:
        image: delphix-delimited-unload-service-app:<HYPERSCALE VERSION>
        init: true
        networks:
          - hyperscale-net
        restart: unless-stopped
        volumes:
          - hyperscale-unload-data:/data
          # Staging area volume mount, here /mnt/parent_staging_area is used as an example
          # The orchestrator VM paths(left side of colon) used here are examples. Configure the respective mount paths.
          - /mnt/parent_staging_area:/etc/hyperscale
          # Source files should be made available within the unload-service container file system
          # The paths within the container should be configured in the source section of connector-info [with type=FS]
          - /mnt/source_files:/mnt/source
          #- /mnt/source_files2:/mnt/source2
      masking-service:
        image: delphix-masking-service-app:<HYPERSCALE VERSION>
        init: true
        networks:
          - hyperscale-net
        restart: unless-stopped
        volumes:
          - hyperscale-masking-data:/data
          # Staging area volume mount, here /mnt/parent_staging_area is used as an example
          # The orchestrator VM paths(left side of colon) used here are examples. Configure the respective mount paths.
          - /mnt/parent_staging_area:/etc/hyperscale
        environment:
          - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=INFO
          - INTELLIGENT_LOADBALANCE_ENABLED=true
      load-service:
        image: delphix-delimited-load-service-app:<HYPERSCALE VERSION>
        init: true
        networks:
          - hyperscale-net
        restart: unless-stopped
        volumes:
          - hyperscale-load-data:/data
          # Staging area volume mount, here /mnt/parent_staging_area is used as an example
          # The orchestrator VM paths(left side of colon) used here are examples. Configure the respective mount paths.
          - /mnt/parent_staging_area:/etc/hyperscale
          # Target location should be made available within the load-service container file system
          # The paths within the container should be configured in the target section of connector-info [with type=FS]
          - /mnt/target_files:/mnt/target
          #- /mnt/target_files2:/mnt/target2
      proxy:
        image: delphix-hyperscale-masking-proxy:<HYPERSCALE VERSION>
        init: true
        networks:
          - hyperscale-net
        ports:
          - "443:443"
          - "80:80"
        restart: unless-stopped
        depends_on:
          - controller-service
    networks:
      hyperscale-net:
    volumes:
      hyperscale-load-data:
      hyperscale-unload-data:
      hyperscale-masking-data:
      hyperscale-controller-data:
  • For MongoDB data source masking:

    CODE
    # Copyright (c) 2021, 2023 by Delphix. All rights reserved.
    version: "3.7"
    services:
        controller-service:
            build:
                context: controller-service
                args:
                  - VERSION=${VERSION}
            image: delphix-controller-service-app:${VERSION}
            healthcheck:
              test: 'curl --fail --silent http://localhost:8080/actuator/health | grep UP || exit 1'
              interval: 30s
              timeout: 25s
              retries: 3
              start_period: 30s
            depends_on:
              - unload-service
              - masking-service
              - load-service
            init: true
            networks:
              - hyperscale-net
            restart: unless-stopped
            volumes:
             - hyperscale-controller-data:/data
             # The orchestrator VM paths(left side of colon) used here are examples. Configure the respective mount paths.
             - /home/hyperscale_user/logs/controller_service:/opt/delphix/logs
             - /mnt/hyperscale:/etc/hyperscale
    
            environment:
              - API_KEY_CREATE=${API_KEY_CREATE:-false}
              - EXECUTION_STATUS_POLL_DURATION=${EXECUTION_STATUS_POLL_DURATION:-120000}
              - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=${LOG_LEVEL_CONTROLLER_SERVICE:-INFO}
              - API_VERSION_COMPATIBILITY_STRICT_CHECK=${API_VERSION_COMPATIBILITY_STRICT_CHECK:-false}
              - LOAD_SERVICE_REQUIREPOSTLOAD=${LOAD_SERVICE_REQUIRE_POST_LOAD:-true}
              - SKIP_UNLOAD_SPLIT_COUNT_VALIDATION=${SKIP_UNLOAD_SPLIT_COUNT_VALIDATION:-false}
              - SKIP_LOAD_SPLIT_COUNT_VALIDATION=${SKIP_LOAD_SPLIT_COUNT_VALIDATION:-false}
              - CANCEL_STATUS_POLL_DURATION=${CANCEL_STATUS_POLL_DURATION:-60000}
              - SOURCE_KEY_FIELD_NAMES=database_name,collection_name
              - VALIDATE_UNLOAD_ROW_COUNT_FOR_STATUS=${VALIDATE_UNLOAD_ROW_COUNT_FOR_STATUS:-false}
              - VALIDATE_MASKED_ROW_COUNT_FOR_STATUS=${VALIDATE_MASKED_ROW_COUNT_FOR_STATUS:-false}
              - VALIDATE_LOAD_ROW_COUNT_FOR_STATUS=${VALIDATE_LOAD_ROW_COUNT_FOR_STATUS:-false}
              - DISPLAY_BYTES_INFO_IN_STATUS=${DISPLAY_BYTES_INFO_IN_STATUS:-true}
              - DISPLAY_ROW_COUNT_IN_STATUS=${DISPLAY_ROW_COUNT_IN_STATUS:-false}
    
        unload-service:
            build:
                context: unload-service
                args:
                  - VERSION=${VERSION}
            image: delphix-mongo-unload-service-app:${VERSION}
            init: true
            environment:
              - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=${LOG_LEVEL_UNLOAD_SERVICE:-INFO}
              - UNLOAD_FETCH_ROWS=${UNLOAD_FETCH_ROWS:-10000}
              - CONCURRENT_EXPORT_LIMIT=${CONCURRENT_EXPORT_LIMIT:-10}
              - HIKARI_MAX_LIFE_TIME=${UNLOAD_HIKARI_MAX_LIFE_TIME:-1800000}
              - HIKARI_KEEP_ALIVE_TIME=${UNLOAD_HIKARI_KEEP_ALIVE_TIME:-300000}
              - FILE_DELIMITER=${FILE_DELIMITER:-,}
              - FILE_ENCLOSURE=${FILE_ENCLOSURE:-"}
              - FILE_ESCAPE_ENCLOSURE=${FILE_ESCAPE_ENCLOSURE:-"}
           
            networks:
              - hyperscale-net
            restart: unless-stopped
            volumes:
              - hyperscale-unload-data:/data
              # The orchestrator VM paths(left side of colon) used here are examples. Configure the respective mount paths.
              - /mnt/hyperscale:/etc/hyperscale
              - /home/hyperscale_user/logs/unload_service:/opt/delphix/logs
    
        masking-service:
            build:
                context: masking-service
                args:
                  - VERSION=${VERSION}
            image: delphix-masking-service-app:${VERSION}
            init: true
            networks:
              - hyperscale-net
            restart: unless-stopped
            volumes:
              - hyperscale-masking-data:/data
              # The orchestrator VM paths(left side of colon) used here are examples. Configure the respective mount paths.
              - /mnt/hyperscale:/etc/hyperscale
              - /home/hyperscale_user/logs/masking_service:/opt/delphix/logs
            environment:
              - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=${LOG_LEVEL_MASKING_SERVICE:-INFO}
              - INTELLIGENT_LOADBALANCE_ENABLED=${INTELLIGENT_LOADBALANCE_ENABLED:-true}
        
        load-service:
            build:
                context: load-service
                args:
                  - VERSION=${VERSION}
            image: delphix-mongo-load-service-app:${VERSION}
            init: true
            environment:
              - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=${LOG_LEVEL_LOAD_SERVICE:-INFO}
              - SQLLDR_BLOB_CLOB_CHAR_LENGTH=${SQLLDR_BLOB_CLOB_CHAR_LENGTH:-20000}
              - HIKARI_MAX_LIFE_TIME=${LOAD_HIKARI_MAX_LIFE_TIME:-1800000}
              - HIKARI_KEEP_ALIVE_TIME=${LOAD_HIKARI_KEEP_ALIVE_TIME:-300000}
            networks:
              - hyperscale-net
            restart: unless-stopped
            volumes:
              - hyperscale-load-data:/data
              # The orchestrator VM paths(left side of colon) used here are examples. Configure the respective mount paths.
              - /mnt/hyperscale:/etc/hyperscale
              - /home/hyperscale_user/logs/load_service:/opt/delphix/logs
        
        proxy:
            build: nginx
            image: delphix-hyperscale-masking-proxy:${VERSION}
            init: true
            networks:
              - hyperscale-net
            ports:
              - "443:443"
              - "80:80"
            restart: unless-stopped
            depends_on:
              - controller-service
            #volumes:
              # Uncomment to bind mount /etc/config
              #- /nginx/config/path/on/host:/etc/config
    networks:
        hyperscale-net:
    volumes:
        hyperscale-load-data:
        hyperscale-unload-data:
        hyperscale-masking-data:
        hyperscale-controller-data:
  • For Parquet files masking – A sample file specific to the Parquet connector is available in the package called docker-compose-parquet-sample.yaml.

    CODE
    version: "3.7"
    services:
      controller-service:
       image: delphix-controller-service-app:<HYPERSCALE VERSION>
       healthcheck:
         test: 'curl --fail --silent http://localhost:8080/actuator/health | grep UP || exit 1'
         interval: 30s
         timeout: 25s
         retries: 3
         start_period: 30s
       depends_on:
         - unload-service
         - masking-service
         - load-service
       init: true
       networks:
         - hyperscale-net
       restart: unless-stopped
       volumes:
         - hyperscale-controller-data:/data
         # The orchestrator VM paths(left side of colon) used here are examples. Configure the respective mount paths.
         - /mnt/parent_staging_area:/etc/hyperscale
       environment:
         - API_KEY_CREATE=true
         - EXECUTION_STATUS_POLL_DURATION=120000
         - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=INFO
         - API_VERSION_COMPATIBILITY_STRICT_CHECK=false
         - LOAD_SERVICE_REQUIREPOSTLOAD=false
         - SKIP_UNLOAD_SPLIT_COUNT_VALIDATION=false
         - SKIP_LOAD_SPLIT_COUNT_VALIDATION=false
         - CANCEL_STATUS_POLL_DURATION=60000
         - SOURCE_KEY_FIELD_NAMES=unique_source_files_identifier
      unload-service:
       image: delphix-parquet-unload-service-app:<HYPERSCALE VERSION>
       init: true
       networks:
         - hyperscale-net
       restart: unless-stopped
       volumes:
         - hyperscale-unload-data:/data
         # Staging area volume mount, here /mnt/parent_staging_area is used as an example
         # The orchestrator VM paths(left side of colon) used here are examples. Configure the respective mount paths.
         - /mnt/parent_staging_area:/etc/hyperscale
       environment:
         - MAX_WORKER_THREADS_PER_JOB=512
         # The default AWS region and credentials can be set using environment variables
         #- AWS_DEFAULT_REGION=us-east-1
         #- AWS_ACCESS_KEY_ID=<aws_access_key_id>
         #- AWS_SECRET_ACCESS_KEY=<aws_secret_access_key>
      masking-service:
       image: delphix-masking-service-app:<HYPERSCALE VERSION>
       init: true
       networks:
         - hyperscale-net
       restart: unless-stopped
       volumes:
         - hyperscale-masking-data:/data
         # Staging area volume mount, here /mnt/parent_staging_area is used as an example
          # The orchestrator VM paths (left side of the colon) used here are examples. Configure the respective mount paths.
         - /mnt/parent_staging_area:/etc/hyperscale
       environment:
         - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=INFO
         - INTELLIGENT_LOADBALANCE_ENABLED=true
      load-service:
       image: delphix-parquet-load-service-app:<HYPERSCALE VERSION>
       init: true
       networks:
         - hyperscale-net
       restart: unless-stopped
       volumes:
         - hyperscale-load-data:/data
         # Staging area volume mount, here /mnt/parent_staging_area is used as an example
          # The orchestrator VM paths (left side of the colon) used here are examples. Configure the respective mount paths.
         - /mnt/parent_staging_area:/etc/hyperscale
       #environment:
         # The default AWS region and credentials can be set using environment variables
         #- AWS_DEFAULT_REGION=us-east-1
         #- AWS_ACCESS_KEY_ID=<aws_access_key_id>
         #- AWS_SECRET_ACCESS_KEY=<aws_secret_access_key>
      proxy:
       image: delphix-hyperscale-masking-proxy:<HYPERSCALE VERSION>
       init: true
       networks:
         - hyperscale-net
       ports:
         - "443:443"
         - "80:80"
       restart: unless-stopped
       depends_on:
         - controller-service
    networks:
     hyperscale-net:
    volumes:
     hyperscale-load-data:
     hyperscale-unload-data:
     hyperscale-masking-data:
      hyperscale-controller-data:
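In the sample above, all four services (controller, unload, masking, and load) mount the same staging area path into /etc/hyperscale. Before starting the stack, it is worth confirming that the path exists and is writable on the orchestrator VM. The sketch below is illustrative only: it uses a temporary directory as a stand-in for /mnt/parent_staging_area, which is itself only an example path.

```shell
# Sketch: pre-flight check that the staging area exists and is writable.
# On a real install, set staging_area to the actual mount path (the sample
# uses /mnt/parent_staging_area); here a temp directory is a stand-in.
staging_area=$(mktemp -d)

if [ -d "$staging_area" ] && [ -w "$staging_area" ]; then
  staging_ok=yes
else
  staging_ok=no
fi
echo "$staging_ok"

rmdir "$staging_area"
```

If the check prints `no`, fix the mount (or its ownership/permissions) before running docker-compose up, since every service depends on that shared volume.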
  1. [Optional] To modify the default Hyperscale configuration properties for the application, see Configuration Settings.

  2. Start the application by running the docker-compose up -d command from the same location where you extracted the docker-compose.yaml file.

    1. To check if the application is running, use the docker-compose ps command. The output of this command should show five containers up and running.

    2. To access the application logs of a given container, run the docker logs -f <service_container_name> command. The service container name can be obtained from the output of the docker-compose ps command.

    3. Run the sudo docker-compose down command to stop the application (if required).
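The commands from the steps above can be kept together as one reference snippet. The heredoc below only prints them rather than executing them, since running them requires Docker and Docker Compose on the orchestrator VM:

```shell
# Reference snippet: the Docker Compose lifecycle for the Hyperscale stack.
# Printed, not executed, because the commands need Docker on the host.
lifecycle=$(cat <<'EOF'
docker-compose up -d                      # start all five containers in the background
docker-compose ps                         # verify the five containers are up and running
docker logs -f <service_container_name>   # follow the logs of one container
sudo docker-compose down                  # stop the application
EOF
)
printf '%s\n' "$lifecycle"
```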

  3. Once the application starts, an API key is generated; it is required to authenticate with the Hyperscale Compliance Orchestrator. The key can be found in the Docker container logs of the controller service. You can either look for the key in the controller service logs location that was set as a volume binding in the docker-compose.yaml file, or use the docker logs -f <service_container_name> command to retrieve the logs. The service container name can be obtained from the output of the docker-compose ps command.

    1. The above command displays an output similar to the following, where the string NEWLY GENERATED API KEY can be found in the log:

      CODE
      2022-05-18 12:24:10.981  INFO 7 --- [           main] o.a.c.c.C.[Tomcat].[localhost].[/]    : Initializing Spring embedded WebApplicationContext
      2022-05-18 12:24:10.982  INFO 7 --- [           main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 9699 ms
      NEWLY GENERATED API KEY: 1.89lPH1dHSJQwHuQvzawD99sf4SpBPXJADUmJS8v00VCF4V7rjtRFAftGWygFfsqM
    2. To authenticate with the Hyperscale Compliance Orchestrator, you must include the API key in the HTTP Authorization request header, using the custom apk type: apk <API Key>.

    3. For more information, see the Authentication section under Accessing the Hyperscale Compliance API.
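The two steps above can be sketched as a small shell snippet that extracts the key from a log line and builds the Authorization header. The log line below is a stand-in copied from the sample output; on a real install it would come from the controller container's logs, and the request URL is a placeholder, not a documented endpoint:

```shell
# Stand-in log line; on a real install, obtain it with something like:
#   docker logs <controller_container_name> 2>&1 | grep "NEWLY GENERATED API KEY"
log_line='NEWLY GENERATED API KEY: 1.89lPH1dHSJQwHuQvzawD99sf4SpBPXJADUmJS8v00VCF4V7rjtRFAftGWygFfsqM'

# Strip the log prefix to get the bare key.
api_key=$(printf '%s\n' "$log_line" | sed -n 's/.*NEWLY GENERATED API KEY: //p')

# The Authorization header uses the custom "apk" type.
auth_header="Authorization: apk ${api_key}"
printf '%s\n' "$auth_header"

# A request would then look like this (host and endpoint are placeholders):
#   curl -k -H "$auth_header" https://<hyperscale-host>/<api-endpoint>
```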

Continuous Compliance Engine Installation

Delphix Continuous Compliance Engine is a multi-user, browser-based web application that provides complete, secure, and scalable software for your sensitive data discovery, masking, and tokenization needs while meeting enterprise-class infrastructure requirements. For information about installing the Continuous Compliance Engine, see Continuous Compliance Engine Installation documentation.
