
Installation and setup (Podman Compose)

Delphix highly recommends new installations be performed on Kubernetes.

This section describes the steps you must perform to install the Hyperscale Compliance Orchestrator.

Hyperscale Compliance installation

Pre-requisites

Ensure that you meet the following requirements before you install the Hyperscale Compliance Orchestrator.

  • Download the Hyperscale tar file (delphix-hyperscale-masking-x.0.0.tar.gz) from download.delphix.com (where x.0.0 should be changed to the version of Hyperscale being installed).

  • You must create a user that has permission to install Podman and Podman Compose. 

  • Install Podman on the VM. The minimum supported Podman version is 4.4.1.

Note: By default, Podman 4.4.1 is not available for some Debian-based Linux distributions. For example, you cannot install Podman 4.4.1 on Ubuntu 20.04 and above. The workaround is to use a RHEL machine to host the Hyperscale deployment with Podman.

  • Install Podman Compose on the VM. The minimum supported podman-compose version is 1.0.6.

  • Verify that podman and podman-compose are installed by running the following commands:

    • podman-compose -v, which displays output similar to: podman-compose version 1.0.6

    • podman -v, which displays output similar to: podman version 4.4.1

  • Podman cannot create containers that bind to ports below 1024. Hyperscale’s proxy container binds ports 80 and 443. Run the following command to enable binding to these ports: sudo sysctl net.ipv4.ip_unprivileged_port_start=80

  • [Only Required for Oracle Load Service] Download and install the Linux-based Oracle Instant Client on the machine where the Hyperscale Compliance Orchestrator will be installed. The client must include instantclient-basic (Oracle shared libraries) along with instantclient-tools, which contains Oracle’s SQL*Loader client. Both packages, instantclient-basic and instantclient-tools, should be unzipped into the same directory. A group ownership ID of 50 with a permission mode of 550, or a user ID of 65436 with a permission mode of 500, must be set recursively on the directory where Oracle’s Instant Client binaries/libraries are installed. This is required so that the Hyperscale Compliance Orchestrator can read and execute from the directory. For example commands, see the sketch after this list.

Oracle Load doesn’t support Object Identifiers (OIDs).
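
For the Oracle Instant Client prerequisite above, the required ownership and permissions can be set with standard shell commands such as the following. This is a minimal sketch: the directory /opt/oracle/instantclient_21_5 is only an example (it matches the path used in the sample podman-compose.yaml later on this page), and the sysctl.d file name is hypothetical.

CODE
# Example only: set ownership/permissions on the Instant Client directory
# Option 1: group ownership ID 50 with permission mode 550
sudo chgrp -R 50 /opt/oracle/instantclient_21_5
sudo chmod -R 550 /opt/oracle/instantclient_21_5
# Option 2: user ID 65436 with permission mode 500
# sudo chown -R 65436 /opt/oracle/instantclient_21_5
# sudo chmod -R 500 /opt/oracle/instantclient_21_5

# Optionally persist the unprivileged-port setting across reboots (file name is an example)
echo "net.ipv4.ip_unprivileged_port_start=80" | sudo tee /etc/sysctl.d/99-hyperscale.conf
sudo sysctl --system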

Procedure

Perform the following procedure to install the Hyperscale Compliance Orchestrator.

  1. Unpack the Hyperscale tar file (where x.0.0 should be changed to the version of Hyperscale being installed).

tar -xzf delphix-hyperscale-masking-x.0.0.tar.gz

  2. Upon unpacking, you will find the podman image tar files, which are categorized as below:

Universal images common for all connectors.

  • controller-service.tar

  • masking-service.tar

  • proxy.tar

      Oracle (required only for Oracle data source masking)

  • unload-service.tar

  • load-service.tar

      MSSQL (required only for MS SQL data source masking)

  • mssql-unload-service.tar

  • mssql-load-service.tar

      File Connector (required only for Delimited and Parquet Files masking)

  • file-connector-unload-service.tar

  • file-connector-load-service.tar

      MongoDB (required only for MongoDB database masking)

  • mongo-unload-service.tar

  • mongo-load-service.tar

Each deployment set consists of 5 images (3 universal images and 2 images specific to the dataset type). Proceed to load the required images into podman as below (a quick verification example follows these commands):

For Oracle data source masking:

CODE
podman load --input unload-service.tar
podman load --input load-service.tar
podman load --input controller-service.tar
podman load --input masking-service.tar
podman load --input proxy.tar

For MS SQL data source masking:

CODE
podman load --input mssql-unload-service.tar
podman load --input mssql-load-service.tar
podman load --input controller-service.tar
podman load --input masking-service.tar
podman load --input proxy.tar

For Delimited and Parquet Files masking using File connector:

CODE
podman load --input file-connector-unload-service.tar
podman load --input file-connector-load-service.tar
podman load --input controller-service.tar
podman load --input masking-service.tar
podman load --input proxy.tar

For MongoDB data source masking:

CODE
podman load --input mongo-unload-service.tar
podman load --input mongo-load-service.tar
podman load --input controller-service.tar
podman load --input masking-service.tar
podman load --input proxy.tar
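
After the images are loaded, you can optionally confirm that they are present in the local podman image store. This is a quick check, not a required step; the grep pattern simply matches the delphix-prefixed image names loaded above.

CODE
# Optional: list the loaded Hyperscale images
podman images | grep delphix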

  3. Create an NFS shared mount that will act as a staging area on the Hyperscale Compliance Orchestrator host, where the Hyperscale Compliance Orchestrator will perform read/write/execute operations.

    1. Create a ‘Staging Area’ directory, for example /mnt/provision. Note that you no longer need to explicitly create a mount-name directory, as was required in previous releases. The shared path from the NFS server must be the same as the value defined for the NFS_STORAGE_EXPORT_PATH config. The users within each of the containers that make up the Hyperscale Compliance Orchestrator, and the appliance OS users on the Continuous Compliance Engine(s), all have a user ID of 65436 and/or a group ownership ID of 50. As such, the provision directory requires the following permissions, based on the UID/GID of the OS user, so that the Hyperscale Compliance Orchestrator and the Continuous Compliance Engine(s) can perform read/write/execute operations on the staging area:

      1. If the Hyperscale Compliance OS user has a UID of 65436, then the provision directory must have a UID of 65436 and a permission mode of 700.

      2. If the Hyperscale Compliance OS user has a GID of 50 and does not have a UID of 65436, then the provision directory must have a GID of 50 and a permission mode of 770.

    2. Mount the NFS shared directory on the staging area directory (/mnt/provision). This NFS shared storage can be created and mounted in two ways, as detailed in the NFS Server Installation section. Depending on the umask value of the user performing the mount, the permissions of the staging area directory may be altered after the NFS share has been mounted. In such cases, the permissions (770 or 700, whichever applies based on step 3a) must be applied again to the staging area directory. Example commands are sketched below.
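
A minimal sketch of steps 3a and 3b, assuming a Hyperscale OS user with UID 65436 and the example path /mnt/provision; the NFS server name, export path, and mount options are placeholders you must adapt to your environment.

CODE
# Create the staging area directory (example path from step 3a)
sudo mkdir -p /mnt/provision
# Permissions per step 3a: UID 65436 with mode 700 (or GID 50 with mode 770)
sudo chown 65436 /mnt/provision
sudo chmod 700 /mnt/provision

# Mount the NFS export onto the staging area (placeholders shown)
sudo mount -t nfs4 <nfs_server>:<nfs_export_path> /mnt/provision

# Re-apply permissions if the mount altered them (step 3b)
sudo chown 65436 /mnt/provision
sudo chmod 700 /mnt/provision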

  4. Configure the following container volume bindings for the containers by editing the podman-compose.yaml file from the tar:

    1. For each of the containers, except the ‘proxy’ container, add a volume entry binding the staging area path (from step 3a; /mnt/hyperscale in the samples below) to the Hyperscale Compliance Orchestrator container path (/etc/hyperscale) under the ‘volumes’ section.

    2. [Only Required for Oracle Load Service] For the load-service container, add a volume entry under the ‘volumes’ section that binds the host directory where both Oracle Instant Client packages were unzipped to the container path /usr/lib/instantclient.

    3. [Only Required for File Connector Unload Service] For the File Connector unload-service, the source NFS location must be mounted to the container as a volume so that it can access the source files. The path mounted on the container is passed during the creation of the source connector-info.

      CODE
      # Mount your source NFS location onto your Hyperscale Engine server
      sudo mount [-t nfs4] <source_nfs_endpoint>:<source_nfs_location> <nfs_mount_on_host>
      CODE
      # Later mount <nfs_mount_on_host> as a podman volume to the file-connector unload-service container (in podman-compose.yaml, created using podman-compose-delimitedfiles-sample.yaml)
      unload-service:
        image: delphix-file-connector-unload-service-app:<HYPERSCALE VERSION>
        ...
        volumes:
          ...
          # Source files should be made available within the unload-service container file system
          # The paths within the container should be configured in the source section of connector-info [with type=FS]
          - <nfs_mount_on_host>:<source_files_mount_passed_during_connector_info_creation_path_in_container>
    4. [Only Required for File Connector Load Service] For the File Connector load-service, the target NFS location must be mounted to the container as a volume so that it can access the target location where masked files will be placed. The path mounted on the container is passed during the creation of the target connector-info.

      CODE
      # Mount your target NFS location onto your Hyperscale Engine server
      sudo mount [-t nfs4] <target_nfs_endpoint>:<target_nfs_location> <target_nfs_mount_on_host>
      CODE
      # Later mount <target_nfs_mount_on_host> as a podman volume to the file-connector load-service container (in podman-compose.yaml, created using podman-compose-delimitedfiles-sample.yaml)
      load-service:
        image: delphix-file-connector-load-service-app:${VERSION}
        ...
        volumes:
          ...
          # Target location should be made available within the load-service container file system
          # The paths within the container should be configured in the target section of connector-info [with type=FS]
          - <target_nfs_mount_on_host>:<target_location_passed_during_connector_info_creation_path_in_container>
    5. [Only Required for File Connector Load Service] If using the staging push feature, you must provide source files on the NFS mount point that is mapped to the staging area. You can optionally also provide format files along with the source files; if you do, set the environment variable USER_PROVIDED_FORMAT_FILE to true. Otherwise, by default, the pyarrow writer creates the format file. In the example below, /source/files/path and the staging area must share the same mount point.

      CODE
      unload-service:
        image: delphix-file-connector-unload-service-app:${VERSION}
        ...
        volumes:
          ...
          # Source location should be made available within the unload-service container file system
          # The paths within the container should be configured in the source section of connector-info [with type=FS]
          - /source/files/path:/etc/hyperscale/source
      load-service:
        image: delphix-file-connector-load-service-app:${VERSION}
        ...
        volumes:
          ...
          # Target location should be made available within the load-service container file system
          # The paths within the container should be configured in the target section of connector-info [with type=FS]
          - /target/files/path:/etc/hyperscale/target
  5. [Optional] Some data generated inside the containers (for example, logs and configuration files) can be useful for debugging errors or exceptions while running hyperscale jobs, so it may be beneficial to persist this data outside the containers. The following data can be persisted outside the containers:

    1. The logs generated for each service i.e. unload, controller, masking, and load services.

    2. The sqlldr utility logs and control files at the /opt/sqlldr location in the load-service container.

    3. The file-upload folder at /etc/hyperscale/uploads in the controller-service container

If you would like to persist the above data on your host, set up volume bindings in the respective services of the podman-compose.yaml file that map locations inside the containers to locations on the host, as shown in the samples below. The host locations must again have a group ownership ID of 50 with a permission mode of 770, or a user ID of 65436 with a permission mode of 700, for the same reasons highlighted in step 3a. An example of preparing such host directories follows.
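
A minimal sketch of preparing host directories for the log bindings used in the sample files below, assuming a Hyperscale OS user with UID 65436; the /home/hyperscale_user/logs paths are the same example locations that appear in the samples.

CODE
# Create host directories for persisted service logs (example paths)
mkdir -p /home/hyperscale_user/logs/{controller_service,unload_service,masking_service,load_service}
# Apply ownership/permissions as in step 3a (UID 65436 with mode 700 shown; GID 50 with mode 770 also works)
sudo chown -R 65436 /home/hyperscale_user/logs
sudo chmod -R 700 /home/hyperscale_user/logs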

Here are examples of the podman-compose.yaml file for Oracle, MS SQL, and MongoDB data sources, and for Delimited and Parquet file masking:

For Oracle data source masking:

CODE
version: "4"
services:
  controller-service:
    image: delphix-controller-service-app:${VERSION}
    security_opt:
      - label:disable
    userns_mode: keep-id
    healthcheck:
      test: 'curl --fail --silent http://localhost:8080/actuator/health | grep UP || exit 1'
      interval: 30s
      timeout: 25s
      retries: 3
      start_period: 30s
    depends_on:
      - unload-service
      - masking-service
      - load-service
    init: true
    networks:
      - hyperscale-net
    restart: unless-stopped
    volumes:
      - hyperscale-controller-data:/data
      - /home/hyperscale_user/logs/controller_service:/opt/delphix/logs
      - /mnt/hyperscale:/etc/hyperscale
    environment:
      - API_KEY_CREATE=${API_KEY_CREATE:-false}
      - EXECUTION_STATUS_POLL_DURATION=${EXECUTION_STATUS_POLL_DURATION:-12000}
      - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=${LOG_LEVEL_CONTROLLER_SERVICE:-INFO}
      - API_VERSION_COMPATIBILITY_STRICT_CHECK=${API_VERSION_COMPATIBILITY_STRICT_CHECK:-false}
      - LOAD_SERVICE_REQUIREPOSTLOAD=${LOAD_SERVICE_REQUIRE_POST_LOAD:-true}
      - SKIP_UNLOAD_SPLIT_COUNT_VALIDATION=${SKIP_UNLOAD_SPLIT_COUNT_VALIDATION:-false}
      - SKIP_LOAD_SPLIT_COUNT_VALIDATION=${SKIP_LOAD_SPLIT_COUNT_VALIDATION:-false}
      - CANCEL_STATUS_POLL_DURATION=${CANCEL_STATUS_POLL_DURATION:-60000}
      - NFS_STORAGE_HOST=${NFS_STORAGE_HOST:-}
      - NFS_STORAGE_EXPORT_PATH=${NFS_STORAGE_EXPORT_PATH:-}
      - NFS_STORAGE_MOUNT_TYPE=${NFS_STORAGE_MOUNT_TYPE:-}
      - NFS_STORAGE_MOUNT_OPTION=${NFS_STORAGE_MOUNT_OPTION:-}
      - APPLICATION_NAME=${APPLICATION_NAME:-hs-staging-area}
  unload-service:
    image: delphix-unload-service-app:${VERSION}
    security_opt:
      - label:disable
    userns_mode: keep-id
    init: true
    environment:
      - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=${LOG_LEVEL_UNLOAD_SERVICE:-INFO}
      - UNLOAD_FETCH_ROWS=${UNLOAD_FETCH_ROWS:-10000}
    networks:
      - hyperscale-net
    restart: unless-stopped
    volumes:
      - hyperscale-unload-data:/data
      - /mnt/hyperscale:/etc/hyperscale
      - /home/hyperscale_user/logs/unload_service:/opt/delphix/logs
  masking-service:
    image: delphix-masking-service-app:${VERSION}
    security_opt:
      - label:disable
    userns_mode: keep-id
    init: true
    networks:
      - hyperscale-net
    restart: unless-stopped
    volumes:
      - hyperscale-masking-data:/data
      - /mnt/hyperscale:/etc/hyperscale
      - /home/hyperscale_user/logs/masking_service:/opt/delphix/logs
    environment:
      - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=${LOG_LEVEL_MASKING_SERVICE:-INFO}
      - INTELLIGENT_LOADBALANCE_ENABLED=${INTELLIGENT_LOADBALANCE_ENABLED:-true}
  load-service:
    image: delphix-load-service-app:${VERSION}
    security_opt:
      - label:disable
    userns_mode: keep-id
    init: true
    environment:
      - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=${LOG_LEVEL_LOAD_SERVICE:-INFO}
      - SQLLDR_BLOB_CLOB_CHAR_LENGTH=${SQLLDR_BLOB_CLOB_CHAR_LENGTH:-20000}
    networks:
      - hyperscale-net
    restart: unless-stopped
    volumes:
      - hyperscale-load-data:/data
      - /mnt/hyperscale:/etc/hyperscale
      - /opt/oracle/instantclient_21_5:/usr/lib/instantclient
      - /home/hyperscale_user/logs/load_service:/opt/delphix/logs
      - /home/hyperscale_user/logs/load_service/sqlldr:/opt/sqlldr/
  proxy:
    image: delphix-hyperscale-masking-proxy:${VERSION}
    init: true
    networks:
      - hyperscale-net
    ports:
      - "443:443"
    restart: unless-stopped
    depends_on:
      - controller-service
      #volumes:
      # Uncomment to bind mount /etc/config
      #- /nginx/config/path/on/host:/etc/config
networks:
  hyperscale-net:
volumes:
  hyperscale-load-data:
  hyperscale-unload-data:
  hyperscale-masking-data:
  hyperscale-controller-data:

For MS SQL data source masking: A sample file specific to the MS SQL connector should look like the following.

CODE
version: "4"
services:
  controller-service:
    image: delphix-controller-service-app:${VERSION}
    security_opt:
      - label:disable
    userns_mode: keep-id
    healthcheck:
      test: 'curl --fail --silent http://localhost:8080/actuator/health | grep UP || exit 1'
      interval: 30s
      timeout: 25s
      retries: 3
      start_period: 30s
    depends_on:
      - unload-service
      - masking-service
      - load-service
    init: true
    networks:
      - hyperscale-net
    restart: unless-stopped
    volumes:
      - hyperscale-controller-data:/data
      - /home/hyperscale_user/logs/controller_service:/opt/delphix/logs
      - /mnt/hyperscale:/etc/hyperscale
    environment:
      - API_KEY_CREATE=${API_KEY_CREATE:-false}
      - EXECUTION_STATUS_POLL_DURATION=${EXECUTION_STATUS_POLL_DURATION:-12000}
      - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=${LOG_LEVEL_CONTROLLER_SERVICE:-INFO}
      - API_VERSION_COMPATIBILITY_STRICT_CHECK=${API_VERSION_COMPATIBILITY_STRICT_CHECK:-false}
      - LOAD_SERVICE_REQUIREPOSTLOAD=${LOAD_SERVICE_REQUIRE_POST_LOAD:-true}
      - SKIP_UNLOAD_SPLIT_COUNT_VALIDATION=${SKIP_UNLOAD_SPLIT_COUNT_VALIDATION:-false}
      - SKIP_LOAD_SPLIT_COUNT_VALIDATION=${SKIP_LOAD_SPLIT_COUNT_VALIDATION:-false}
      - CANCEL_STATUS_POLL_DURATION=${CANCEL_STATUS_POLL_DURATION:-60000}
      - NFS_STORAGE_HOST=${NFS_STORAGE_HOST:-}
      - NFS_STORAGE_EXPORT_PATH=${NFS_STORAGE_EXPORT_PATH:-}
      - NFS_STORAGE_MOUNT_TYPE=${NFS_STORAGE_MOUNT_TYPE:-}
      - NFS_STORAGE_MOUNT_OPTION=${NFS_STORAGE_MOUNT_OPTION:-}
      - APPLICATION_NAME=${APPLICATION_NAME:-hs-staging-area}
  unload-service:
    image: delphix-mssql-unload-service-app:${VERSION}
    security_opt:
      - label:disable
    userns_mode: keep-id
    init: true
    environment:
      - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=${LOG_LEVEL_UNLOAD_SERVICE:-INFO}
      - UNLOAD_FETCH_ROWS=${UNLOAD_FETCH_ROWS:-10000}
      - SPARK_DATE_TIMESTAMP_FORMAT=${DATE_TIMESTAMP_FORMAT:-yyyy-MM-dd HH:mm:ss.SSSS}   
    networks:
      - hyperscale-net
    restart: unless-stopped
    volumes:
      - hyperscale-unload-data:/data
      - /mnt/hyperscale:/etc/hyperscale
      - /home/hyperscale_user/logs/unload_service:/opt/delphix/logs
  masking-service:
    image: delphix-masking-service-app:${VERSION}
    security_opt:
      - label:disable
    userns_mode: keep-id
    init: true
    networks:
      - hyperscale-net
    restart: unless-stopped
    volumes:
      - hyperscale-masking-data:/data
      - /mnt/hyperscale:/etc/hyperscale
      - /home/hyperscale_user/logs/masking_service:/opt/delphix/logs
    environment:
      - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=${LOG_LEVEL_MASKING_SERVICE:-INFO}
      - INTELLIGENT_LOADBALANCE_ENABLED=${INTELLIGENT_LOADBALANCE_ENABLED:-true}
  load-service:
    image: delphix-mssql-load-service-app:${VERSION}
    security_opt:
      - label:disable
    userns_mode: keep-id
    init: true
    environment:
      - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=${LOG_LEVEL_LOAD_SERVICE:-INFO}
      - SQLLDR_BLOB_CLOB_CHAR_LENGTH=${SQLLDR_BLOB_CLOB_CHAR_LENGTH:-20000}
      - SPARK_DATE_TIMESTAMP_FORMAT=${DATE_TIMESTAMP_FORMAT:-yyyy-MM-dd HH:mm:ss.SSSS}
    networks:
      - hyperscale-net
    restart: unless-stopped
    volumes:
      - hyperscale-load-data:/data
      - /mnt/hyperscale:/etc/hyperscale
      - /home/hyperscale_user/logs/load_service:/opt/delphix/logs
  proxy:
    image: delphix-hyperscale-masking-proxy:${VERSION}
    init: true
    networks:
      - hyperscale-net
    ports:
      - "443:443"
    restart: unless-stopped
    depends_on:
      - controller-service
      #volumes:
      # Uncomment to bind mount /etc/config
      #- /nginx/config/path/on/host:/etc/config
networks:
  hyperscale-net:
volumes:
  hyperscale-load-data:
  hyperscale-unload-data:
  hyperscale-masking-data:
  hyperscale-controller-data:

For Delimited and Parquet Files masking: A sample file specific to the File connector should look like the following.

CODE
version: "4"
services:
  controller-service:
    image: delphix-controller-service-app:<HYPERSCALE VERSION>
    security_opt:
      - label:disable
    userns_mode: keep-id
    healthcheck:
      test: 'curl --fail --silent http://localhost:8080/actuator/health | grep UP || exit 1'
      interval: 30s
      timeout: 25s
      retries: 3
      start_period: 30s
    depends_on:
      - unload-service
      - masking-service
      - load-service
    init: true
    networks:
      - hyperscale-net
    restart: unless-stopped
    volumes:
      - hyperscale-controller-data:/data
      - /mnt/parent_staging_area:/etc/hyperscale
    environment:
      - API_KEY_CREATE=true
      - EXECUTION_STATUS_POLL_DURATION=120000
      - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=INFO
      - API_VERSION_COMPATIBILITY_STRICT_CHECK=false
      - LOAD_SERVICE_REQUIREPOSTLOAD=false
      - SKIP_UNLOAD_SPLIT_COUNT_VALIDATION=false
      - SKIP_LOAD_SPLIT_COUNT_VALIDATION=false
      - CANCEL_STATUS_POLL_DURATION=60000
      - SOURCE_KEY_FIELD_NAMES=unique_source_files_identifier
      - NFS_STORAGE_HOST=${NFS_STORAGE_HOST:-}
      - NFS_STORAGE_EXPORT_PATH=${NFS_STORAGE_EXPORT_PATH:-}
      - NFS_STORAGE_MOUNT_TYPE=${NFS_STORAGE_MOUNT_TYPE:-}
      - NFS_STORAGE_MOUNT_OPTION=${NFS_STORAGE_MOUNT_OPTION:-}
      - APPLICATION_NAME=${APPLICATION_NAME:-hs-staging-area}
  unload-service:
    image: delphix-file-connector-unload-service-app:<HYPERSCALE VERSION>
    security_opt:
      - label:disable
    userns_mode: keep-id
    init: true
    networks:
      - hyperscale-net
    restart: unless-stopped
    volumes:
      - hyperscale-unload-data:/data
      # Staging area volume mount, here /mnt/parent_staging_area is used as an example
      - /mnt/parent_staging_area:/etc/hyperscale
      # Source files should be made available within the unload-service container file system
      # The paths within the container should be configured in the source section of connector-info [with type=FS]
      - /mnt/source_files:/mnt/source
      #- /mnt/source_files2:/mnt/source2
      # In case source is hadoop
      #- /path/to/keytab_file/hadoop.keytab:/app/hadoop.keytab
      #- /path/to/hadoop/core-site.xml:/app/hadoop/etc/hadoop/core-site.xml
      #- /path/to/etc/krb5.conf:/etc/krb5.conf
      # In case user want to provide external Hadoop client for pyarrow writer then they can provide
      #- /path/to/hadoop/client/hadoop:/app/hadoop
  masking-service:
    image: delphix-masking-service-app:<HYPERSCALE VERSION>
    security_opt:
      - label:disable
    userns_mode: keep-id
    init: true
    networks:
      - hyperscale-net
    restart: unless-stopped
    volumes:
      - hyperscale-masking-data:/data
      # Staging area volume mount, here /mnt/parent_staging_area is used as an example
      - /mnt/parent_staging_area:/etc/hyperscale
    environment:
      - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=INFO
      - INTELLIGENT_LOADBALANCE_ENABLED=true
  load-service:
    image: delphix-file-connector-load-service-app:<HYPERSCALE VERSION>
    security_opt:
      - label:disable
    userns_mode: keep-id
    init: true
    networks:
      - hyperscale-net
    restart: unless-stopped
    volumes:
      - hyperscale-load-data:/data
      # Staging area volume mount, here /mnt/parent_staging_area is used as an example
      - /mnt/parent_staging_area:/etc/hyperscale
      # Target location should be made available within the load-service container file system
      # The paths within the container should be configured in the target section of connector-info [with type=FS]
      - /mnt/target_files:/mnt/target
      #- /mnt/target_files2:/mnt/target2
      # In case target is hadoop
      #- /path/to/keytab_file/hadoop.keytab:/app/hadoop.keytab
      #- /path/to/hadoop/core-site.xml:/app/hadoop/etc/hadoop/core-site.xml
      #- /path/to/etc/krb5.conf:/etc/krb5.conf
      # In case user want to provide external Hadoop client for pyarrow writer then they can provide
      #- /path/to/hadoop/client/hadoop:/app/hadoop
  proxy:
    image: delphix-hyperscale-masking-proxy:<HYPERSCALE VERSION>
    init: true
    networks:
      - hyperscale-net
    ports:
      - "443:443"
      - "80:80"
    restart: unless-stopped
    depends_on:
      - controller-service
networks:
  hyperscale-net:
volumes:
  hyperscale-load-data:
  hyperscale-unload-data:
  hyperscale-masking-data:
  hyperscale-controller-data:

For MongoDB data source masking:

CODE
# Copyright (c) 2023, 2024 by Delphix. All rights reserved.
version: "4"
services:
  controller-service:
    build:
      context: controller-service
      args:
        - VERSION=${VERSION}
    image: delphix-controller-service-app:${VERSION}
    security_opt:
      - label:disable
    userns_mode: keep-id
    healthcheck:
      test: 'curl --fail --silent http://localhost:8080/actuator/health | grep UP || exit 1'
      interval: 30s
      timeout: 25s
      retries: 3
      start_period: 30s
    depends_on:
      - unload-service
      - masking-service
      - load-service
    init: true
    networks:
      - hyperscale-net
    restart: unless-stopped
    volumes:
      - hyperscale-controller-data:/data
      # The orchestrator VM paths(left side of colon) used here are examples. Configure the respective mount paths.
      - /mnt/hyperscale:/etc/hyperscale
    environment:
      - API_KEY_CREATE=${API_KEY_CREATE:-true}
      - EXECUTION_STATUS_POLL_DURATION=${EXECUTION_STATUS_POLL_DURATION:-120000}
      - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=${LOG_LEVEL_CONTROLLER_SERVICE:-INFO}
      - API_VERSION_COMPATIBILITY_STRICT_CHECK=${API_VERSION_COMPATIBILITY_STRICT_CHECK:-false}
      - LOAD_SERVICE_REQUIREPOSTLOAD=${LOAD_SERVICE_REQUIRE_POST_LOAD:-true}
      - SKIP_UNLOAD_SPLIT_COUNT_VALIDATION=${SKIP_UNLOAD_SPLIT_COUNT_VALIDATION:-false}
      - SKIP_LOAD_SPLIT_COUNT_VALIDATION=${SKIP_LOAD_SPLIT_COUNT_VALIDATION:-false}
      - CANCEL_STATUS_POLL_DURATION=${CANCEL_STATUS_POLL_DURATION:-60000}
      # update below parameters for creation/updation of mount-filesystem
      - NFS_STORAGE_HOST=${NFS_STORAGE_HOST:-}
      - NFS_STORAGE_EXPORT_PATH=${NFS_STORAGE_EXPORT_PATH:-}
      - NFS_STORAGE_MOUNT_TYPE=${NFS_STORAGE_MOUNT_TYPE:-}
      - NFS_STORAGE_MOUNT_OPTION=${NFS_STORAGE_MOUNT_OPTION:-}
      - APPLICATION_NAME=${APPLICATION_NAME:-hs-staging-area}

      # uncomment below for Delimited files connector
      #- SOURCE_KEY_FIELD_NAMES=unique_source_files_identifier
      # uncomment below for MongoDB connector
      - SOURCE_KEY_FIELD_NAMES=database_name,collection_name
      - VALIDATE_UNLOAD_ROW_COUNT_FOR_STATUS=${VALIDATE_UNLOAD_ROW_COUNT_FOR_STATUS:-false}
      - VALIDATE_MASKED_ROW_COUNT_FOR_STATUS=${VALIDATE_MASKED_ROW_COUNT_FOR_STATUS:-false}
      - VALIDATE_LOAD_ROW_COUNT_FOR_STATUS=${VALIDATE_LOAD_ROW_COUNT_FOR_STATUS:-false}
      - DISPLAY_BYTES_INFO_IN_STATUS=${DISPLAY_BYTES_INFO_IN_STATUS:-true}
      - DISPLAY_ROW_COUNT_IN_STATUS=${DISPLAY_ROW_COUNT_IN_STATUS:-true}

  unload-service:
    build:
      context: unload-service
      args:
        - VERSION=${VERSION}
    image: delphix-mongo-unload-service-app:${VERSION}
    security_opt:
      - label:disable
    userns_mode: keep-id
    init: true
    environment:
      - CONCURRENT_EXPORT_LIMIT=${CONCURRENT_EXPORT_LIMIT:-10}
      - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=${LOG_LEVEL_UNLOAD_SERVICE:-INFO}
    networks:
      - hyperscale-net
    restart: unless-stopped
    volumes:
      - hyperscale-unload-data:/data
      # The orchestrator VM paths(left side of colon) used here are examples. Configure the respective mount paths.
      - /mnt/hyperscale:/etc/hyperscale

  masking-service:
    build:
      context: masking-service
      args:
        - VERSION=${VERSION}
    image: delphix-masking-service-app:${VERSION}
    security_opt:
      - label:disable
    userns_mode: keep-id
    init: true
    networks:
      - hyperscale-net
    restart: unless-stopped
    volumes:
      - hyperscale-masking-data:/data
      # The orchestrator VM paths(left side of colon) used here are examples. Configure the respective mount paths.
      - /mnt/hyperscale:/etc/hyperscale
    environment:
      - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=${LOG_LEVEL_MASKING_SERVICE:-INFO}
      - INTELLIGENT_LOADBALANCE_ENABLED=${INTELLIGENT_LOADBALANCE_ENABLED:-true}

  load-service:
    build:
      context: load-service
      args:
        - VERSION=${VERSION}
    image: delphix-mongo-load-service-app:${VERSION}
    security_opt:
      - label:disable
    userns_mode: keep-id
    init: true
    environment:
      - LOGGING_LEVEL_COM_DELPHIX_HYPERSCALE=${LOG_LEVEL_LOAD_SERVICE:-INFO}
    networks:
      - hyperscale-net
    restart: unless-stopped
    volumes:
      - hyperscale-load-data:/data
      # The orchestrator VM paths(left side of colon) used here are examples. Configure the respective mount paths.
      - /mnt/hyperscale:/etc/hyperscale
  proxy:
    build: nginx
    image: delphix-hyperscale-masking-proxy:${VERSION}
    init: true
    networks:
      - hyperscale-net
    ports:
      - "443:8443"
      - "80:8080"
    restart: unless-stopped
    depends_on:
      - controller-service
      #volumes:
      # Uncomment to bind mount /etc/config
      #- /nginx/config/path/on/host:/etc/config
networks:
  hyperscale-net:
volumes:
  hyperscale-load-data:
  hyperscale-unload-data:
  hyperscale-masking-data:
  hyperscale-controller-data:

  6. (OPTIONAL) To modify the default Hyperscale configuration properties for the application, see Configuration Settings.

  7. Run the application from the same location where you extracted the podman-compose.yaml file.

podman-compose up -d

  • Run the following command to check if the application is running. The output of this command should show five containers up and running.

podman-compose ps

  • Run the following command to access application logs of a given container.

podman logs -f <service_container_name>

The service container name can be found in the output of the podman-compose ps command.

  • Run the following command to stop the application (if required).

podman-compose down

  8. Once the application starts, an API key will be generated that is required to authenticate with the Hyperscale Compliance Orchestrator. This key can be found in the podman container logs of the controller service. You can either look for the key in the controller service logs location that was set as a volume binding in the podman-compose.yaml file, or use the following podman command to retrieve the logs.

podman logs -f <service_container_name>

The service container name can be found in the output of the podman-compose ps command.

The above command displays an output similar to the following, where the string NEWLY GENERATED API KEY can be grepped from the log:

CODE
2022-05-18 12:24:10.981  INFO 7 --- [           main] o.a.c.c.C.[Tomcat].[localhost].[/]    : Initializing Spring embedded WebApplicationContext
2022-05-18 12:24:10.982  INFO 7 --- [           main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 9699 ms
NEWLY GENERATED API KEY: 1.89lPH1dHSJQwHuQvzawD99sf4SpBPXJADUmJS8v00VCF4V7rjtRFAftGWygFfsqM
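
For example, the key can be extracted directly from the controller-service container logs with a standard grep; the container name is a placeholder you can take from the podman-compose ps output.

CODE
# Example only: grep the generated key out of the controller-service logs
podman logs <controller_service_container_name> 2>&1 | grep "NEWLY GENERATED API KEY"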

To authenticate with the Hyperscale Compliance Orchestrator, you must use the API key and include the HTTP Authorization request header with the type apk, in the form apk <API Key>.
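
A minimal sketch of an authenticated request, assuming the orchestrator is reached over HTTPS through the proxy container; the host name and the /api/connector-info endpoint are illustrative placeholders, not confirmed by this page.

CODE
# Example only: call a Hyperscale API endpoint using the generated key
curl -k -H "Authorization: apk <API Key>" https://<hyperscale-host>/api/connector-info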

For more information, see the Authentication section under Accessing the Hyperscale Compliance API.

Continuous Compliance Engine Installation

Delphix Continuous Compliance Engine is a multi-user, browser-based web application that provides complete, secure, and scalable software for your sensitive data discovery, masking, and tokenization needs while meeting enterprise-class infrastructure requirements. For information about installing the Continuous Compliance Engine, see Continuous Compliance Engine Installation documentation.
