
Upgrading the Hyperscale Compliance Orchestrator (Podman Compose)

Prerequisite

Before upgrading, ensure you have downloaded the Hyperscale Compliance x.0.0 (where x.0.0 should be changed to the version of Hyperscale being installed) tar bundle from the Delphix Download website.

How to upgrade the Hyperscale Compliance Orchestrator

Perform the following steps to upgrade the Hyperscale Compliance Orchestrator to the x.0.0 version:

  1. Run cd /<hyperscale_installation_path>/ and podman-compose down to stop and remove all the running containers.

  2. Run the below commands to delete all existing dangling images and Hyperscale images:

    CODE
    podman rmi $(podman images -f "dangling=true" -q)
    podman rmi $(podman images "delphix-hyperscale-masking-proxy" -q)
    podman rmi $(podman images "delphix-controller-service-app" -q)
    podman rmi $(podman images "delphix-masking-service-app" -q)
    podman rmi $(podman images "delphix-*load-service-app" -q)
  3. Remove all files and folders from the existing installation directory, except podman-compose.yaml and the .env file. Keep a backup of podman-compose.yaml outside the installation directory so that it is not overwritten while executing the next step.
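The backup-and-cleanup in this step can be sketched as follows. This is a minimal illustration against throwaway demo directories; the paths and the `stale-file` placeholder are assumptions, so substitute your real installation path and a backup location outside it.

```shell
# Sketch of step 3, using throwaway demo directories.
INSTALL_DIR="$(mktemp -d)"     # stands in for /<hyperscale_installation_path>/
BACKUP_DIR="$(mktemp -d)"      # backup location outside the installation dir
touch "$INSTALL_DIR/podman-compose.yaml" "$INSTALL_DIR/.env" "$INSTALL_DIR/stale-file"

# Keep copies of the two files that must survive the upgrade
cp "$INSTALL_DIR/podman-compose.yaml" "$INSTALL_DIR/.env" "$BACKUP_DIR/"

# Remove everything else from the old installation
find "$INSTALL_DIR" -mindepth 1 \
     ! -name 'podman-compose.yaml' ! -name '.env' -delete
```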

  4. Take a backup of the .env file, then extract the patch tar into your existing installation path (where x.0.0 should be changed to the version of Hyperscale being installed): tar -xzvf delphix-hyperscale-masking-x.0.0.tar.gz -C <existing_installation_path>

  5. Replace the podman-compose.yaml file supplied with the bundle with the podman-compose.yaml file you backed up in step 3. Alternatively, copy the differing parts (essentially the volumes and properties for each service) from the backed-up podman-compose.yaml file into the new podman-compose.yaml file supplied in the bundle.

  6. Similarly, either replace the new .env file with the .env file backed up in step 4 and set the VERSION property to x.0.0 (that is, VERSION=x.0.0), or use the new .env file from the installation bundle and set its properties to the values from the old .env file.
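Updating the VERSION property can be scripted as sketched below. The .env path, the demo property, and the concrete version numbers are assumptions for illustration only; point ENV_FILE at your real file and use the actual release version.

```shell
# Sketch of step 6 against a demo .env file.
ENV_FILE="$(mktemp)"                                          # use your real .env path
printf 'VERSION=11.0.0\nSOME_PROPERTY=value\n' > "$ENV_FILE"  # demo contents

NEW_VERSION="12.0.0"   # replace with the x.0.0 release being installed
sed -i "s/^VERSION=.*/VERSION=${NEW_VERSION}/" "$ENV_FILE"
grep '^VERSION=' "$ENV_FILE"
```

Other properties carried over from the old .env file are left untouched; only the VERSION line is rewritten.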

  7. Run the below commands to load the images (this configures an Oracle-based unload/load setup):

    CODE
    podman load --input controller-service.tar
    podman load --input unload-service.tar
    podman load --input masking-service.tar
    podman load --input load-service.tar
    podman load --input proxy.tar
    • If upgrading from an MSSQL connector setup (supported starting with the 5.0.0.0 release), run the below commands instead of the Oracle unload/load commands above (the commands for the controller, masking, and proxy services remain the same):

      CODE
      podman load --input mssql-unload-service.tar 
      podman load --input mssql-load-service.tar
    • If upgrading from a Delimited/Parquet Files connector setup (supported starting with the 12.0.0 release), run the below commands instead of the Oracle unload/load commands above (the commands for the controller, masking, and proxy services remain the same), after updating the new image names in podman-compose.yaml:

      CODE
      podman load --input file-connector-unload-service.tar
      podman load --input file-connector-load-service.tar
    • If upgrading from a MongoDB connector setup (supported starting with the 13.0.0 release), run the below commands instead of the Oracle unload/load commands above (the commands for the controller, masking, and proxy services remain the same):

      CODE
      podman load --input mongo-unload-service.tar
      podman load --input mongo-load-service.tar
  8. Make sure the below ports are configured under the proxy service:

    CODE
    ports:
        - "443:8443"
        - "80:8080"
  9. Ensure your mounts are configured and accessible before running a job.
    If upgrading to version 24.0 (and onwards), ensure that the location mounted on the Hyperscale host is the same as the one mapped to /etc/hyperscale in your podman-compose.yaml. If a previous mount exists at another location, unmount it and re-mount it to the right directory. For example, if a mount point named staging_area exists at path /mnt/provision/staging_area, execute the following commands and restart the containers.

    1. If an NFS file server is used as the staging server, execute these commands:

      CODE
      sudo umount /mnt/provision/staging_area
      sudo mount -t nfs4 <source_nfs_endpoint>:<source_nfs_location> /mnt/provision
    2. If the NFS server installation is a Delphix Continuous Data Engine empty VDB, you can either:

      1. Append the staging_area path to the volume binding for all the services in podman-compose.yaml. For example:

        CODE
        volumes:
              - /mnt/provision/staging_area:/etc/hyperscale
      2. Alternatively, update the mount path of the Environment on the Continuous Data Engine:

        1. Disable the empty VDB (Data Set).

        2. Update the path on the Environment → Databases page. For example, change the path from /mnt/provision/staging_area to /mnt/provision/.

        3. Enable the empty VDB (Data Set).

        4. Restart Hyperscale containers.

After re-mounting, recheck the permissions of the staging area on the Hyperscale host. Refer to instruction number 3 on the Installation and Setup page for the required staging area permissions.
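A quick way to recheck the staging area is sketched below. The path and the write probe are assumptions for illustration, not the documented permission requirements; set STAGING to your actual staging mount.

```shell
# Sketch: confirm the staging area exists and is writable after re-mounting.
STAGING="${STAGING:-$(mktemp -d)}"   # demo dir; use e.g. /mnt/provision/staging_area
ls -ld "$STAGING"

probe="$STAGING/.hs_write_probe"
if touch "$probe" 2>/dev/null; then
    rm -f "$probe"
    echo "staging area is writable"
else
    echo "staging area is NOT writable" >&2
fi
```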

Upon application startup, all existing mount-filesystems will be deleted. Take a backup of the mount setup details beforehand, if needed.

If using file connectors, any unload or load jobs in a running state at the time of a container restart are marked as failed.

  10. Run podman-compose up -d to create the containers.
