
Installation and setup (Kubernetes)

Installation requirements

To deploy Hyperscale Compliance via Kubernetes, you need a running Kubernetes cluster, the kubectl command-line tool to interact with the cluster, and HELM to deploy onto the cluster.

| Requirement | Recommended Version | Comments |
|---|---|---|
| Kubernetes Cluster | 1.25 or above | |
| HELM | 3.9.0 or above | The HELM installation should support HELM v3. More information on HELM can be found at https://helm.sh/docs/. To install HELM, follow the installation instructions at https://helm.sh/docs/intro/install/. The installation also requires access to the HELM repository from which the Hyperscale charts can be downloaded. The HELM repository URL is https://dlpx-helm-hyperscale.s3.amazonaws.com. |
| kubectl | 1.25.0 or above | |
If an intermediate HELM repository is to be used instead of the default Delphix HELM repository, then the repository URL, username, and password for that repository need to be configured in the values.yaml file under the imageCredentials section.
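As a sketch, the imageCredentials section for an intermediate repository might look like the following. The username and password keys are named in this guide; the key for the repository URL (here registry) and the URL itself are assumptions, so verify them against the comments in your chart's values.yaml:

```yaml
imageCredentials:
  # URL of the intermediate repository (key name and URL are illustrative; verify in values.yaml)
  registry: https://my-intermediate-repo.example.com
  # Credentials to access the intermediate repository
  username: <username>
  password: <password>
```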

Installation

Download the HELM charts

The latest version of the chart can be pulled locally with the following command:

curl -XGET https://dlpx-helm-hyperscale.s3.amazonaws.com/hyperscale-helm-8.0.0.tgz -o hyperscale-helm-8.0.0.tgz

This command will download a file with the name hyperscale-helm-8.0.0.tgz in the current working directory. The downloaded file can be extracted using the following command:

tar -xvf hyperscale-helm-8.0.0.tgz

This will extract into the following directory structure:

CODE
hyperscale-helm
    |- values.yaml
    |- README.md
    |- Chart.yaml
    |- templates
        |-<all templates files>

Configure registry credentials for Docker images

To pull the Docker images from the registry, temporary credentials must be configured/overridden in the values.yaml file. To obtain the temporary credentials, visit the Hyperscale Compliance Download page and log in with your customer login credentials. Once logged in, select the Hyperscale HELM Repository link and accept the Terms and Conditions. Login credentials are then presented; note them down and edit the imageCredentials.username and imageCredentials.password properties in the values.yaml file as shown below:

CODE
# Credentials to fetch Docker images from the Delphix internal repository
imageCredentials:
  # Username to log in to the Docker registry
  username: <username>
  # Password to log in to the Docker registry
  password: <password>

Override default values in values.yaml

hyperscale-helm is the name of the folder extracted in the previous step. In the above directory structure, the values.yaml file contains all of the configurable properties with their default values. These defaults can be overridden while deploying Hyperscale Compliance, as per your requirements. If values need to be overridden, create a copy of values.yaml and edit the required properties. While deploying Hyperscale Compliance, the values.yaml file can be overridden using either of the following commands:

CODE
helm install hyperscale-helm -f <path to edited values.yaml> <directory path of the extracted chart>
helm install hyperscale-helm <directory path of the extracted chart> --set <property1>=<value1> --set <property2>=<value2>

Configure Helm chart properties of importance

A few commonly used properties for each service (controller, masking, unload, and load), with their default values, are listed in values.yaml. Details of these properties can be found on the Configuration Settings page. The following sections describe some of the important properties that must be configured correctly for a successful deployment.

Configure the staging area

By default, a path (/dlpxdata) local to the Kubernetes cluster node will be used, via persistent volume claims, to mount the staging area path inside the pods. Override the path by setting the desired local storage path with the localStoragePath property.

If the cluster needs to mount an NFS shared path that will act as the staging area, override the nfsStorageHost and nfsStorageExportPath properties. 
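As a sketch, the staging-area overrides in a copied values.yaml might look like the following; the property names come from this guide, while the hostname and export path are illustrative:

```yaml
# Local storage on the cluster node (the default mount path is /dlpxdata)
localStoragePath: /dlpxdata

# Alternatively, mount an NFS share as the staging area
# (hostname and export path below are examples only)
nfsStorageHost: nfs.example.internal
nfsStorageExportPath: /exports/hyperscale-staging
```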

Configure the correct unload/load images for your dataset type

By default, the helm installation will create the Kubernetes pods for the Oracle unload and load services. To have helm create unload and load pods for the other connector types, for example, MSSQL unload and load services, override the following values in the values.yaml file:

  1. unload.imageName=mssql-unload-service

  2. load.imageName=mssql-load-service
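The two overrides above can be collected in a copied values.yaml; as a sketch (assuming the dotted property paths map directly to nested YAML keys, with only imageName changed for each service):

```yaml
unload:
  imageName: mssql-unload-service
load:
  imageName: mssql-load-service
```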

Configure the instantclient path (applicable for Oracle unload/load)

By default, a path (/dlpxdata) local to the Kubernetes cluster node will be used, via persistent volume claims, to mount the Oracle instantclient path inside the pods. Override the path by setting the desired instantclient path with the instantClientStoragePath and instantClientRootDirName properties.

If the cluster needs to mount an NFS shared path that will contain the instantclient binaries, override the nfsInstantClientHost and nfsInstantClientExportPath properties. 
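As a sketch, the instantclient overrides in a copied values.yaml might look like the following; the property names come from this guide, while the directory name, hostname, and export path are illustrative:

```yaml
# Local storage for the Oracle instantclient binaries (values are examples only)
instantClientStoragePath: /dlpxdata
instantClientRootDirName: instantclient_dir

# Alternatively, mount the instantclient binaries from an NFS share
nfsInstantClientHost: nfs.example.internal
nfsInstantClientExportPath: /exports/instantclient
```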

The kubeconfig file

HELM will internally refer to the kubeconfig file to connect to the Kubernetes cluster. The default kubeconfig file is present at location: ~/.kube/config.

If the kubeconfig file needs to be overridden while running HELM commands, set the KUBECONFIG environment variable to the location of the kubeconfig file.
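For example, before running any helm commands, the KUBECONFIG environment variable can be exported in the shell session (the path below is illustrative):

```shell
# Point HELM and kubectl at a non-default kubeconfig file
# (the path is an example; substitute your own)
export KUBECONFIG="$HOME/.kube/hyperscale-config"
```

Subsequent helm install and kubectl commands in the same session will then use this kubeconfig instead of ~/.kube/config.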

Configuring the ingress controller

Assuming an ingress controller is configured on the Kubernetes cluster, an ingress rule must be added for the proxy service in order to access Hyperscale Compliance after deployment, using port 443 (if SSL is enabled) or port 80 (if SSL is disabled).

Additionally, the following annotations will need to be set (this assumes that the Kubernetes Ingress NGINX Controller is being used):

  1. nginx.ingress.kubernetes.io/backend-protocol=HTTPS

  2. nginx.ingress.kubernetes.io/proxy-body-size=50m

  3. nginx.ingress.kubernetes.io/proxy-connect-timeout=600

  4. nginx.ingress.kubernetes.io/proxy-read-timeout=600

  5. nginx.ingress.kubernetes.io/proxy-send-timeout=600

If an ingress controller has not been assigned, then a new ingress resource, with the above requirements, can be created with the following kubectl command:

CODE
kubectl create ingress https-ingress --namespace=<namespace-name> --rule="/*=proxy:443" --annotation=nginx.ingress.kubernetes.io/backend-protocol=HTTPS --annotation=nginx.ingress.kubernetes.io/proxy-body-size=50m --annotation=nginx.ingress.kubernetes.io/proxy-connect-timeout=600 --annotation=nginx.ingress.kubernetes.io/proxy-read-timeout=600 --annotation=nginx.ingress.kubernetes.io/proxy-send-timeout=600
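Equivalently, the same ingress resource can be declared as a manifest and applied with kubectl apply; this is a sketch that mirrors the command above (the ingressClassName value is an assumption and depends on how the NGINX controller was installed):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: https-ingress
  namespace: <namespace-name>
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
spec:
  # Class name is an assumption; match it to your installed ingress controller
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: proxy
                port:
                  number: 443
```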

Check for a successful installation

After installing the helm chart and setting up the ingress controller, check the status of the helm chart and the pods using the following commands:

CODE
$ helm list
NAME              NAMESPACE   REVISION   UPDATED                                   STATUS     CHART                   APP VERSION
hyperscale-helm   default     1          2023-04-17 05:38:17.639357049 +0000 UTC   deployed   hyperscale-helm-8.0.0
CODE
$ kubectl get pods --namespace=<namespace-name>
NAME                                  READY   STATUS              RESTARTS      AGE
proxy-7786995dc6-2pzt9                1/1     Running             1 (81s ago)   2m28s
controller-service-698ddd77fd-wt65x   0/1     ContainerCreating   0             2m28s
masking-service-6c95cf474d-rjl6l      1/1     Running             0             2m28s
unload-service-6c5dcb9f48-mnk4p       1/1     Running             0             2m28s
load-service-7bfc864cb8-2w6mt         1/1     Running             0             2m28s


