
Installation and setup (OpenShift)

Installation requirements

To deploy Hyperscale Compliance via OpenShift, you need a running OpenShift cluster, the oc command-line tool to interact with the cluster, and HELM for deployment onto the cluster.

Requirement: oc
Recommended Version: 4.11.3 or above

Requirement: HELM
Recommended Version: 3.9.0 or above
Comments: HELM installation should support HELM v3. More information on HELM can be found at https://helm.sh/docs/. To install HELM, follow the installation instructions at https://helm.sh/docs/intro/install/. The installation also requires access to the HELM repository from where Hyperscale charts can be downloaded. The HELM repository URL is https://dlpx-helm-hyperscale.s3.amazonaws.com.

Requirement: OpenShift Cluster
Recommended Version: 4.12 or above


  • If an intermediate HELM repository is to be used instead of the default Delphix HELM repository, then the repository URL, username, and password to access this repository need to be configured in the values.yaml file under the imageCredentials section.

  • Oracle Load doesn’t support Object Identifiers (OIDs).

Installation process

OC login

Run the oc login command to authenticate the OpenShift CLI with the server:

CODE
oc login https://openshift1.example.com -u=<<user_name>> -p=<<password>>

Verify KubeConfig

HELM will use the configuration file inside the $HOME/.kube/ folder to deploy artifacts on an OpenShift cluster. Be sure the config file has the cluster context added, and the current context is set to use this cluster. To verify the context, run this command:

CODE
oc config current-context
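
If the current context does not point to the intended cluster, list the available contexts and switch to the correct one (the context name below is a placeholder):

CODE
# List available contexts and switch to the one for the target cluster
oc config get-contexts
oc config use-context <context-name>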

Create a new project

Create a new project named hyperscale-services using the command below:

CODE
oc new-project hyperscale-services --description="Hyperscale Deployment project" --display-name="hyperscale-services"

Define SecurityContextConstraints

Hyperscale Compliance services run by default with UID 65436 and GID 50 so that the files created by Hyperscale Compliance can be read by the Hyperscale Compliance Engine (and vice versa), which runs with the same UID and GID.

The default SecurityContextConstraints (SCC) in OpenShift run all Hyperscale services with a random UID/GID, which breaks this arrangement with the Compliance Engine. To make Hyperscale work with a Compliance Engine, you need to create a custom SecurityContextConstraints (SCC) object.

The following are sample steps to achieve this. You must perform these tasks only once per deployment setup. If these steps have already been executed, use the existing ServiceAccount in the values.yaml file.

OC login

Run the oc login command (providing an administrator username and password) to authenticate the OpenShift CLI with the server:

CODE
oc login https://openshift1.example.com -u=<<user_name>> -p=<<password>>

Create SecurityContextConstraints

Create a file (e.g., hs-scc.yaml) with the following content:

CODE
kind: SecurityContextConstraints
apiVersion: security.openshift.io/v1
metadata:
  name: <SecurityContextConstraints-Name>
allowPrivilegedContainer: false
runAsUser:
  type: MustRunAs
  uid: 65436
seLinuxContext:
  type: RunAsAny
fsGroup:
  type: MustRunAs
  ranges:
    - min: 50
      max: 50

Apply the configuration to create the SecurityContextConstraints object:

CODE
oc apply -f hs-scc.yaml
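
Optionally, confirm that the SCC was created (substitute the name you used in hs-scc.yaml):

CODE
oc get scc <SecurityContextConstraints-Name>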

Create ServiceAccount

Run the following command to create a service account.

CODE
oc create sa <service-account-name>
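
For example, using a placeholder name such as hyperscale-sa in the hyperscale-services project created earlier:

CODE
# hyperscale-sa is a hypothetical ServiceAccount name; use your own
oc create sa hyperscale-sa -n hyperscale-services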

Create Role and Role Binding

Create a file (e.g. hs-role.yaml) with the following content.

CODE
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: hs-role-scc
rules:
  - apiGroups: ["security.openshift.io"]
    resources: ["securitycontextconstraints"]
    resourceNames: ["<SecurityContextConstraints-Name>"]
    verbs: ["use"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: hs-rb-scc
subjects:
  - kind: ServiceAccount
    name: <service-account-name>
roleRef:
  kind: Role
  name: hs-role-scc
  apiGroup: rbac.authorization.k8s.io

Replace <SecurityContextConstraints-Name> and <service-account-name> in the file with the names used in the previous steps, then apply the configuration to create the Role and RoleBinding:

CODE
oc apply -f hs-role.yaml
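
Optionally, verify that the Role and RoleBinding exist in the project where Hyperscale will be deployed (the hyperscale-services project is assumed here):

CODE
oc get role/hs-role-scc rolebinding/hs-rb-scc -n hyperscale-services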

Installation

Download the HELM charts

The latest version of the chart can be pulled locally with the following command (where x.x.x should be changed to the version of Hyperscale being installed):

CODE
curl -XGET https://dlpx-helm-hyperscale.s3.amazonaws.com/hyperscale-helm-x.x.x.tgz -o hyperscale-helm-x.x.x.tgz

This command will download a file with the name hyperscale-helm-x.x.x.tgz in the current working directory. The downloaded file can be extracted using the following command (where x.x.x should be changed to the version of Hyperscale being installed):

CODE
tar -xvf hyperscale-helm-x.x.x.tgz

This will extract into the following directory structure:

CODE
hyperscale-helm
├── Chart.yaml
├── README.md
├── templates
│   └── <all template files>
├── tools
│   └── <all tool files>
├── values-file-connector.yaml
├── values-mongo.yaml
├── values-mssql.yaml
├── values-oracle.yaml
└── values.yaml

Verify the authenticity of the downloaded HELM charts

The SHA-256 hash sum of the downloaded helm chart tarball file can be verified as follows:

  1. Execute the command below and note the digest value for version x.x.x (where x.x.x should be changed to the version of Hyperscale being installed):
    curl https://dlpx-helm-hyperscale.s3.amazonaws.com/index.yaml

  2. Execute the sha256sum command (or equivalent) on the downloaded file hyperscale-helm-x.x.x.tgz (where x.x.x should be changed to the version of Hyperscale being installed):
    sha256sum hyperscale-helm-x.x.x.tgz

The value generated by the sha256sum utility in step 2 must match the digest value noted in step 1.
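
For reference, here is a minimal shell sketch of the comparison. It assumes the standard Helm repository index layout, in which the digest field appears a few lines above the matching version entry; field ordering may vary, so adjust the grep context if needed:

CODE
# Show the index entry (including its digest) for the chart version being installed
curl -s https://dlpx-helm-hyperscale.s3.amazonaws.com/index.yaml | grep -B6 "version: x.x.x"

# Compute the local checksum; it must match the digest shown above
sha256sum hyperscale-helm-x.x.x.tgz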

Configure Registry Credentials for Docker Images

To pull the Docker images from the registry, permanent credentials associated with your Delphix account need to be configured in the values.yaml file. To get these permanent credentials, visit the Hyperscale Compliance Download page and log in with your credentials. Once logged in, select the Hyperscale HELM Repository link and accept the Terms and Conditions. Once accepted, credentials for the docker image registry will be presented. Note them down and edit the imageCredentials.username and imageCredentials.password properties in the values.yaml file as shown below:

CODE
# Credentials to fetch Docker images from Delphix internal repository
imageCredentials:
  # Username to login to docker registry
  username: <username>
  # Password to login to docker registry
  password: <password>

Delphix will delete unused credentials after 30 days and inactive (but previously used) credentials after 90 days.

Helm chart configuration files

hyperscale-helm is the name of the folder that was extracted in the previous step. In the above directory structure, there are essentially two files that come into play while attempting to install the helm chart:

  1. A values.yaml configuration file that contains configurable properties, common to all the services, with their default values.  

  2. A values-[connector-type].yaml configuration file that contains configurable properties, applicable to the services of the specific connector, with their default values.  

The following sections talk about some of the important properties that will need to be configured correctly for a successful deployment. A full list of the configurable properties can be found on the Configuration Settings page. 

(Mandatory) Configure the staging area volume

A volume will need to be mounted, via persistent volume claims, inside the pods that will provide access to the staging area for the hyperscale compliance services. This can be configured in one of the following ways that involves setting/overriding some properties in the values.yaml configuration file:

  1. nfsStorageHost and nfsStorageExportPath: Set values for these properties if the cluster needs to mount an NFS shared path from an NFS server.  For information about setting up and configuring an NFS server for the staging area volume, refer to NFS Server Installation.

  • Installing the helm chart with these properties set will create a persistent volume on the cluster. As such, the user installing the helm chart should either be a cluster-admin or should have the privileges to be able to create persistent volume on the cluster.

  • The above parameters are also used to auto-configure the filesystem mount. Hence, the value for the nfsStorageMountType property must also be defined.

  2. stagePvcName: Set this property if the cluster needs to bind the pods to a persistent volume claim. Note that until this PVC is bound to a backing PV, the pods will not start getting created, and as such, the cluster admin should ensure that the backing PV is either statically provisioned or dynamically provisioned based on the storage class associated with the PVC.

  3. stagePvName and stageStorageClass: Set these properties if the cluster needs to bind the pods to a persistent volume with the associated storage class name. Once the helm chart installation starts, a PVC will be created that is managed by helm.

The following properties are supporting/optional properties that can be overridden along with the above properties:

  1. nfsStorageMountOption:  If nfsStorageHost and nfsStorageExportPath have been set, set the appropriate mount option if you would like the cluster to mount with an option other than the default option of nfsvers=4.2.

  2. stageAccessMode and stageStorageSize: Persistent Volume claims can request specific storage capacity size and access modes
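
For reference, a minimal sketch of values.yaml overrides for option 1 above (staging area on an NFS server). The host, export path, and sizes are placeholders, the nfsStorageMountType value shown is illustrative, and the exact key nesting should follow the shipped values.yaml; consult the Configuration Settings page for supported values:

CODE
# Staging area mounted from an NFS server (sample values only)
nfsStorageHost: "nfs-server.example.com"
nfsStorageExportPath: "/var/nfs/hyperscale-staging"
nfsStorageMountType: "nfs4"        # illustrative; check the supported values
# Optional overrides
# nfsStorageMountOption: "nfsvers=4.2"
# stageAccessMode: ReadWriteMany
# stageStorageSize: 50Gi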

(Mandatory for Oracle) Configure the instantclient volume

A volume will need to be mounted, via persistent volume claims, inside the Oracle load service that will provide access to Oracle’s instantclient binaries. This can be configured by one of the following ways that involves setting/overriding some properties in the values-oracle.yaml configuration file:

  1. nfsInstantClientHost and nfsInstantClientExportPath: Set values for these properties if the cluster needs to mount an NFS shared path from an NFS server.

Note: Installing the helm chart with these properties set will create a persistent volume on the cluster. As such, the user installing the helm chart should either be a cluster-admin or should have the privileges to be able to create persistent volume on the cluster.

  2. instantClientPvcName: Set this property if the cluster needs to bind the pods to a persistent volume claim. Note that until this PVC is bound to a backing PV, the pods will not start getting created, and as such, the cluster admin should ensure that the backing PV is either manually provisioned or dynamically provisioned based on the storage class associated with the PVC.

  3. instantClientPvName and instantClientStorageClass: Set these properties if the cluster needs to bind the pods to a persistent volume with the associated storage class name. Once the helm chart installation starts, a PVC will be created that is managed by helm.

The following properties are supporting/optional properties that can be overridden along with the above properties:

  1. instantClientMountOption:  If nfsInstantClientHost and nfsInstantClientExportPath have been set, set the appropriate mount option if you would like the cluster to mount with an option other than the default option of nfsvers=4.2.

  2. instantClientAccessMode and instantClientStorageSize: Persistent Volume claims can request specific storage capacity size and access modes.
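
Similarly, a minimal sketch of values-oracle.yaml overrides for option 1 above (instantclient binaries shared from an NFS server). The host, export path, and size are placeholders, and the exact key nesting should follow the shipped values-oracle.yaml:

CODE
# Oracle instantclient volume mounted from an NFS server (sample values only)
nfsInstantClientHost: "nfs-server.example.com"
nfsInstantClientExportPath: "/var/nfs/instantclient"
# Optional overrides
# instantClientMountOption: "nfsvers=4.2"
# instantClientAccessMode: ReadWriteMany
# instantClientStorageSize: 10Gi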

(Mandatory for File Connector) Configure the source and target connector type and (optionally) the source and target volumes

unloadFSMount and loadFSMount (earlier unloadStorageType = FS and loadStorageType = FS):

To use the filesystem source and target connector types and configure persistent volumes using the nfsUnloadStorage options, you will need to uncomment and set the values of unloadFSMount and loadFSMount to true. If these values are set to true, a volume will need to be mounted, via persistent volume claims, inside the file-connector unload service (providing access to the source file location) and inside the load service (providing access to the target file location).

unloadHadoopMount and loadHadoopMount (earlier unloadStorageType = Hadoop and loadStorageType = Hadoop):

To use the Hadoop source and target connector types and configure persistent volumes using the nfsHadoopStorage options, you will need to uncomment and set the values of unloadHadoopMount and loadHadoopMount to true. If these values are set to true, a volume will need to be mounted to add the Hadoop configuration files, via persistent volume claims, inside the file-connector unload and load service that will provide access to the Hadoop configuration files.

These can be configured in one of the following ways that involves setting/overriding some properties in the values-file-connector.yaml configuration file:

  1. nfsUnloadStorageHost, nfsUnloadStorageExportPath, nfsLoadStorageHost, and nfsLoadStorageExportPath: Set values for these properties if the cluster needs to mount an NFS shared path from an NFS server.

Note: Installing the helm chart with these properties set will create a persistent volume on the cluster. As such, the user installing the helm chart should either be a cluster admin or should have the privileges to be able to create persistent volume on the cluster.

  2. unloadStoragePvcName and loadStoragePvcName: Set these properties if the cluster needs to bind the pods to a persistent volume claim. Note that until this PVC is bound to a backing PV, the pods will not start getting created, and as such, the cluster admin should ensure that the backing PV is either manually provisioned or dynamically provisioned based on the storage class associated with the PVC.

  3. unloadStoragePvName, unloadStorageClass, loadStoragePvName, and loadStorageClass: Set these properties if the cluster needs to bind the pods to a persistent volume with the associated storage class name. Once the helm chart installation starts, a PVC will be created that is managed by helm.

The following properties are supporting/optional properties that can be overridden along with the above properties:

  1. unloadStorageMountOption and loadStorageMountOption: If nfsUnloadStorageHost, nfsUnloadStorageExportPath, nfsLoadStorageHost, and nfsLoadStorageExportPath have been set, uncomment the line and set the appropriate mount option if you would like the cluster to mount with an option other than the default of nfsvers=4.2.

  2. unloadStorageSize, unloadStorageAccessMode, loadStorageSize, and loadStorageAccessMode: Persistent Volume claims can request specific storage capacity size and access modes.
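
For reference, a minimal sketch of values-file-connector.yaml overrides for option 1 above (filesystem connector over NFS). The hosts and export paths are placeholders, and the exact key nesting should follow the shipped values-file-connector.yaml:

CODE
# Filesystem source and target locations mounted from an NFS server (sample values only)
unloadFSMount: true
loadFSMount: true
nfsUnloadStorageHost: "nfs-server.example.com"
nfsUnloadStorageExportPath: "/var/nfs/source-files"
nfsLoadStorageHost: "nfs-server.example.com"
nfsLoadStorageExportPath: "/var/nfs/target-files"
# Optional overrides
# unloadStorageMountOption: "nfsvers=4.2"
# loadStorageMountOption: "nfsvers=4.2"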

Optionally, if you would like to use PySpark as the data writer type, you may configure it under the unload and load service property values by uncommenting the line and setting dataWriterType: pyspark.

To enable the staging push feature for the unload and load services, set skipUnloadWriters and skipLoadWriters to true. Alternatively, you can provide the format file instead of using the staging push feature; to enable this option, set userProvidedFormatFile to true.

Note: Configurations such as dataWriterType, skipLoadWriters and userProvidedFormatFile can now be configured independently for each job using the source_configs and target_configs in job configuration.

(Optional) Configure the service database volumes

A volume will need to be mounted, via persistent volume claims, inside the pods that will provide the storage for the service databases for each hyperscale compliance service. By default, a persistent volume claim, using the default storage class, will be requested on the cluster. This can be configured, for some or all services, in one of the following ways that involves setting/overriding properties in the values.yaml configuration file:

  1. [service-name].dbPvcName: Set this property if the cluster needs to bind the pods to a persistent volume claim. Note that until this PVC is bound to a backing PV, the pods will not get created, and as such, the cluster admin should ensure that the backing PV is either manually provisioned or dynamically provisioned based on the storage class associated with the PVC. The service database names default to controller-db, unload-db, masking-db, and load-db for the controller, unload, masking, and load services respectively.

  2. [service-name].databaseStorageSize: Set this property if the cluster should request a PVC with a storage size to something other than the pre-configured size.

  3. storageClassName: Set this property if the cluster should request a PVC using a specific storage class.
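
As a reference, a hedged sketch of values.yaml overrides for the service database storage. The service block names, PVC name, sizes, and storage class shown are illustrative, and the exact key nesting should follow the shipped values.yaml:

CODE
# Service database storage overrides (sample values only)
controller:
  dbPvcName: "controller-db-pvc"     # pre-created PVC to bind (hypothetical name)
  databaseStorageSize: 5Gi           # override the pre-configured size
unload:
  databaseStorageSize: 10Gi
storageClassName: "standard"         # storage class used when requesting PVCs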

(Optional) Configure the cluster node for each service

By default, pods will be scheduled on the node(s) determined by the cluster. Set a node name under the [service-name].nodeName property for the service(s) if you would like to request the cluster to schedule pods on particular node(s).

Enable Openshift specific properties

  • Set isOpenShift: true for OpenShift.

  • Set serviceAccountName: <service-account-name>. This ServiceAccount was created as described in the Define SecurityContextConstraints section above.
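
For example, in values.yaml (the ServiceAccount name shown is the placeholder used earlier):

CODE
# OpenShift-specific settings (sample values only)
isOpenShift: true
serviceAccountName: "hyperscale-sa"   # ServiceAccount created in the SCC setup above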

(Optional) Set resource requests and limits  

Some users may have default container settings as part of their Kubernetes or OpenShift infrastructure management. Sometimes, it is important to alter those settings for Hyperscale containers. You can configure resource requests and limits for each Hyperscale container like the following:

CODE
controller:
  resources:
    requests:
      memory: "256Mi"
      cpu: "100m"
    limits:
      memory: "512Mi"
      cpu: "500m"

The above example is only for the controller service. You can configure properties for the other services (load, unload, and masking) in the same way. Note that the example above includes sample values; users may need to consult their infrastructure team to decide on appropriate values.

Install the Helm Chart

Once the desired properties have been set/overridden, proceed to install the helm chart by running:

CODE
helm install hyperscale-helm <directory path of the extracted chart> -f values-[connector-type].yaml
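
For example, assuming the chart was extracted into ./hyperscale-helm and the Oracle connector is being deployed into the hyperscale-services project created earlier (omit -n to use the current project):

CODE
helm install hyperscale-helm ./hyperscale-helm -f ./hyperscale-helm/values-oracle.yaml -n hyperscale-services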

Check for the Successful Installation

After installing the helm chart and setting up the ingress controller, check the status of the helm chart and the pods using the following commands:

CODE
$ helm list
NAME              NAMESPACE   REVISION   UPDATED                                   STATUS     CHART                    APP VERSION
hyperscale-helm   default     1          2023-04-17 05:38:17.639357049 +0000 UTC   deployed   hyperscale-helm-18.0.0
CODE
$ kubectl get pods --namespace=hyperscale-services
NAME                                  READY   STATUS    RESTARTS   AGE
controller-service-65575b6458-2q9b4   1/1     Running   0          125m
load-service-5c644b9cc8-g9fs8         1/1     Running   0          125m
masking-service-7ddfd49c8f-5j2q5      1/1     Running   0          125m
proxy-5bd8d8f589-gkx8g                1/1     Running   0          125m
unload-service-55b5bd8cc8-7z95b       1/1     Running   0          125m


Configure Ingress

Hyperscale Compliance only works with HTTPS Ingress. It does not support HTTP.

Creating route

To create a route, use the OpenShift console to create a new route for the Hyperscale service.

  1. Go to Network > Route > Create Route.

  2. Provide the required route details.

  3. Click the Create button.

  4. Click the URL in the Location column to access Hyperscale.
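
If you prefer the CLI, a route can also be created with oc. The sketch below is an assumption-laden example: it assumes the Hyperscale proxy is exposed through a service named proxy on port 443 and that passthrough TLS termination is acceptable; verify the actual service name, port, and TLS mode for your deployment before using it.

CODE
# Hypothetical CLI equivalent; service name, port, and TLS mode are assumptions
oc create route passthrough hyperscale \
  --service=proxy --port=443 \
  -n hyperscale-services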

