
New features

27.0.0 release

There are no new features in the 27.0.0 release.

26.0.0 release

This release supports the following feature:

File connector enhancements - new source and target parameters
This feature allows you to configure parameters such as writer type, Spark configuration settings, and max worker threads for each job without restarting your containers. Previously, these values were set through environment variables, and any parameter change required a container restart. From this release, you can apply different settings in parallel because the variables are job-specific rather than container-specific.
See Kubernetes and OpenShift for instructions on configuring the source and target connector types.

The existing method of setting environment variables is still supported. However, from this 26.0.0 release, if you provide values in both job configs and environment variables, job configs will take precedence.
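
For illustration, the sketch below (Python) shows what job-level source and target settings might look like in a job configuration payload. The field names shown (writer_type, max_worker_threads, spark_conf) and the payload shape are assumptions for illustration only; see the Kubernetes and OpenShift pages referenced above for the exact parameter names.

# Sketch of job-level source/target settings for the file connector.
# All field names below are illustrative assumptions, not the documented API.
job_config = {
    "source_configs": {
        "writer_type": "pyspark",         # assumed: choice of backend data writer
        "max_worker_threads": 8,          # assumed: per-job worker thread cap
        "spark_conf": {                   # assumed: per-job Spark settings
            "spark.driver.memory": "4g",
        },
    },
    "target_configs": {
        "max_worker_threads": 8,
    },
}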

25.0.0 release

This release supports the following features:

  • File connector enhancements

    • Staging push
      In version 25.0.0, we've introduced a staging push feature that allows users to bypass the unload and load processing steps by directly providing source files at a mount point mapped to the staging area. Instead of using writers to parse the source files during unload, a link to the files is created within the staging area and passed to the Continuous Compliance engine. During the load operation, the masked files are made available as links in the target location. Note that the staging push feature is restricted to delimited files.

    • Fail running jobs on container restart
      This feature ensures that any unload or load jobs running at the time of a container restart are marked as failed.

  • Oracle connector - “where clause” filter for source dataset
    You can now filter the source data to be processed by a Hyperscale job by applying a “where clause” filter condition in a job’s data set. The “where clause” condition is added in unload SQL queries when fetching the data from a source database. Refer to DataSets API for a sample request.

When applying the "where clause," it is important to note that only filtered data will appear in the target database upon job completion.
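
As an illustration, a dataset table entry carrying a "where clause" might look like the sketch below (Python). The payload shape and field names, particularly where_clause, are assumptions; refer to the DataSets API for the exact request format.

# Sketch of a DataSets API table entry with a "where clause" filter.
# Field names are illustrative assumptions; see the DataSets API for the
# documented request format.
dataset_table = {
    "schema_name": "HR",
    "table_name": "EMPLOYEES",
    "unload_split": 4,
    "where_clause": "DEPARTMENT_ID = 10",   # appended to the unload SQL query
    "masking_inventory": [
        {"field_name": "FIRST_NAME", "domain_name": "FIRST_NAME",
         "algorithm_name": "FirstNameLookup"},
    ],
}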

24.0.0 release

This release supports the following features:

  • Automated handling of multiple date formats for MSSQL connector
    This release introduces a feature that automatically handles multiple date formats within a single job, removing the previous limitation that required all date/timestamp columns in a dataset to have the same date format. Also, starting from this release, you no longer need to configure any unload/load date format environment variables to mask date/timestamp columns. If you have the following properties set in the docker-compose.yaml / values.yaml files, they can be safely removed from the unload and load services:

    • SPARK_TIMESTAMP_FORMAT 

    • SPARK_DATE_FORMAT 

    • SPARK_SMALL_DATE_TIMESTAMP_FORMAT 

  • AWS S3 as a staging area
    This release introduces the ability to use AWS S3 buckets as a staging area for Hyperscale Compliance. This feature is only available in Kubernetes deployment mode. For more information, refer to Configure AWS S3 bucket as staging area and Commonly used properties.

  • Automatic calculation of Unload Split in Oracle Connector
    Automatic calculation of the unload split for an Oracle data source eliminates the need to manually provide the number of splits, saving time and making the product more user-friendly and scalable. See Oracle - Automatic calculation of Unload Split for more details.

  • Resource Requests and Limits for Kubernetes deployment
    Users can now specify resource requests and limits for Kubernetes deployments. For more information, see Set Resource Requests and Limits.

  • /mount-filesystems API removal
    With this release, /mount-filesystems API has been removed. A single MountFileSystem is now set up using the configuration parameters during application startup, deleting all the previously existing instances of the MountFileSystem. New read-only endpoints are available to get/validate the configured staging area details. For more information, refer to Commonly used properties and How to upgrade the Hyperscale Compliance Orchestrator.

  • Application startup will fail if the required properties related to the NFS server/AWS S3 bucket are not specified in the corresponding yaml/.env file to configure the mount point. Controller-service logs will indicate an error message like the one below:

ERROR o.s.boot.SpringApplication.reportFailure - Application run failed
com.delphix.hyperscale.common.exception.UserException: Specify all of the following configurations: 'NFS_STORAGE_HOST', 'NFS_STORAGE_EXPORT_PATH', 'NFS_STORAGE_MOUNT_TYPE'.
  • In case of an upgrade:

    • Existing MountFileSystem information will be deleted on application startup. Take a backup of the mount information beforehand, if needed.

    • The APPLICATION_NAME property value should be the same as the mount name used for job executions.

    • Upgrading Docker deployments requires re-mounting the NFS path. Refer to Upgrading the Hyperscale Compliance Orchestrator (Docker Compose) for details.

23.0.0 release

This release supports the following features:

  • File connector
    This release unifies the Delimited and Parquet connectors into a single, versatile file connector. All existing capabilities of both connectors are preserved. This consolidation eliminates the need for separate deployments for delimited and parquet file formats. The upgrade path for Parquet/Delimited connectors can be found here.

  • Hadoop Support through File connector
    This release allows you to seamlessly integrate Hadoop File Systems as both a data source and a destination.

  • Automated handling of multiple date formats for Oracle connector
    This release introduces a feature that automatically handles multiple date formats within a single job, removing the previous limitation that required all date/timestamp columns in a dataset to have the same date format. Also, starting with this release, you no longer need to configure any unload/load date format environment variables to mask date/timestamp columns. If you have the following properties set in the docker-compose.yaml / values.yaml files, they can be safely removed:

    • Unload service - JDBC_DATE_TIMESTAMP_FORMAT

    • Load service - SQLLDR_DATE_TIMESTAMP_FORMAT

  • Performance improvement in MS SQL connector

    • Unload
      This release improves the performance of unloading data from the source database when using filter keys of types date, datetime, datetime2, smalldatetime, or time.

    • Post load

      • This release enhances the performance of the post-load process. While enabling constraints and indexes, the creation of CLUSTERED and NON_CLUSTERED indexes in tables is optimized by ensuring the correct order.

      • This release introduces connection pooling for the target database. Hyperscale Compliance now maintains a connection pool for the post-load process with a default pool size of 15. Refer to the Configuration Settings page to customize the pool size property.

22.0.0 release

This release supports the following features:

  • Automated handling of MSSQL partitioned indexes
    This release automates the handling of MSSQL partitioned indexes (clustered and non-clustered), ensuring that partitioned indexes are automatically dropped before the load starts and created again once data is loaded back into the target database. Dropping partitioned indexes (clustered and non-clustered) before the load process starts also has a positive impact on overall Hyperscale job performance.

  • Handling of WARNING status for Continuous Compliance Jobs
    Continuous Compliance now includes a new Job Execution Status: WARNING. Hyperscale Compliance has been updated to handle this new status. The behavior of Hyperscale jobs whose Continuous Compliance jobs finish in WARNING status is controlled by a newly introduced flag, consider_continuous_compliance_warning_event_as, which has been added to the Hyperscale job configuration. For configuration details and examples, refer to the Jobs API section on the How to setup a Hyperscale Compliance job page.
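
For example, a job configuration might set the flag as in the sketch below (Python). The value shown ("SUCCESS") is an assumption; consult the Jobs API section for the exact field values and semantics.

# Sketch: controlling how Hyperscale treats Continuous Compliance
# executions that finish with WARNING status.
job_config = {
    "name": "masking_job_1",
    # Assumed value: treat WARNING executions as successful instead of failing.
    "consider_continuous_compliance_warning_event_as": "SUCCESS",
}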

21.0.0 release

This release supports the following features:

  • Automated handling of Oracle BLOB/CLOB Columns
    This release automates the handling of Oracle BLOB/CLOB columns, ensuring that null or empty string values are managed seamlessly. With this update, user intervention is no longer required to determine how a CLOB or BLOB value should be loaded when null or empty. The target database will now accurately match the source database's values, whether null or empty. Consequently, the load_empty_lobs_as_null property under target_config in the Job API has been removed.

  • Automated configuration of mount filesystem
    This release introduces new configuration settings that will enable the automatic configuration of the mount filesystem during application startup. With the availability of these new configuration parameters, the /mount-filesystems API is deprecated as of this release and will be removed in future releases. For more information, refer to API Flow to Setup a Hyperscale Compliance Job and Commonly Used Properties.

  • MongoDB connector enhancements

    • The MongoDB Hyperscale connector now supports Reduced Privilege Operations. This enhancement eliminates the clusterAdmin and clusterMonitor privileges requirement when the drop_collection parameter is set to “No” on the target datasets (see the sketch at the end of this section).

  • Deployment platform certifications

    • AWS EKS - Parquet and Delimited connectors

    • Openshift - MongoDB connector
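
As referenced above, a minimal sketch (Python) of a MongoDB target dataset entry that opts into reduced privileges by disabling collection drops. The field names and placement are illustrative assumptions; see the MongoDB connector documentation for the exact dataset format.

# Sketch: MongoDB target dataset with drop_collection disabled, which
# removes the clusterAdmin/clusterMonitor privilege requirement.
# Field names and placement are illustrative assumptions.
target_dataset = {
    "database_name": "claims_db",
    "collection_name": "patients",
    "drop_collection": "No",   # keep the target collection; reduced privileges
}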

20.0.0 release

This release supports the following features:

  • MongoDB connector enhancements

    • Job cancellation - This release introduces an end-to-end job cancellation feature for the MongoDB connector, allowing you to cancel any Hyperscale job execution. All active unload, masking, and load tasks will be canceled.

    • Support bundle API - The MongoDB connector now supports generating a support bundle through APIs.

This feature is available for Oracle, MSSQL, and MongoDB connectors. For the Delimited and Parquet connectors, the generated bundle will have information for only controller and masking services.

  • Deployment platform certifications

    • AWS EKS - Oracle and MSSQL connector

19.0.0 release

This release supports the following features:

  • Support for AWS S3 as source and target locations for Delimited connector
    This release supports AWS S3 as source and target locations for the Delimited connector. You can now provide an S3 bucket path as your source and target locations, in addition to the already supported mounted filesystem (FS), if required.

  • Added APIs to generate support-bundle
    This release adds the ability to generate a support bundle through an API. The process operates asynchronously, delivering a more streamlined solution compared to the previous script-based method. For more information, refer to How to generate a support bundle.

This feature is available for Oracle and MSSQL connectors. For Mongo, Delimited, and Parquet connectors, the generated bundle will have information for only controller and masking services.

  • Added support to sync masking jobs with structured data
    This release brings significant improvements to the Hyperscale job sync functionality. You can now import jobs with rulesets containing structured data applied to data columns. As a result, you will receive a connector, structuredDataFormats, and a dataset ready for immediate use.

  • OpenShift deployment for Hyperscale and Enhanced volume management
    This release introduces the capability to deploy the Hyperscale Compliance Orchestrator on an OpenShift cluster. In addition, multiple customization options have been added to allow you to use persistent volumes according to different customer needs.

Hyperscale Compliance deployment on an OpenShift cluster is supported only for the Oracle, MSSQL, Parquet, and Delimited connectors.

18.0.0 release

This release supports the following features:

  • Support for mounted filesystems as source and target locations for Parquet connector
    This release provides support for mounted filesystems for the Parquet connector. You can now provide mounted filesystems as your source and target locations, in addition to the already supported AWS S3 buckets, if you wish to do so.

  • Support for sharded MongoDB Atlas database
    This release provides support for sharded MongoDB databases (Atlas and on-prem) for the Mongo connector. Sharded collections from the source database can be masked and transferred into the sharded target database, provided identical shard keys are used.

  • Ability to provide a name for the connector
    This release enhances the connector API to let you provide a name for the connector. For more information, refer to the Hyperscale Compliance API.

17.0.0 release

This release supports the following feature:

  • Addition of Apache Spark as data writer in Delimited Files connector
    The Delimited Files connector now comes with PySpark (Apache Spark) as an additional option for splitting large files. You can now select the backend data writer that works for your use case. To know more about these choices, refer to Delimited Files Connector.

16.0.0 release

This release supports the following feature:

  • Introduction of the Parquet connector
    This release introduces a Parquet connector that can be used to mask large Parquet files available on AWS S3 buckets. The connector splits the large files into smaller chunks, passes them to the masking service, and joins them back on the target location.

15.0.0 release

This release supports the following feature:

  • This release adds an option to configure whether empty values of an Oracle BLOB/CLOB column are loaded as null or as an empty string. A new property, load_empty_lobs_as_null, has been added under the job's target_config to configure this behavior. It can be configured at the job level. The default value of this property is false (i.e., load as an empty string).
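
For example, a job payload might configure the property as in the sketch below (Python). The surrounding fields are assumptions; the property name and default come from this release.

# Sketch: configuring empty BLOB/CLOB handling at the job level.
job_payload = {
    "name": "oracle_masking_job",
    "target_config": {
        # True  -> load empty LOB values as null
        # False -> load as empty string (the default)
        "load_empty_lobs_as_null": True,
    },
}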

14.0.0 release

This release supports the following features:

  • Oracle: Lookup references of a table recursively
    This release improves the pre-load process: it now searches for table references recursively until the final reference is found.

  • Support for masking structured data (XML/JSON) embedded in a database column
    This release introduces support for masking field values in XML/JSON data stored as a CLOB in a database column. This has been achieved with the addition of a new entity, structured-data-format, to Hyperscale.

13.0.0 release

This release supports the following features:

  • Job Cancellation - Phase 5
    This release provides an end-to-end Job Cancellation feature, allowing you to cancel any Hyperscale Job Execution. All running processes of the Unload, Masking, Load, and Post Load tasks will be canceled.

  • Introduction of MongoDB Connector
    This connector enhances data security by seamlessly masking extensive MongoDB collections, ensuring the continued usability of data while providing robust protection for sensitive information. The connector offers enhanced export capabilities, allowing you to split collections based on your preferences. Data masking is applied through the dedicated masking service, and the masked data is imported into the designated target collection.

  • This release introduces a separate artifact known as the MongoDB Profiler that can be downloaded from the same location as Hyperscale. It is an optional tool designed for profiling MongoDB collection data, generating an inventory of sensitive columns, and submitting the payload to the dataset API. The Profiler artifact includes a README file that provides detailed usage instructions. For information on the format of the dataset API payload, refer to the DataSets API section in this document.

  • New execution endpoint
    This release adds a new API GET endpoint (/executions/summary) to the existing JobExecution API to get a summarized overview of all the executions of a particular Hyperscale Job.
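
A minimal sketch of calling the new endpoint from Python follows. The base URL, authentication header, and query parameter name are assumptions.

# Sketch: fetch a summarized overview of all executions of a Hyperscale job.
import requests

BASE_URL = "https://hyperscale.example.com/api"  # base URL is an assumption
API_KEY = "<api-key>"                            # auth scheme is an assumption

resp = requests.get(
    f"{BASE_URL}/executions/summary",
    params={"job_id": 1},                        # parameter name is an assumption
    headers={"Authorization": API_KEY},
)
resp.raise_for_status()
for summary in resp.json():
    print(summary)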

12.0.0 release

This release supports the following features:

  • Job cancellation - Phase 4
    This release adds the capability for users to cancel a Hyperscale Job Execution once the unload task of the execution has finished. This results in the cancellation of all ongoing processes related to the masking and load tasks.

  • Introduction of Delimited Files Connector
    This release introduces a delimited files connector that can be used to mask large delimited flat files (with a delimiter of single-character length) available on an NFS location. The connector splits the large files into user-provided chunks, passes them to the masking service, and joins them back at the target location.

  • In this release, pre-load and post-load processes are skipped for empty tables, leading to enhanced performance in scenarios where datasets contain empty tables.

11.0.0 release

This release supports the following features:

  • Job cancellation - Phase 3
    With this release, you can cancel the MSSQL Hyperscale Job Execution while the load task of the execution is running.

  • Improvements in the Hyperscale job sync feature

    • This release introduces a new API endpoint, PUT /import/{datasetId}, to update the existing dataset and connector on the Hyperscale Compliance Orchestrator with a refreshed ruleset from the Continuous Compliance Engine (see the sketch at the end of this section).

    • This release provides a utility script to automate the process of creating/updating a dataset and connector at Hyperscale by exporting a masking job from the Continuous Compliance engine.

For more details, refer to How to Sync a Hyperscale Job documentation.

  • Oracle NLS Support for Japanese character set
    This release adds Oracle NLS Support for the Japanese character set.
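
As referenced earlier in this section, a sketch (Python) of refreshing an existing dataset with the PUT /import/{datasetId} endpoint. The base URL, authentication header, export-bundle format, and upload mechanics are assumptions; the utility script mentioned above automates these steps.

# Sketch: refresh an existing Hyperscale dataset/connector from a masking
# job exported from the Continuous Compliance Engine.
import requests

dataset_id = 5  # ID of the existing dataset to refresh

with open("exported_masking_job.json", "rb") as f:   # export format is an assumption
    resp = requests.put(
        f"https://hyperscale.example.com/api/import/{dataset_id}",
        headers={"Authorization": "<api-key>"},      # auth scheme is an assumption
        files={"file": f},                           # multipart upload is an assumption
    )
resp.raise_for_status()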

10.0.0 release

This release supports the following feature:

  • Job cancellation - Phase 2
    With this release, you can cancel an Oracle Hyperscale Job Execution while the Load task of the execution is running. This feature is not available for MSSQL connectors.

9.0.0 release

This release supports the following feature:

  • Job cancellation - Phase 1
    With this release, you can cancel a Hyperscale Job Execution while the Post Load task of the execution is running. For more details, refer to the How to Cancel a Hyperscale Job documentation.

8.0.0 release

This release supports the following features:

  • Support for Kubernetes deployment
    This release introduces the capability to deploy the Hyperscale Compliance Orchestrator on a single-node Kubernetes cluster using Helm charts. For more details, refer to the Installation and setup (Kubernetes) documentation.

  • Enhanced post-load task status
    This release provides you with comprehensive visibility into the progress status of the post-load task (for example, the ongoing processing of constraints, indexes, and triggers) using the execution API endpoints. For more details, refer to the How to Setup a Hyperscale Compliance Job documentation.

  • Oracle connector - degree of parallelism for recreating indexes
    This release provides the ability to specify and configure the degree of parallelism (DOP) per Oracle job to recreate indexes in the post-load process. Previously, the recreate-index DDL used only the default degree of parallelism set by Oracle; you can now specify a custom value to enhance the performance of the index recreation process. For more details, refer to the Hyperscale Compliance API documentation.

  • Introduction of semantic versioning (Major.Minor.Patch)
    With this release, Hyperscale introduced support for Kubernetes deployment through Helm charts. Because Helm charts use three-part semantic versioning, from release 8.0.0 onwards Hyperscale also follows three-part semantic versioning instead of the previous four-part scheme.

7.0.0.0 release

This release supports the following features:

  • Performance improvement
    This release introduces changes aimed at enhancing the performance of the masking task within a Hyperscale job, resulting in improved overall job performance. The following key changes have been implemented:

    • Changes in Masking Service to increase the Compliance Engine utilization by Hyperscale.

    • The Masking Service no longer processes tables with 0 rows.

  • Oracle
    This release supports tables with subpartition indexes during load.

6.0.0.0 release

This release supports the following features:

  • New file based endpoints

    • file-download: This release introduces a new API endpoint to download the execution and dataset responses. For more information, refer to the Hyperscale Compliance API documentation.

    • file-upload: This release introduces a new API endpoint to upload a file, which can currently be used to create or update a dataset using the POST /data-sets/file-upload and PUT /data-sets/file-upload/{dataSetId} endpoints (see the sketch after this list). For more information, refer to the Hyperscale Compliance API documentation.

  • MSSQL database scalability improvements
    This release improves the overall job performance by adding the handling of primary key constraints.
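
As referenced above, a sketch (Python) of the file-upload flow: create a dataset from an uploaded file, then update it from a newer file. The base URL, multipart form field name, and response shape are assumptions.

# Sketch: create a dataset via file upload, then update it.
import requests

BASE_URL = "https://hyperscale.example.com/api"  # base URL is an assumption

with open("data_set.json", "rb") as f:
    created = requests.post(f"{BASE_URL}/data-sets/file-upload",
                            files={"file": f})   # form field name is an assumption
created.raise_for_status()
data_set_id = created.json()["id"]               # response shape is an assumption

with open("data_set_v2.json", "rb") as f:
    updated = requests.put(f"{BASE_URL}/data-sets/file-upload/{data_set_id}",
                           files={"file": f})
updated.raise_for_status()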

5.0.0.1 release

5.0.0.1 is a patch release specifically aimed at addressing a critical bug. For more information, see Fixed issues.

5.0.0.0 release

This release supports the following features:

  • MS SQL connector
    This release adds the MS SQL connector implemented as separate services that include unload and load services. These connector services enable Hyperscale Compliance for MS SQL databases.

  • New execution endpoints
    This release adds a new API GET endpoint (/executions/{id}/summary) to the existing JobExecution API to get an overview of the progress of a Job Execution. In addition, the existing GET /executions/{id} endpoint has been extended with additional filters based on the task name and the status of the task's metadata object (see the sketch after this list). For more information, refer to the Execution API in the Hyperscale Compliance API section.

  • Multi-column algorithm support
    With this release, Multi-Column Algorithms can also be provided in the /data-sets endpoint. For more information, refer to the Dataset API in the Hyperscale Compliance API section. Additionally, existing Continuous Compliance jobs containing multi-column algorithm-related fields can now be imported via the /import endpoint.
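
As referenced above, a sketch (Python) of querying a single execution with the new filters. The base URL, execution ID, and filter parameter names are assumptions.

# Sketch: fetch one execution, filtered by task name and task status.
import requests

resp = requests.get(
    "https://hyperscale.example.com/api/executions/42",  # URL and ID are illustrative
    params={"task_name": "LOAD", "status": "FAILED"},    # filter names are assumptions
)
resp.raise_for_status()
print(resp.json())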

4.1.0 release

This release supports the following features:

  • Capability to limit the number of connections
    This release adds the capability to limit the number of connections to the source and target databases using the new API parameters Max_concurrent_source_connection and Max_concurrent_target_connection under the new source_configs and target_configs respectively (see the sketch after this list). Using these properties, you can fine-tune the number of connections according to your source and target infrastructure for better performance. For more information, refer to the Hyperscale Compliance API documentation.

  • Increased API object limit
    This release increases the API object limit from 1000 to 10000.
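
As referenced above, a sketch (Python) of a job payload that caps the concurrent database connections. The two parameter names come from this release; the surrounding payload shape is an assumption.

# Sketch: limit concurrent connections to the source and target databases.
job_payload = {
    "name": "oracle_job",
    "source_configs": {
        "max_concurrent_source_connection": 25,   # cap source DB connections
    },
    "target_configs": {
        "max_concurrent_target_connection": 25,   # cap target DB connections
    },
}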

4.0.0 release

This release supports the following features:

  • Hyperscale job sync
    This release adds the ability to sync Hyperscale jobs with the Continuous Compliance Engine by importing a masking job to create the corresponding connector and dataset on the Hyperscale Compliance Orchestrator.

  • Add configuration properties through .env file
    This release adds the capability to override commonly used configuration properties through the .env file. You can now update application properties in this file before starting the application. For more information, refer to the Configuration settings section.

3.0.0.1 release

3.0.0.1 is a patch release specifically aimed at addressing critical bugs. For more information, see Fixed issues.

3.0.0.0 release

This release supports the following features:

  • Oracle connector
    This release includes the Oracle connector implemented as separate services, including unload and load services. These connector services enable Hyperscale Compliance for Oracle databases.

  • Parallel processing of tables
    This release processes all tables provided through the data-set API in parallel through the four operational stages (unload, masking, upload, and post-load) to minimize the total time it takes to mask the complete data set.

  • Monitoring
    This release provides monitoring APIs so that you can track the progress of tables in your data set through the unload, masking, upload, and post-load phases. This API also provides a count of rows being processed through different stages.

  • Restartability
    This release includes the ability to restart a failed process.

  • Clean up
    This release supports cleaning up data from previous job executions.

2.0.0.1 release

2.0.0.1 is a patch release specifically aimed at addressing critical bugs and has the following updates:

  • Upgraded the Spring Boot version to 2.5.12.

  • Minor view-only changes in the swagger-based API client.

2.0.0 release

2.0.0 is the initial release of Hyperscale Compliance. Hyperscale Compliance is an API-based interface that is designed to enhance the performance of masking large datasets. It allows you to achieve faster masking results using the existing Delphix Continuous Compliance offering without adding the complexity of configuring multiple jobs.
