
New features

19.0.0 release

This release supports the following features:

  • Support for AWS S3 as source and target locations for the Delimited connector
    This release supports AWS S3 as source and target locations for the Delimited connector. You can now provide an S3 bucket path as your source and target location, in addition to the already supported mounted filesystem (FS).

  • Added APIs to generate a support bundle
    This release adds APIs that let you generate a support bundle through the Hyperscale platform. The process runs asynchronously, providing a more streamlined alternative to the previous script-based method. For more information, refer to How to generate a support bundle.

This feature is available for Oracle and MSSQL connectors. For Mongo, Delimited, and Parquet connectors, the generated bundle contains information only for the controller and masking services.
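
As an illustration only, the sketch below shows how an asynchronous support-bundle API of this kind might be driven from a script. The endpoint paths, response fields, and authorization header used here are assumptions, not the documented API; refer to How to generate a support bundle and the Hyperscale Compliance API for the actual contract.

    import time
    import requests

    BASE_URL = "https://hyperscale.example.com/api"   # placeholder orchestrator address
    HEADERS = {"Authorization": "<api-key>"}           # placeholder credentials

    # Hypothetical endpoint names; check the product API reference for the real paths.
    bundle = requests.post(f"{BASE_URL}/support-bundle", headers=HEADERS).json()

    # Generation is asynchronous, so poll until the bundle is ready (or fails).
    while True:
        status = requests.get(f"{BASE_URL}/support-bundle/{bundle['id']}",
                              headers=HEADERS).json()
        if status.get("state") in ("COMPLETED", "FAILED"):
            break
        time.sleep(30)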

  • Added support to sync masking jobs with structured data
    This release brings significant improvements to the Hyperscale job sync functionality. You can now import jobs whose rulesets apply structured data formats to data columns. The import produces a connector, structuredDataFormats, and a dataset that are ready for immediate use.

  • OpenShift deployment for Hyperscale and enhanced volume management
    This release introduces the capability to deploy the Hyperscale Compliance Orchestrator on an OpenShift cluster. In addition, multiple customization options have been added so that you can use persistent volumes to suit different customer needs.

Hyperscale Compliance deployment on an OpenShift cluster is supported only for the Oracle, MSSQL, Parquet, and Delimited connectors.

18.0.0 release

This release supports the following features:

  • Support for mounted filesystems as source and target locations for the Parquet connector
    This release provides support for mounted filesystems for the Parquet connector. You can now provide mounted filesystems as your source and target locations, in addition to the already supported AWS S3 buckets.

  • Support for sharded MongoDB Atlas database
    This release provides support for sharded MongoDB databases (Atlas and on-premises) for Mongo connectors. Data in sharded collections from the source database can be masked and transferred into the sharded target database, provided identical shard keys are used.

  • Ability to provide a name for the connector
    This release enhances the connector API so that you can provide a name for the connector. For more information, refer to the Hyperscale Compliance API documentation.

17.0.0 release

This release supports the following feature:

  • Addition of Apache Spark as a data writer in the Delimited Files connector
    The Delimited Files connector now includes PySpark (Apache Spark) as an additional option for splitting large files. You can select the backend data writer that works best for your use case. To learn more about these choices, refer to Delimited Files Connector.

16.0.0 release

This release supports the following feature:

  • Introduction of the Parquet connector
    This release introduces a Parquet connector that can be used to mask large Parquet files available in AWS S3 buckets. The connector splits the large files into smaller chunks, passes them to the masking service, and joins them back at the target location.

15.0.0 release

This release supports the following feature:

  • This release adds an option to configure whether empty values of Oracle BLOB/CLOB columns are loaded as NULL or as an empty string. A new property, load_empty_lobs_as_null, has been added under the job's target_config to control this, and it can be configured at the job level. The default value of this property is false (that is, load as an empty string).
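
    A minimal sketch of where this property might sit in a job payload. Apart from load_empty_lobs_as_null and target_config, which are named above, the surrounding fields and the endpoint are illustrative assumptions; refer to the Hyperscale Compliance API for the full job schema.

        import requests

        job_payload = {
            "name": "oracle_masking_job",          # illustrative field, not the full schema
            "target_config": {
                "load_empty_lobs_as_null": True    # default is false (load as empty string)
            },
        }
        # Hypothetical endpoint path; see the Hyperscale Compliance API for the real one.
        requests.post("https://hyperscale.example.com/api/jobs",
                      json=job_payload,
                      headers={"Authorization": "<api-key>"})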

14.0.0 release

This release supports the following features:

  • Oracle: Look up references of a table recursively
    This release improves the pre-load process to search for table references recursively until the final reference is found.

  • Support for masking structured data (XML/JSON) embedded in a database column
    This release introduces support for masking field values in XML/JSON data stored as CLOB in a database column. This has been achieved by adding a new entity called structured-data-format to Hyperscale.

13.0.0 release

This release supports the following features:

  • Job Cancellation - Phase 5
    This release provides an end-to-end job cancellation feature, allowing you to cancel any Hyperscale Job Execution. All running Unload, Masking, Load, and Post Load Task processes will be canceled.

  • Introduction of MongoDB Connector
    This connector enhances data security by masking extensive MongoDB collections while keeping the data usable. It offers export capabilities that let you split collections based on your preferences; data is masked through the dedicated masking service and then imported into the designated target collection.

  • This release introduces a separate artifact known as the MongoDB Profiler that can be downloaded from the same location as Hyperscale. It is an optional tool designed for profiling MongoDB collection data, generating an inventory of sensitive columns, and submitting the payload to the dataset API. The Profiler artifact includes a README file that provides detailed usage instructions. For information on the format of the dataset API payload, refer to the DataSets API section in this document.

  • New execution endpoint
    This release adds a new API GET endpoint (/executions/summary) to the existing JobExecution API to get a summarized overview of all the executions of a particular Hyperscale job.
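
    A minimal call sketch for this endpoint, assuming a placeholder host and API key; any query parameters for narrowing the result to a particular job are described in the JobExecution API and are not shown here.

        import requests

        BASE_URL = "https://hyperscale.example.com/api"   # placeholder orchestrator address

        # Summarized overview of all executions of a Hyperscale job.
        summary = requests.get(f"{BASE_URL}/executions/summary",
                               headers={"Authorization": "<api-key>"}).json()
        print(summary)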

12.0.0 release

This release supports the following features:

  • Job cancellation - Phase 4
    This release adds the capability to cancel a Hyperscale Job Execution once the unload task of the execution has finished. This results in the cancellation of all ongoing masking and load task processes.

  • Introduction of Delimited Files Connector
    This release introduces a Delimited Files connector that can be used to mask large delimited flat files (with a single-character delimiter) available on an NFS location. The connector splits the large files into user-provided chunks, passes them to the masking service, and joins them back together at the target location.

  • This release skips pre-load and post-load processes for empty tables, improving performance in scenarios where datasets contain empty tables.

11.0.0 release

This release supports the following features:

  • Job cancellation - Phase 3
    With this release, you can cancel an MSSQL Hyperscale Job Execution while the load task of the execution is running.

  • Improvements in the Hyperscale job sync feature

    • This release introduces a new API endpoint, PUT /import/{datasetId}, to update the existing dataset and connector on the Hyperscale Compliance Orchestrator with a refreshed ruleset from the Continuous Compliance Engine. A minimal call sketch follows the note below.

    • This release provides a utility script to automate the process of creating or updating a dataset and connector at Hyperscale by exporting a masking job from the Continuous Compliance engine.

For more details, refer to How to Sync a Hyperscale Job documentation.
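
A minimal sketch of the refresh call named above, assuming a placeholder host and API key; the payload shape is an assumption, and the utility script mentioned above (or the Hyperscale Compliance API documentation) defines the exact import format.

    import requests

    BASE_URL = "https://hyperscale.example.com/api"   # placeholder orchestrator address
    dataset_id = 1                                     # ID of the existing Hyperscale dataset

    # Refreshed ruleset exported from the Continuous Compliance engine (content omitted here).
    refreshed_ruleset = {}

    # Update the existing dataset and connector with the refreshed ruleset.
    requests.put(f"{BASE_URL}/import/{dataset_id}",
                 json=refreshed_ruleset,
                 headers={"Authorization": "<api-key>"})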

  • Oracle NLS Support for Japanese character set
    This release adds Oracle NLS Support for the Japanese character set.

10.0.0 release

This release supports the following feature:

  • Job cancellation - Phase 2
    With this release, you can cancel an Oracle Hyperscale Job Execution while the Load task of the execution is running. This feature is not available for MSSQL connectors.

9.0.0 release

This release supports the following feature:

  • Job cancellation - Phase 1
    With this release, you can cancel a Hyperscale Job Execution while the Post Load task of the execution is running. For more details, refer to the How to Cancel a Hyperscale Job documentation.

8.0.0 release

This release supports the following features:

  • Support for Kubernetes deployment
    This release introduces the capability to deploy the Hyperscale Compliance Orchestrator on a single-node Kubernetes cluster by using Helm charts. For more details, refer to the Installation and setup (Kubernetes) documentation.

  • Enhanced post-load task status
    This release provides you with comprehensive visibility into the progress status of the post-load task (for example, the ongoing processing of constraints, indexes, and triggers) using the execution API endpoints. For more details, refer to the How to Setup a Hyperscale Compliance Job documentation.

  • Oracle connector - degree of parallelism for recreating indexes
    This release provides the ability to specify and configure the degree of parallelism (DOP) per Oracle job for recreating indexes in the post-load process. Previously, the recreate-index DDL used only the default degree of parallelism set by Oracle; you can now specify a custom value to improve the performance of the index recreation process. For more details, refer to the Hyperscale Compliance API documentation.

  • Introduction of semantic versioning (Major.Minor.Patch)
    With this release, Hyperscale introduced support for Kubernetes deployment through Helm charts. Because Helm charts use three-part semantic versioning, from release 8.0.0 onwards Hyperscale also follows three-part semantic versioning instead of the previous four-part versioning.

7.0.0.0 release

This release supports the following features:

  • Performance improvement
    This release introduces changes aimed at enhancing the performance of the masking task within a Hyperscale job, resulting in improved overall job performance. The following key changes have been implemented:

    • Changes in the Masking Service to increase Compliance Engine utilization by Hyperscale.

    • The Masking Service no longer processes tables with 0 rows.

  • Oracle
    This release supports tables with subpartition indexes during load.

6.0.0.0 release

This release supports the following features:

  • New file based endpoints

    • file-download: This release introduces a new API endpoint to download the execution and dataset responses. For more information, refer to the Hyperscale Compliance API documentation.

    • file-upload: This release introduces a new API endpoint to upload a file, which can currently be used to create or update a dataset through the POST /data-sets/file-upload and PUT /data-sets/file-upload/{dataSetId} endpoints. For more information, refer to the Hyperscale Compliance API documentation.
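
    For illustration, a sketch of how these endpoints might be called; the multipart field name, file names, and response fields are assumptions, so refer to the Hyperscale Compliance API documentation for the exact contract.

        import requests

        BASE_URL = "https://hyperscale.example.com/api"   # placeholder orchestrator address
        HEADERS = {"Authorization": "<api-key>"}

        # Create a dataset from an uploaded file ("file" field name is an assumption).
        with open("data_set.json", "rb") as f:
            created = requests.post(f"{BASE_URL}/data-sets/file-upload",
                                    files={"file": f}, headers=HEADERS).json()

        # Update an existing dataset from a new file.
        with open("data_set_updated.json", "rb") as f:
            requests.put(f"{BASE_URL}/data-sets/file-upload/{created['id']}",
                         files={"file": f}, headers=HEADERS)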

  • MSSQL database scalability improvements
    This release improves overall job performance by adding handling of primary key constraints.

5.0.0.1 release

5.0.0.1 is a patch release specifically aimed at addressing a critical bug. For more information, see Fixed issues.

5.0.0.0 release

This release supports the following features:

  • MS SQL connector
    This release adds the MS SQL connector implemented as separate services that include unload and load services. These connector services enable Hyperscale Compliance for MS SQL databases.

  • New execution endpoints
    This release adds a new API GET endpoint (/executions/{id}/summary) to the existing JobExecution API to get an overview of the progress of a Job Execution. In addition, the existing GET /executions/{id} endpoint has been extended with additional filters based on the task name and the status of the task's metadata object. For more information, refer to the Execution API in the Hyperscale Compliance API section.
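
    A minimal call sketch, assuming a placeholder host and API key; the filter parameter names in the second call are assumptions rather than the documented names, so see the Execution API in the Hyperscale Compliance API section.

        import requests

        BASE_URL = "https://hyperscale.example.com/api"   # placeholder orchestrator address
        HEADERS = {"Authorization": "<api-key>"}
        execution_id = 123                                 # an existing job-execution ID

        # Progress overview of a single execution.
        summary = requests.get(f"{BASE_URL}/executions/{execution_id}/summary",
                               headers=HEADERS).json()

        # Extended endpoint with filters on task name and task-metadata status
        # (parameter names below are assumptions).
        details = requests.get(f"{BASE_URL}/executions/{execution_id}",
                               params={"task_name": "Masking", "status": "SUCCEEDED"},
                               headers=HEADERS).json()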

  • Multi-column algorithm support
    With this release, multi-column algorithms can also be provided in the /data-sets endpoint. For more information, refer to the Dataset API in the Hyperscale Compliance API section. Additionally, existing Continuous Compliance jobs containing multi-column algorithm-related fields can now be imported via the /import endpoint.

4.1.0 release

This release supports the following features:

  • Capability to limit the number of connections
    This release adds the capability to limit the number of connections to the source and target databases using the new API parameters Max_concurrent_source_connection and Max_concurrent_target_connection under the new source_configs and target_configs objects, respectively. Using these properties, you can fine-tune the number of connections to match your source and target infrastructure and get better performance. For more information, refer to the Hyperscale Compliance API documentation.
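
    A minimal sketch of where these parameters might sit in a job configuration. Only the parameter and object names quoted above come from this release note (written here in lower case; confirm the exact casing in the API documentation); the values and surrounding structure are illustrative.

        # Illustrative fragment of a job payload limiting source/target connections.
        connection_limits = {
            "source_configs": {
                "max_concurrent_source_connection": 30   # cap connections to the source database
            },
            "target_configs": {
                "max_concurrent_target_connection": 30   # cap connections to the target database
            },
        }
        print(connection_limits)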

  • Increased API object limit
    This release increases the API object limit from 1000 to 10000.

4.0.0 release

This release supports the following features:

  • Hyperscale job sync
    This release adds the ability to sync a Hyperscale job with a masking job on a Continuous Compliance engine, creating the corresponding connector and dataset on the Hyperscale Compliance Orchestrator.

  • Add configuration properties through .env file
    This release adds the capability to override commonly used configuration properties through the .env file. You can now update application properties in this file before starting the application. For more information, refer to the Configuration settings section.

3.0.0.1 release

3.0.0.1 is a patch release specifically aimed at addressing critical bugs. For more information, see Fixed issues .

3.0.0.0 release

This release supports the following features:

  • Oracle connector
    This release includes the Oracle connector implemented as separate services, including unload and load services. These connector services enable Hyperscale Compliance for Oracle databases.

  • Parallel processing of tables
    This release processes all tables provided through the data-set API in parallel through the four operational stages (unload, masking, upload, and post-load) to minimize the total time it takes to mask the complete dataset.

  • Monitoring
    This release provides monitoring APIs so that you can track the progress of tables in your data set through the unload, masking, upload, and post-load phases. This API also provides a count of rows being processed through different stages.

  • Restartability
    This release includes the ability to restart a failed process.

  • Clean up
    This release supports cleaning up data from a previous job execution.

2.0.0.1 release

2.0.0.1 is a patch release specifically aimed at addressing critical bugs and has the following updates:

  • Upgraded the Spring Boot version to 2.5.12.

  • Minor view-only changes in the Swagger-based API client.

2.0.0 release

2.0.0 is the initial release of Hyperscale Compliance. Hyperscale Compliance is an API-based interface that is designed to enhance the performance of masking large datasets. It allows you to achieve faster masking results using the existing Delphix Continuous Compliance offering without adding the complexity of configuring multiple jobs.
