Data Source Support
Oracle Connector
Oracle Database (commonly referred to as Oracle RDBMS or simply as Oracle) is a multi-model database management system produced and marketed by Oracle Corporation. The following table lists the versions that have been tested in the lab setup:
Platforms | Version |
---|---|
Linux | |
The user on the source database must have SELECT privileges.
The user on the target database must have ALL privileges and the SELECT_CATALOG_ROLE.
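For illustration, the grants could be applied as in the following sketch using the python-oracledb driver; the connection details, user names, and the example table are assumptions for this sketch, not product requirements:

```python
# Hypothetical illustration of the privileges described above; connection
# details, user names, and the example table are assumptions. The first
# grant runs on the source database, the last two on the target database.
import oracledb

def grant(dsn: str, statements: list[str]) -> None:
    # DDL statements autocommit in Oracle, so no explicit commit is needed.
    with oracledb.connect(user="system", password="***", dsn=dsn) as conn:
        cur = conn.cursor()
        for stmt in statements:
            cur.execute(stmt)

# Source database: SELECT on each table to be masked.
grant("source-host/SRCPDB", ["GRANT SELECT ON hr.employees TO hyperscale_user"])

# Target database: all privileges plus SELECT_CATALOG_ROLE.
grant("target-host/TGTPDB", [
    "GRANT ALL PRIVILEGES TO hyperscale_user",
    "GRANT SELECT_CATALOG_ROLE TO hyperscale_user",
])
```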
Supported data types
The following are the different data types that are tested in our lab setup:
VARCHAR
VARCHAR2
NUMBER
FLOAT
DATE
TIMESTAMP (default)
CLOB
BLOB (with text)
Hyperscale Compliance does not support the following special characters in a database column name: `` ~!@#$%^&*()\"?:;,/\`+=[]{}|<>'-. ``
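For illustration, a hypothetical pre-check (not part of the product) that flags column names containing these characters could look like the following sketch; the function name and regex are assumptions:

```python
import re

# Characters Hyperscale Compliance does not support in column names,
# taken from the list above.
UNSUPPORTED = re.compile(r"""[~!@#$%^&*()\\"?:;,/`+=\[\]{}|<>'.\-]""")

def is_supported_column_name(name: str) -> bool:
    """Hypothetical helper: True if the name contains none of the characters."""
    return UNSUPPORTED.search(name) is None

print(is_supported_column_name("FIRST_NAME"))  # True
print(is_supported_column_name("USER-EMAIL"))  # False: contains '-'
```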
Property values
For default values, see Configuration settings.
MS SQL Connector
Supported versions
Microsoft SQL Server 2019
Supported data types
The following are the different data types that are tested in our lab setup:
VARCHAR
CHAR
DATETIME
INT
TEXT
XML (only unload/load)
VARBINARY (only unload/load)
SMALLINT
SMALLMONEY
MONEY
BIGINT
NVARCHAR
TINYINT
NUMERIC(X,Y)
DECIMAL(X,Y)
FLOAT
NCHAR
BIT
NTEXT
Property values
For default values, see Configuration settings.
Known Limitations
If the masked data produced by the applied algorithm exceeds the maximum value range of the corresponding target table column's datatype, the job execution will fail in the load service. For example, if an algorithm produces the value 300 for a TINYINT column (maximum 255), the load fails.
Schema, table, and column names containing special characters are not supported.
Masking of columns with the VARBINARY datatype is not supported.
Hyperscale Compliance can mask a maximum of 1,000 tables in a single job.
Delimited Files Connector
The connector can be used to mask large delimited files. The delimited unload service splits the large files into smaller chunks and passes them to the masking service. After masking is completed, the files are sent to the load service, which joins the split files back together (the end user can also disable the join operation). A conceptual sketch of the split and join steps follows.
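The sketch below is a conceptual illustration only, not the product's implementation (which uses PyArrow, as noted under Known Limitations); the function names, chunk size, and header-per-chunk choice are assumptions:

```python
# Conceptual sketch: split a large delimited file into header-preserving
# chunks, then optionally join the masked chunks back into one file.
import itertools

def split_file(path: str, rows_per_chunk: int = 100_000) -> list[str]:
    chunk_paths = []
    with open(path) as f:
        header = f.readline()
        for i in itertools.count():
            rows = list(itertools.islice(f, rows_per_chunk))
            if not rows:
                break
            chunk_path = f"{path}.part{i}"
            with open(chunk_path, "w") as out:
                out.write(header)      # each chunk keeps the header row
                out.writelines(rows)
            chunk_paths.append(chunk_path)
    return chunk_paths

def join_files(chunk_paths: list[str], out_path: str) -> None:
    with open(out_path, "w") as out:
        for i, p in enumerate(chunk_paths):
            with open(p) as f:
                header = f.readline()
                if i == 0:
                    out.write(header)  # write the header only once
                out.writelines(f)
```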
Pre-requisites
The source and target (NFS) locations must be mounted onto the Docker containers of the unload and load services. Note that the container-side paths are what must be used when creating the connector-infos via the controller.
```yaml
# As an example
unload-service:
  image: delphix-delimited-unload-service-app:${VERSION}
  ...
  volumes:
    ...
    - /path/to/nfs/mounted/source1/files:/mnt/source1
    - /path/to/nfs/mounted/source2/files:/mnt/source2
...
load-service:
  image: delphix-delimited-load-service-app:${VERSION}
  ...
  volumes:
    ...
    - /path/to/nfs/mounted/target1/files:/mnt/target1
    - /path/to/nfs/mounted/target2/files:/mnt/target2
```
Property values
Property | Value |
---|---|
SOURCE_KEY_FIELD_NAMES | unique_source_files_identifier |
LOAD_SERVICE_REQUIRE_POST_LOAD | false |
For default values, see Configuration settings.
Supported data types
The following are the different data types that are tested in our lab setup:
String/Text
Double
Columns with values such as 36377974237282886994505 are inherently converted by PyArrow to type double, with the value 3.637e22. To avoid any loss of data, the Delimited Files Connector converts all double interpretations to strings. In this case, 36377974237282886994505 is converted to "36377974237282886994505" (see the sketch after this list).
Int64
Columns with values such as
00009435304391722556805
get inferred as type Int64, but the conversion fails for the above value.In order to mitigate the same, all Int64 types are converted to type String. In this case,
00009435304391722556805
will be converted to "00009435304391722556805
".
Timestamp
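A minimal PyArrow sketch of the double inference behavior described above; the column name `val` and the in-memory file are illustrative assumptions:

```python
import io
import pyarrow as pa
import pyarrow.csv as pv

data = io.BytesIO(b"val\n36377974237282886994505\n")

# Default inference: the 23-digit value does not fit in int64, so PyArrow
# falls back to double and precision is lost.
table = pv.read_csv(data)
print(table.column("val").type)  # double
print(table.column("val")[0])    # ~3.6378e+22 (precision lost)

# Forcing the column to string, as the connector does, preserves the value.
data.seek(0)
opts = pv.ConvertOptions(column_types={"val": pa.string()})
table = pv.read_csv(data, convert_options=opts)
print(table.column("val")[0])    # "36377974237282886994505"
```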
Known Limitations
The backend technology used to perform the split and join operations is PyArrow, which comes with certain limitations:
It supports only a single-character delimiter.
The end-of-record character can only be `\n`, `\r`, or `\r\n`.
The output files will have all string values quoted with double quotes (`"`) only. A short demonstration of the delimiter limitation follows.
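The sketch below demonstrates the single-character delimiter limitation in PyArrow; the sample data and column names are illustrative assumptions:

```python
import io
import pyarrow as pa
import pyarrow.csv as pv

# A single-character delimiter such as ';' is accepted.
opts = pv.ParseOptions(delimiter=";")
table = pv.read_csv(io.BytesIO(b"a;b\n1;2\n"), parse_options=opts)
print(table.num_columns)  # 2

# A multi-character delimiter is rejected by PyArrow.
try:
    bad = pv.ParseOptions(delimiter=";;")
    pv.read_csv(io.BytesIO(b"a;;b\n1;;2\n"), parse_options=bad)
except (ValueError, pa.ArrowInvalid) as exc:
    print(f"rejected: {exc}")
```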
MongoDB Connector
The connector can be used to mask large MongoDB collections. The Mongo unload service splits large collections into smaller chunks and passes them to the masking service. After masking is completed, the files are sent to the Mongo load service, which imports the masked files into the target collection.
Supported versions
Platforms | Version |
---|---|
Linux | MongoDB 4.4.x, MongoDB 5.0.x, MongoDB 6.0.x |
Pre-requisites
The MongoDB user should have the following privileges:
```
use admin
db.createUser({user: "backupadmin", pwd: "xxxxxx", roles: [{role: "backup", db: "admin"}]})
```
The Mongo unload and Mongo load service image names are to be used under unload-service and load-service. The NFS location has to be mounted onto the Docker containers for the unload and load services. The following example mounts `/mnt/hyperscale`:

```yaml
# As an example docker-compose.yaml
unload-service:
  image: delphix-mongo-unload-service-app:${VERSION}
  volumes:
    # Uncomment below lines to mount respective paths.
    - /mnt/hyperscale:/etc/hyperscale
load-service:
  image: delphix-mongo-load-service-app:${VERSION}
  volumes:
    # Uncomment below lines to mount respective paths.
    - /mnt/hyperscale:/etc/hyperscale
```
Uncomment the below lines in the `docker-compose.yaml` file under `controller > environment`:
```yaml
# uncomment below for MongoDB connector
#- SOURCE_KEY_FIELD_NAMES=database_name,collection_name
#- VALIDATE_UNLOAD_ROW_COUNT_FOR_STATUS=${VALIDATE_UNLOAD_ROW_COUNT_FOR_STATUS:-false}
#- VALIDATE_MASKED_ROW_COUNT_FOR_STATUS=${VALIDATE_MASKED_ROW_COUNT_FOR_STATUS:-false}
#- VALIDATE_LOAD_ROW_COUNT_FOR_STATUS=${VALIDATE_LOAD_ROW_COUNT_FOR_STATUS:-false}
#- DISPLAY_BYTES_INFO_IN_STATUS=${DISPLAY_BYTES_INFO_IN_STATUS:-true}
#- DISPLAY_ROW_COUNT_IN_STATUS=${DISPLAY_ROW_COUNT_IN_STATUS:-false}
```
Set the value of `LOAD_SERVICE_REQUIRE_POST_LOAD=false` inside the `.env` file:

```
# Set LOAD_SERVICE_REQUIRE_POST_LOAD=false for MongoDB Connector
LOAD_SERVICE_REQUIRE_POST_LOAD=false
```
Uncomment the below lines in the `.env` file:

```
# Uncomment below for MongoDB Connector
#VALIDATE_UNLOAD_ROW_COUNT_FOR_STATUS=false
#VALIDATE_MASKED_ROW_COUNT_FOR_STATUS=false
#VALIDATE_LOAD_ROW_COUNT_FOR_STATUS=false
#DISPLAY_BYTES_INFO_IN_STATUS=true
#DISPLAY_ROW_COUNT_IN_STATUS=false
```
Property values
The following changes are mandatory for the MongoDB Connector in the `docker-compose.yaml` and `.env` files:
Property | Value |
---|---|
SOURCE_KEY_FIELD_NAMES | database_name,collection_name |
LOAD_SERVICE_REQUIRE_POST_LOAD | false |
VALIDATE_UNLOAD_ROW_COUNT_FOR_STATUS | false |
VALIDATE_MASKED_ROW_COUNT_FOR_STATUS | false |
VALIDATE_LOAD_ROW_COUNT_FOR_STATUS | false |
DISPLAY_BYTES_INFO_IN_STATUS | true |
DISPLAY_ROW_COUNT_IN_STATUS | false |
For default values, see Configuration settings.
Known Limitations
Sharded MongoDB Atlas is not supported.
In-Place Masking is not supported.