Add a Data Store

To create a destination, the recovery point server needs one or more data stores. A data store specifies where the backup data is stored. You can add multiple data stores to an RPS.

Follow these steps:

  1. Click the resources tab.
  2. From the left pane, navigate to Destinations, and click Recovery Point Servers.

     The Destinations: Recovery Point Servers page is displayed.

  3. Perform one of the following actions:

     A list of options is displayed.

  4. Click Add a Data Store.

     The Create a Data Store page is displayed with the name of the specified recovery point server.

  5. Specify the following fields and click Save.

    Recovery Point Server

    Defines the recovery point server where the data store is created. This field is pre-filled with the recovery point server that you specified.

    Data Store Name

    Defines the name of the data store.

    Data Store Folder

    Defines the location of the folder where the data store is created. Click Browse to select the destination folder.

    Note: For both deduplication and non-deduplication data stores, the destination path should be an empty folder.

    Concurrent Active Nodes Limit to

    Specifies the maximum number of jobs that can run concurrently on the data store.

    Default Value: 4

    Accepts a value from 1 to 9999, which indicates the number of jobs that can run concurrently. When the number of running jobs reaches this limit, any additional job is placed in the queue and starts only after one of the running jobs completes. A completed job can be a finished, canceled, or failed job.

    The limit applies per job type, not per server node. For example, a value of 5 means that up to five backup jobs can run at the same time. Any backup job scheduled after the fifth waits in the queue, but you can still submit a job of another type, such as File System Catalog.

    If you enter a value greater than 16 or 32, a message is displayed to warn you about the increased demand on hardware.

    Note: The Limit to value affects only the outbound replication job, not the inbound replication job. It does not affect Restore or BMR jobs; such jobs are not placed in the queue.
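
    The following sketch illustrates the limit-and-queue behavior described above. It is an illustration only, not Arcserve UDP code; the job names and the limit value of 4 are assumptions for the example.

        # Illustration only: how a limit of 4 queues additional backup jobs.
        from collections import deque

        LIMIT = 4                # example "Limit to" value
        running_backups = []     # backup jobs currently running
        backup_queue = deque()   # backup jobs waiting for a free slot

        def submit_backup(job_name):
            # Start the backup if a slot is free; otherwise place it in the queue.
            if len(running_backups) < LIMIT:
                running_backups.append(job_name)
                print(job_name, "running")
            else:
                backup_queue.append(job_name)
                print(job_name, "queued")

        def complete_backup(job_name):
            # A finished, canceled, or failed job frees a slot for the next queued job.
            running_backups.remove(job_name)
            if backup_queue:
                submit_backup(backup_queue.popleft())

        for i in range(1, 7):
            submit_backup("backup-%d" % i)   # backup-5 and backup-6 are queued
        complete_backup("backup-1")          # backup-5 now starts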

    Enable Deduplication

    Specifies that deduplication is enabled for this data store. Arcserve UDP supports both types of deduplication: source-side deduplication and global deduplication. Source-side deduplication prevents duplicate data blocks from moving across the network from a particular agent. Global deduplication eliminates duplicate data across all client machines at the volume cluster level.
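
    The sketch below illustrates the block-level idea behind deduplication: each unique block is stored once, and a block whose hash has already been seen is skipped. It uses SHA-1 because the Hash Destination description later in this section states that Arcserve UDP generates hashes with the SHA1 algorithm; the 16 KB block size and the sample data are assumptions for the example, and this is not Arcserve UDP code.

        # Illustration only: block-level deduplication across backup sources.
        import hashlib

        BLOCK_SIZE = 16 * 1024   # example deduplication block size (16 KB)
        stored_blocks = {}       # hash -> block data (stands in for the data destination)

        def backup(data):
            # Split the data into blocks and store only blocks not seen before.
            unique = duplicate = 0
            for start in range(0, len(data), BLOCK_SIZE):
                block = data[start:start + BLOCK_SIZE]
                digest = hashlib.sha1(block).hexdigest()
                if digest in stored_blocks:
                    duplicate += 1                 # already stored: nothing new is written
                else:
                    stored_blocks[digest] = block  # first occurrence: store it once
                    unique += 1
            print("unique:", unique, "duplicate:", duplicate)

        backup(b"A" * BLOCK_SIZE * 3)   # first source: 1 unique block, 2 duplicates
        backup(b"A" * BLOCK_SIZE * 3)   # second source with the same data: 3 duplicates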

    Deduplication Block Size

    Defines the deduplication block size. The options are 4 KB, 8 KB, 16 KB, and 32 KB. The deduplication block size also affects the deduplication capacity estimation. For example, if you change the default of 16 KB to 32 KB, the estimated deduplication capacity doubles. Increasing the deduplication block size can decrease the deduplication percentage.
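
    As a worked illustration of the doubling relationship described above: the 10 TB base estimate below is a made-up figure, only the 16 KB to 32 KB doubling comes from this section, and linear scaling with block size is an assumption.

        # Illustration only: scaling a capacity estimate with the block size.
        def scaled_estimate(base_estimate_tb, base_block_kb, new_block_kb):
            # Assumes the estimate scales linearly with the block size.
            return base_estimate_tb * new_block_kb / base_block_kb

        print(scaled_estimate(10, 16, 32))   # 10 TB at 16 KB blocks -> 20.0 TB at 32 KB blocks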

    Hash Memory Allocation

    Specifies the amount of physical memory that you allocate to keep hashes. This field is pre-filled with a default value. The default value is based on the following calculation:

    If the physical memory of the RPS is 4 GB or less, the default value of Hash Memory Allocation equals the physical memory of the RPS.

    If the physical memory of the RPS is greater than 4 GB, Arcserve UDP calculates the available free memory at that time. Assume that the available free memory is X GB; Arcserve UDP then checks further conditions based on X to determine the default value, as the following example shows:

    Example: Consider an RPS with 32 GB of physical memory. Assume that the operating system and other applications use 4 GB of memory while the data store is created, so the available free memory at this time is 28 GB. The default value of Hash Memory Allocation is then 22.4 GB (28 GB * 80% = 22.4 GB).
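
    The default value can be sketched as a small calculation. The 80% factor for the larger-memory case is taken from the example above; the additional conditions that Arcserve UDP checks are not reproduced here, so treat this as an approximation only.

        # Illustration only: approximate default Hash Memory Allocation in GB.
        def default_hash_memory_gb(physical_gb, free_gb):
            if physical_gb <= 4:
                # 4 GB of physical memory or less: default equals the physical memory
                return physical_gb
            # More than 4 GB: default based on the free memory at creation time
            # (80% factor inferred from the example above)
            return free_gb * 80 / 100

        print(default_hash_memory_gb(32, 28))   # 22.4, matching the example above
        print(default_hash_memory_gb(4, 4))     # 4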

    Hash Destination is on a Solid State Drive (SSD)

    Specifies if the hash folder is on a solid state drive.

    Note: If the Hash Destination is on a Solid State Drive (SSD) option is enabled, configure the hash destination on a local SSD.

    Data Destination

    Defines the data destination folder where the actual unique data blocks are saved. Use the largest disk for this folder because it contains the original data blocks of the source.

    Note: The Data Destination path should be a blank folder.

    Index Destination

    Defines the index destination folder where the index files are stored. Choose a different disk to improve deduplication performance.

    Note: The Index Destination path should be a blank folder.

    Hash Destination

    Defines the path to store the hash database. Arcserve UDP uses the SHA1 algorithm to generate the hash for source data. The hash values are managed by the hash database. Selecting a high-speed Solid State Drive (SSD) increases the deduplication capacity and requires a lower memory allocation. For better hash performance, we recommend formatting the SSD volume with the NTFS file system and a 4 KB volume cluster size.

    Note: The Hash Destination path should be an empty folder.

    Note: You cannot specify the same path for the following four folders: Data Store folder, Data Destination, Index Destination, and Hash Destination.
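
    The folder requirements from the notes above (the four paths must differ and each destination should be an empty folder) can be pre-checked with a small sketch such as the following; the drive letters and paths are examples only, and this is not Arcserve UDP code.

        # Illustration only: validating the data store folder requirements.
        import os

        folders = {
            "Data Store Folder": r"D:\datastore",
            "Data Destination":  r"E:\dedup\data",
            "Index Destination": r"F:\dedup\index",
            "Hash Destination":  r"G:\dedup\hash",
        }

        # The four folders must not share a path.
        if len(set(folders.values())) != len(folders):
            raise ValueError("The four destination folders must be different paths")

        # Each destination path should be an empty folder.
        for name, path in folders.items():
            if os.path.isdir(path) and os.listdir(path):
                raise ValueError(name + " is not an empty folder: " + path)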

    Enable Compression

    Specifies that the data compression settings are enabled.

    Compression Type

    Specifies whether to use the standard or maximum compression type.

    Compression is often selected to decrease disk space usage, but it can also slow your backup speed because of the increased CPU usage. Based on your requirements, you can select one of the available options.

    Note: For more information, see Compression Type.
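
    The space-versus-CPU trade-off can be illustrated with a generic compressor. This sketch uses Python's zlib compression levels as a stand-in for the standard and maximum types; it is not how Arcserve UDP implements its compression.

        # Illustration only: higher compression levels save space but cost CPU time.
        import time
        import zlib

        data = b"backup block " * 200000   # repetitive sample data (about 2.6 MB)

        for label, level in (("standard-like", 1), ("maximum-like", 9)):
            start = time.perf_counter()
            compressed = zlib.compress(data, level)
            elapsed = time.perf_counter() - start
            ratio = len(compressed) / len(data)
            print(label, "ratio=%.3f" % ratio, "time=%.4fs" % elapsed)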

    Enable Encryption

    Specifies that encryption settings are enabled. When you select this option, you must specify and confirm the encryption password.

    The data store is created and is displayed in the center pane. Click the data store to view its details in the right pane.