Add Cloud Data Store
To create a destination, the recovery point server requires one or more data stores. A data store specifies where the backup data is stored. You can add multiple cloud data stores to a recovery point server.
This section provides information about how to add a cloud data store as a destination.
Notes:
- The Add Cloud Data Store feature is available for a Recovery Point Server only if the cloud account was added using the same site that was used to add the Recovery Point Server.
- Creating a cloud data store is not supported on a volume identified by a GUID or on a volume mounted without a drive letter.
- If the status of a data store shows Abnormal Blockall or Abnormal Restoreall after you create or import the data store, refresh the data store's status from the Actions drop-down list, and check whether it changes to Running. If the status still does not update to Running, contact Arcserve Support.
- Cloud Data Store is supported on the following cloud platforms:
- Arcserve Cloud Storage
- AWS
- Google Cloud Platform
- Microsoft Azure Blob Storage
- Wasabi
Follow these steps:
- On the Protect screen, do one of the following:
  - Navigate to Destinations > Recovery Point Servers. From the list of recovery point servers, click the Actions drop-down list of a specific recovery point server, and then select Add Cloud Data Store. The Add Cloud Data Store screen appears with the name of the specified recovery point server.
  - Navigate to Destinations > Data Stores. On the Data Stores page, click Add Data Store, and then select Add Cloud Data Store. The Add Cloud Data Store screen appears.
- Specify the following fields, and then click Save:
- Recovery Point Server: From the drop-down list, select the RPS as needed.
- Note: If you have chosen the Recovery Point Server view to add a cloud data store, or you have a single Recovery Point Server, the Recovery Point Server is selected by default.
- Data Store Name: Specifies the name of the data store.
- Concurrent Active Sources Limit to: Specifies the maximum number of concurrent jobs allowed on the data store.
- Default Value: 4
- Data Store folder: Refers to the location where your data will be saved. Type the local folder path or click Browse to select the desired local folder.
- Cloud Account: Select the cloud account from the drop-down list as needed.
- Note: If you have added a single cloud account to the Console, that cloud account is selected by default.
- Container Name: Refers to the name of the container that you want to use as the Cloud destination. A storage account can contain an unlimited number of containers, and a container can store an unlimited number of blobs; a container organizes a set of blobs, similar to a folder in a file system. When naming a container, make sure to follow the naming conventions. For more information, see Naming and Referencing Containers, Blobs, and Metadata.
- Note: The Container Name field appears only for the Microsoft Azure Blob cloud account.
- Region: Select the region of the cloud account as needed.
- Bucket: Refers to the bucket name that you want to use as the Cloud destination. All files and folders moved or copied to the cloud vendor are stored and organized in buckets (or containers), which group related objects together. Every object stored at the cloud vendor is placed in a bucket.
- JSON Key File: Click the arrow and browse to the location where the JSON key file is saved. Select the JSON file, and then click Select.
- Note: The JSON Key File field appears only when you select the cloud region for the Google Cloud Platform cloud service.
- Endpoint: Refers to the vendor service URL. This field gets auto-filled and grayed out for the Google Cloud Platform cloud service.
- Note: The Endpoint field appears only when you select the Other option as the cloud region for the Google Cloud Platform and Wasabi cloud services.
- Enable Deduplication: Specifies that deduplication is enabled for this data store. Arcserve Cloud Console supports both types of deduplication: source-side deduplication and global deduplication. Source-side deduplication prevents duplicate data blocks from moving across the network from a particular agent. Global deduplication eliminates duplicate data across all client machines at the volume cluster level. By default, the Enable Deduplication check box is selected.
- Deduplication Block Size: By default, the deduplication block size is selected as 64 KB.
- Hash Memory Allocation: Type the amount of physical memory that you want to allocate to keep hashes. This field is pre-filled with a default value. The default value is based on the following calculation:
- If the physical memory of the RPS is less than or equal to 4 GB, the default value of Hash Memory Allocation is equal to the physical memory of the RPS.
- If the physical memory of the RPS is greater than 4 GB, Arcserve Cloud Console calculates the currently available free memory. Assume that the available free memory is X GB. Arcserve Cloud Console then checks the following conditions:
- If (X * 80%) >= 4 GB, the default value of Hash Memory Allocation is (X * 80%).
- If (X * 80%) < 4 GB, the default value of Hash Memory Allocation is 4 GB.
- Example: Consider an RPS with 32 GB of physical memory. Assume that the operating system and other applications use 4 GB of memory while the data store is created, so the available free memory is 28 GB. The default value of Hash Memory Allocation is then 22.4 GB (22.4 GB = 28 GB * 80%).
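The default-value rule above can be sketched as a small calculation. This is an illustration only; the function name and the assumption that all values are expressed in GB are mine, not part of the product:

```python
def default_hash_memory_gb(physical_gb: float, free_gb: float) -> float:
    """Sketch of the default Hash Memory Allocation rule (values in GB)."""
    if physical_gb <= 4:
        # RPS with 4 GB or less: default equals the physical memory.
        return physical_gb
    candidate = free_gb * 0.8  # 80% of the currently available free memory
    return candidate if candidate >= 4 else 4.0

# Example from the text: a 32 GB RPS with 28 GB free memory.
print(round(default_hash_memory_gb(32, 28), 1))  # 22.4
```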
- Hash destination is on a Solid State Drive (SSD): Specifies that the hash folder is on a solid state drive.
- Note: Configure the hash destination on a local SSD if the Hash destination is on a Solid State Drive (SSD) check box is selected.
- Index Destination: Defines the index destination folder to store the index files. Select a different disk to improve deduplication performance.
- Note: The Index Destination path must be an empty folder.
- Hash Destination: Defines the path to store the hash database. Arcserve Cloud Console uses the SHA-1 algorithm to generate hashes for the source data, and the hash database manages the hash values. Selecting a high-speed Solid State Drive (SSD) increases the deduplication capacity and requires a lower memory allocation. For better hash performance, we recommend that you format the SSD volume as NTFS with a 4 KB volume cluster size.
- Note: The Hash Destination path must be an empty folder.
- Notes:
- You cannot specify the same path for the following folders: Data Store folder, Index Destination, and Hash Destination.
- Remote or network share paths are not supported for cloud data stores.
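The deduplication scheme described above, fixed-size blocks hashed to detect duplicates, can be sketched in a few lines. This is a generic illustration of the technique using SHA-1 and the default 64 KB block size; the function and data-structure names are assumptions, not the product's actual implementation:

```python
import hashlib

BLOCK_SIZE = 64 * 1024  # matches the default 64 KB deduplication block size

def dedup_blocks(data: bytes) -> tuple[dict, list]:
    """Split data into fixed-size blocks, store each unique block once,
    and keep an ordered index of SHA-1 hashes referencing stored blocks."""
    store = {}   # hash -> block (plays the role of the data store)
    index = []   # ordered hashes (plays the role of the hash database)
    for off in range(0, len(data), BLOCK_SIZE):
        block = data[off:off + BLOCK_SIZE]
        digest = hashlib.sha1(block).hexdigest()
        store.setdefault(digest, block)  # duplicate blocks are stored once
        index.append(digest)
    return store, index

# Two identical 64 KB blocks deduplicate to a single stored block:
store, index = dedup_blocks(b"x" * BLOCK_SIZE * 2)
print(len(index), len(store))  # 2 references, 1 unique block
```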
- Enable Compression: Specifies that the data compression settings are enabled.
- Compression Type: Specifies whether to use the Standard or Maximum as the compression type.
- Compression is often selected to decrease disk space usage, but it also has an inverse impact on your backup speed due to the increased CPU usage. Select the compression setting based on your requirements.
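The trade-off between disk usage and CPU cost can be seen with standard zlib compression levels (an analogy only: the product's Standard and Maximum compression types are not zlib levels):

```python
import time
import zlib

data = b"backup block " * 50_000  # ~650 KB of repetitive sample data

for label, level in [("lower compression", 1), ("higher compression", 9)]:
    start = time.perf_counter()
    out = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    # Higher levels typically yield smaller output at a higher CPU cost.
    print(f"{label}: {len(out)} bytes in {elapsed:.4f}s")
```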
- Enable Encryption: Specifies that encryption settings are enabled. When you select this option, you must specify and confirm the encryption password.
- Data encryption is the translation of data into a form that is unintelligible without a deciphering mechanism. The Arcserve UDP solution uses the secure AES (Advanced Encryption Standard) algorithm to achieve maximum security and privacy of your data. A data store can be created with or without encryption; when encryption is enabled, only AES-256 is available.
- A password is not required when you restore to the computer from which the backup was performed. However, when you restore to a different computer, the password set in the policy is required, not the password set in the data store.
- Note: After the data store is created, you cannot enable or disable deduplication, compression, or encryption.
- The cloud data store is added successfully.

