Data Deduplication Best Practices for Tivoli Storage Manager V6.2


If you use the data deduplication feature of Tivoli Storage Manager, I recommend reading the following article. It is very helpful!

Original Link: http://www.ibm.com/developerworks/wikis/display/tivolistoragemanager/Data+deduplication+best+practices+for+Tivoli+Storage+Manager+V6.2 

Introduction

Data deduplication technology, introduced in Tivoli Storage Manager V6.1 and enhanced with source-side (client) deduplication in Tivoli Storage Manager V6.2, provides the following two primary benefits:

  • Reduction in the storage capacity that is required for FILE-type storage pools on the TSM server (both server-side and client-side deduplication)
  • Reduced network traffic between the TSM client and server (client-side deduplication only)

The data deduplication processing can drive significantly more load on system resources, and can also have an adverse impact on Tivoli Storage Manager reliability if data deduplication is not implemented following several best practices. The upper limit on the size of objects stored in deduplicated storage pools is one primary consideration, but in general, a TSM server using deduplication must be allocated more resources. The purpose of this document is to explain the best practices and provide general guidance for the use of TSM data deduplication technologies.

Before continuing to read this document, ensure that you are familiar with the documents referenced throughout this article, in particular Data deduplication in Tivoli Storage Manager V6.2 and V6.1.

Is data deduplication right for you?

First decide whether you should use data deduplication, then consider the type to use.

Should you use data deduplication?

The most important factor in this decision is the final storage pool destination for data on your Tivoli Storage Manager server. Data remains deduplicated on the server only as long as it exists in a FILE-type storage pool that has been enabled for deduplication. Because of the significant processing that is required to identify duplicate data, as well as the impact to downstream data management tasks on the server, deduplication should probably be avoided in cases where data movement processes such as migration eventually move the data to a tape storage pool. Movement of the data to a tape storage pool or to any other storage pool without deduplication enabled forces reconstruction of the deduplicated objects. 

The following three use cases for Tivoli Storage Manager data deduplication are typical:

  • You have a FILE storage pool that is large enough to hold data for its remaining lifetime. This storage pool can either be a direct target for backups, or a secondary storage pool where selected data is moved.
  • You have a FILE storage pool that holds data for an extended period of time. The period of time is long enough to make the space savings of deduplication worth the cost of deduplication. The retention of data in the storage pool is maintained through the use of age-based migration, controlled with the MIGDELAY storage pool parameter (see the example after this list).
  • You have an active-data FILE storage pool.
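
For the second use case, a minimal sketch of a deduplicated FILE storage pool with age-based migration might look like the following. The device class FILEDEV, the pool names, and the parameter values are illustrative assumptions, not prescriptions:

    DEFINE STGPOOL DEDUPPOOL FILEDEV POOLTYPE=PRIMARY DEDUPLICATE=YES MAXSCRATCH=200 MIGDELAY=30 NEXTSTGPOOL=TAPEPOOL

With MIGDELAY=30, data is held in the deduplicated pool for at least 30 days before migration is allowed to move it to the next pool in the hierarchy.
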
Should you use client deduplication, server deduplication, or both?

Review the document Data deduplication in Tivoli Storage Manager V6.2 and V6.1 for details about advantages and disadvantages of the deduplication provided by the Tivoli Storage Manager server and client, or by a deduplication appliance.

Typically, a combination of client-side and server-side data deduplication is most appropriate. Here are some further points to consider:

  • Server-side deduplication is a two-step process of duplicate data identification followed by reclamation to remove the duplicate data.  Client-side deduplication stores the data directly in a deduplicated format, reducing the need for extra reclamation processing.
  • The deduplication processing in client-side deduplication can increase backup durations. If network bandwidth is not the constraint, expect longer backups; a 50% increase in backup duration is a reasonable estimate when using client-side deduplication in an environment that is not network-constrained.
  • Client-side deduplication can place a more significant load on the server because of the large number of clients that can simultaneously drive deduplication processing. Server-side deduplication, on the other hand, typically has a relatively small number of identification processes running in a controlled fashion. This topic is discussed further in later sections.
  • Large objects that are processed with TSM data deduplication introduce potential problems for the server: each object is managed in a single database transaction, metadata must be stored in the server database for a large number of file chunks, and locks on related objects in the server database are held long enough to risk deadlocks. From the perspective of TSM deduplication, a large object is any object greater than 100 GB.

Recommendations:

  1. Plan to use a combination of client-side and server-side data deduplication.
  2. Enable client-side data deduplication for clients that meet the following criteria (see the sketch after this list):
    • Increased backup durations can be tolerated.
    • The amount of data sent by the client is effectively reduced by deduplication.
    • The client does not typically send large files, or client configuration options can be used to break up large objects into smaller objects. This topic is further explained in a following section.
  3. Follow the best practices described in the remainder of this document.
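
As a sketch of recommendation 2, client-side deduplication is switched on both at the server (per node) and in the client options. The node name WINSRV01 is an illustrative assumption:

    UPDATE NODE WINSRV01 DEDUPLICATION=CLIENTORSERVER

In the client options file (dsm.opt on Windows, or the dsm.sys stanza on UNIX and Linux), add:

    DEDUPLICATION YES
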
Best practices when using Tivoli Storage Manager deduplication

The use of Tivoli Storage Manager data deduplication requires significantly more resources on the TSM server and clients.  Implementing the best practices given in the following sections is critical to help avoid problems including but not limited to:

  • Server outages caused by running out of active log space or archive log space.
  • Server outages or client backup failures caused by exceeding the DB2 internal lock list limit.
  • Server data management process failures and hangs.
Practice 1: Properly size the Tivoli Storage Manager server database, recovery log, and system memory

The use of data deduplication drives the consumption of more server database space as a result of storing the metadata related to duplicate chunks. Data deduplication also tends to drive longer-running transactions and a related larger peak in recovery log usage.  In addition, more system memory is required for optimal performance through caching during duplicate chunk look-up for both server-side and client-side deduplication.

The following technote gives guidance on how to size the server recovery log, including information specific to implementations using Tivoli Storage Manager deduplication.
Tivoli Storage Manager V6.1 or V6.2 server might crash if log is not sized appropriately

The minimum and recommended system requirements for servers that manage deduplicated storage pools are available in the detailed system requirements technotes. For links, see the technote Overview of system requirements and click the operating system for your server.

Recommendations:

  • Ensure that the Tivoli Storage Manager server has a minimum of 16 GB of system memory.
  • Allocate a file system with two to three times more capacity for the server database than you would normally allocate for a server not using deduplication.
  • Configure the server to have the maximum possible recovery log size of 128 GB by setting the ACTIVELOGSIZE server option to a value of 131072.
  • Use a directory for the database archive logs with an initial free capacity of at least 300 GB. The directory is specified using the server option ARCHLOGDIRECTORY (see the example after this list).
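
For example, the last two recommendations correspond to dsmserv.opt entries like the following; the archive log path is an illustrative assumption:

    ACTIVELOGSIZE 131072
    ARCHLOGDIRECTORY /tsm/archlog
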
Practice 2: Avoid overlapping server maintenance tasks with client backup windows

Separating client backups into an isolated time period during which server maintenance tasks are not running is known as creating a backup window. This practice is recommended regardless of whether deduplication is used. However, this practice is critical when using deduplication.

In addition, the server maintenance tasks should be performed in a sequence that avoids contention between the different types of processing. The server maintenance tasks are listed below, with the first three being the most likely to interfere with the success of client backups:

  • Migration
  • Reclamation
  • Duplicate identification
  • Storage pool backup
  • Expiration
  • Database backup

The following technote provides details on how to switch to scheduled data maintenance tasks to establish an isolated backup window, and perform the maintenance tasks in an optimal sequence.
Avoiding contention for server resources during client backup or archive

Recommendations:

  • Schedule client backups in a backup window isolated from all data maintenance processes. If this is not possible, isolate the backup window from migration, reclamation, and duplicate identification processing.
  • Schedule each type of data maintenance task with controlled start times and durations so that they do not overlap with each other. Preventing overlap is most critical for migration, reclamation, duplicate identification, and storage-pool backup processes.
  • Schedule storage-pool backup operations before duplicate identification processing to avoid the need to reconstruct objects being sent to a non-deduplicated copy storage pool. This only applies to data that is not stored with client-side deduplication.
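
The following administrative schedules sketch one possible sequence, with storage-pool backup running before duplicate identification. The schedule names, start times, and the pool names DEDUPPOOL and COPYPOOL are illustrative assumptions:

    DEFINE SCHEDULE STGBACKUP TYPE=ADMINISTRATIVE CMD="BACKUP STGPOOL DEDUPPOOL COPYPOOL" ACTIVE=YES STARTTIME=06:00 PERIOD=1 PERUNITS=DAYS
    DEFINE SCHEDULE IDENTIFY TYPE=ADMINISTRATIVE CMD="IDENTIFY DUPLICATES DEDUPPOOL DURATION=240" ACTIVE=YES STARTTIME=09:00 PERIOD=1 PERUNITS=DAYS
    DEFINE SCHEDULE RECLAIM TYPE=ADMINISTRATIVE CMD="RECLAIM STGPOOL DEDUPPOOL THRESHOLD=60 DURATION=240" ACTIVE=YES STARTTIME=13:00 PERIOD=1 PERUNITS=DAYS
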
Practice 3: Modify DB2 locklist management

Large-scale implementation of Tivoli Storage Manager data deduplication, particularly if the implementation handles large files or large numbers of moderately large files concurrently, can cause the automatically managed lock list storage of DB2 to be insufficient. When the lock list storage is insufficient, the result can be backup failures, data management process failures, or complete server outages.

Files greater than 500 GB processed through deduplication are most likely to cause problems with the lock list management. However, a large number of backups using client-side deduplication can also cause this problem, even with smaller files.

The following activities can drive up the DB2 lock list usage:

  • Client backups using client-side deduplication
  • Data movement within a deduplicated storage pool (reclamation and MOVE DATA commands)
  • Data movement out of a deduplicated storage pool (migration and MOVE DATA commands)

The following technote explains how to estimate your peak volume of in-flight deduplication transactions and the corresponding lock list requirements for handling this volume, and how to change the DB2 limit if necessary.
Managing the DB2 LOCKLIST Configuration Parameter with Tivoli Storage Manager

Recommendation: When you estimate the lock list storage requirements, follow the best practice described in the technote to allow for much larger than expected loads. Having more lock list storage than you think you need can prevent failures from unexpectedly high loads on the server.
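
As a rough sketch, the current limit can be inspected and raised from the DB2 instance owner's shell. TSMDB1 is the default database name for a V6 server; the LOCKLIST value, given in 4 KB pages, is an illustrative assumption that you should replace with the estimate derived from the technote:

    db2 connect to TSMDB1
    db2 get db cfg for TSMDB1 | grep -i LOCKLIST
    db2 update db cfg for TSMDB1 using LOCKLIST 400000 immediate
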

Practice 4: Limit the impact of large objects on deduplication processing

Use the controls that are available to limit the potential impact of large-object deduplication on the TSM server. Each of the following types of controls is discussed in detail:

  • Server controls that limit the size of objects processed by deduplication.
  • Controls on Tivoli Storage Manager data management processes that limit the number of processes that can operate concurrently on the server.
  • Scheduling options that control how many clients run scheduled backups simultaneously, which can be used to limit the number of clients performing client-side deduplication at the same time.
  • Client controls that allow larger objects to be processed as a collection of smaller objects. These controls are primarily related to Tivoli Storage Manager Data Protection products.
Server controls that limit the size of objects processed by deduplication

Three controls are available on a Tivoli Storage Manager server to prevent large objects from being processed by deduplication.

The first control is the storage pool MAXSIZE parameter, which can be used to prevent large objects from ever being stored in a deduplicated storage pool. Either leave this parameter at its default value of NOLIMIT, or set it to a value larger than the larger of the CLIENTDEDUPTXNLIMIT and SERVERDEDUPTXNLIMIT options. Using MAXSIZE with a deduplicated storage pool allows a choice between these two behaviors:

  • Allow objects larger than the deduplication limits to be stored in a deduplicated storage pool even though they will never be deduplicated.
  • Prevent objects that are too large to be eligible for deduplication from being stored in a deduplicated storage pool, and redirect them to the next storage pool in the storage pool hierarchy.

The second control is the server SERVERDEDUPTXNLIMIT option, which limits the total size of objects that can be deduplicated in a single transaction by duplicate identification processes. This option essentially limits the maximum file size that is processed through server-side deduplication. The default value for this option is 300 GB, and the maximum value is 750 GB. Because less simultaneous activity is typical with server-side deduplication, you can consider allowing a larger limit on object size for server-side deduplication than client-side deduplication.

The final control is the server CLIENTDEDUPTXNLIMIT option, which restricts the total size of all objects that can be deduplicated in a single client transaction.  This option also essentially limits the maximum object size that is processed through client-side deduplication. However, there are some methods to break up larger objects into smaller objects. The default value for this option is 300 GB, and the maximum value is 750 GB.
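
As an illustrative sketch of the three controls together, with all values chosen only as examples (MAXSIZE set slightly above the larger transaction limit, and OVERFLOWPOOL standing in for the next pool in the hierarchy):

    UPDATE STGPOOL DEDUPPOOL MAXSIZE=800G NEXTSTGPOOL=OVERFLOWPOOL
    SETOPT SERVERDEDUPTXNLIMIT 750
    SETOPT CLIENTDEDUPTXNLIMIT 300

Both options can also be set in dsmserv.opt; SETOPT is shown here on the assumption that you want to change them without restarting the server.
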

Recommendations:

  • Set the MAXSIZE parameter for deduplicated storage pools to a value slightly larger than the larger of CLIENTDEDUPTXNLIMIT and SERVERDEDUPTXNLIMIT.
  • If you increase either CLIENTDEDUPTXNLIMIT or SERVERDEDUPTXNLIMIT option values beyond the defaults, you might need to re-evaluate your sizing for the server recovery log and the DB2 locklist.
  • If you are planning to run many simultaneous client backups using client-side deduplication and you do not require the deduplication of larger objects, consider lowering the setting of the CLIENTDEDUPTXNLIMIT option to the minimum setting of 50 GB.
Controls on Tivoli Storage Manager data management processes

The following controls on data management processes can be used to limit how many large objects are simultaneously processed by the server.

  • Storage pool parameters on the DEFINE or UPDATE STGPOOL command:
    • The MIGPROCESS parameter controls the number of migration processes for a specific storage pool.
    • The RECLAIMPROCESS parameter controls the number of simultaneous processes used for reclamation.
  • IDENTIFY DUPLICATES command parameters: The NUMPROCESS parameter specified with the IDENTIFY DUPLICATES command controls how many duplicate identification processes can run at one time for a specific storage pool.

Recommendations:

  • You can safely run duplicate identification processes for more than one deduplicated storage pool at the same time. However, specify the NUMPROCESS parameter with the IDENTIFY DUPLICATES command to limit the total number of simultaneous duplicate identification processes to a number less than or equal to the number of processors available in the system.
  • Follow the recommendations given in Practice 2 for not overlapping different types of operations, such as reclamation and migration.
  • Do not have more than six reclamation or migration processes operating at one time against a deduplicated storage pool. Use the MIGPROCESS and RECLAIMPROCESS storage pool parameters to control this.
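
For example, to stay within these bounds for a single deduplicated pool (the pool name and process counts are illustrative):

    UPDATE STGPOOL DEDUPPOOL MIGPROCESS=4 RECLAIMPROCESS=4
    IDENTIFY DUPLICATES DEDUPPOOL NUMPROCESS=4 DURATION=240
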
Schedule controls for client backups

For scheduled backups, you can limit the number of client backup sessions that perform client-side deduplication at the same time. Consider some or all of the following approaches:

  • Clients can be grouped using different schedule definitions that run at different times during the backup window. Consider spreading clients that use client-side deduplication among these different groups.
  • Increase the duration of schedule startup windows and increase schedule start-time randomization to limit how many backups using client-side deduplication start at the same time.
  • Separate client backup destinations through server policy definitions, such that different groups of clients use different storage pool destinations.
    • Clients for which data is never to be deduplicated should not use a management class that has as its destination a storage pool with deduplication enabled.
    • Clients that use client-side deduplication can use storage pools where they are matched with other clients for which there is a higher likelihood of duplicate matches. For example, all clients running Microsoft Windows operating systems can be set up to use a common storage pool, but they do not necessarily benefit from sharing a storage pool with clients that perform backups of Oracle databases.
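
A sketch of the first two approaches, using illustrative domain, schedule, and node names:

    SET RANDOMIZE 25
    DEFINE SCHEDULE STANDARD NIGHT_GROUP1 ACTION=INCREMENTAL STARTTIME=20:00 DURATION=4 DURUNITS=HOURS
    DEFINE SCHEDULE STANDARD NIGHT_GROUP2 ACTION=INCREMENTAL STARTTIME=23:00 DURATION=4 DURUNITS=HOURS
    DEFINE ASSOCIATION STANDARD NIGHT_GROUP1 WINSRV01,WINSRV02

SET RANDOMIZE spreads start times across a percentage of the startup window for clients that use the polling schedule mode.
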
Client controls to limit large object deduplication

A primary source of large objects being processed through client-side deduplication is backups by the Tivoli Storage Manager for Data Protection family of products. Many of these products process objects with sizes in the range of several hundred gigabytes to a terabyte, which exceeds the current maximum allowed object size for deduplication. The following approaches can be used to have the clients break these objects into multiple smaller objects that are within the object size limits for deduplication.

  • Method 1: Use Tivoli Storage Manager client features that back up application data using multiple streams. For example, a 1 TB database is not eligible for deduplication as a whole, but when backed up with four parallel streams, the resulting four 250 GB objects are eligible for deduplication. For Tivoli Storage Manager Data Protection for SQL, you can use the STRIPES option to break the backup into multiple streams.
  • Method 2: Use application controls that influence the maximum object size that is passed through to Tivoli Storage Manager. Tivoli Storage Manager Data Protection for Oracle has several RMAN configuration parameters that can cause larger databases to be broken into smaller parts. These include using multiple channels, specifying the MAXPIECESIZE option, or both.
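
Hedged sketches of the two methods follow; the database names, stripe and channel counts, and the tdpo.opt path are illustrative assumptions. For Data Protection for SQL (method 1), a striped backup from the command-line interface:

    tdpsqlc backup PAYROLLDB full /stripes=4

For Data Protection for Oracle (method 2), RMAN can be configured with multiple channels and a maximum backup piece size:

    CONFIGURE DEVICE TYPE 'SBT_TAPE' PARALLELISM 4;
    CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'ENV=(TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt)' MAXPIECESIZE 100G;
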

Important: In some cases large objects cannot be reduced in size, and therefore cannot be processed by TSM deduplication:

  • Large files in a file system. The backup-archive clients always send large files in a single transaction, which cannot be broken apart into smaller transactions.
  • Image backups of a large file system are sent within a single transaction and cannot be broken into smaller components.
