Solved

Which is best: object-level or block-level deduplication in Commvault?

Posted on 2015-02-24
2
309 Views
Last Modified: 2015-02-25
We are looking to implement a deduplication store in Commvault. We can set it up as object level or block level. Which is best?
0
Comment
Question by:andrew5499
2 Comments
 
LVL 20

Accepted Solution

by:
SelfGovern earned 500 total points
ID: 40630644
It depends on what your needs are.  "Which is better -- a sedan or a pickup truck?"

Block level deduplication will generally pick up more 'hits', and have a higher dedupe ratio (i.e., enable you to store more backups in a similar amount of space).  On the other hand, that chunking and hashing has processor overhead, so it may slow your backups (or other applications, if your backup server is not a dedicated system).
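Purely to illustrate the chunk-and-hash idea (this is not how Commvault actually implements it -- the fixed 128KB chunk size and SHA-256 here are just assumptions for the sketch), block dedupe boils down to something like:

    import hashlib

    CHUNK_SIZE = 128 * 1024  # assumed fixed chunk size, for illustration only

    def dedupe_block_level(path, chunk_store):
        """Split a file into chunks; store only chunks not seen before.
        Returns a 'recipe' of chunk hashes the file can be rebuilt from."""
        recipe = []
        with open(path, "rb") as f:
            while True:
                chunk = f.read(CHUNK_SIZE)
                if not chunk:
                    break
                digest = hashlib.sha256(chunk).hexdigest()
                if digest not in chunk_store:   # new data -> store the chunk
                    chunk_store[digest] = chunk
                recipe.append(digest)           # seen before -> just reference it
        return recipe

All that reading, hashing, and index lookup is where the CPU overhead comes from.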

Object Level looks at, well, objects -- a file, for instance.  It's often much easier to tell whether a file has changed than to do the chunking and hashing of block-level dedupe.  Simplistically: Has the file size changed?  Yes.  OK, we have to store this new file.   But think about all the times you change only minor things in a file -- you might change only the title slide in a 10MB presentation to put a new date or customer on it -- yet under object dedupe the whole file may need to be stored again.   Or you add one line to a spreadsheet.   Or you have several copies of a VMDK, each with only minor modifications: at the file/object level each copy is different and has to be stored in its entirety, whereas under block dedupe only one full copy is kept, plus the blocks that differ in the other copies.   But while it is less space-efficient, object dedupe is likely to be much easier on your CPU, if that's an issue.
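The object-level check, by contrast, can be as cheap as a whole-file fingerprint (again just a sketch -- real products typically check size/mtime first rather than hashing every file):

    import hashlib

    def dedupe_object_level(path, object_store):
        """Fingerprint the whole file; any change means the whole object is stored again."""
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if digest not in object_store:
            object_store[digest] = path   # whole file stored again, even for a one-byte edit
        return digest

One hash per file instead of thousands of hashes per file -- cheaper on CPU, but any edit forces the entire object to be stored again.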

For what it's worth, most of the dedupe appliances like HP's StoreOnce and EMC's Data Domain perform block-level dedupe, and with much better granularity (HP is 4KB average block; EMC is 8KB).  Because the dedupe is offloaded from the backup server, the appliances can achieve both great throughput and much better dedupe compaction.
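To get a feel for why that finer granularity is usually pushed onto an appliance, some back-of-the-envelope arithmetic (the 32-byte hash entry is an assumption; real index layouts differ):

    TB = 1024 ** 4
    HASH_BYTES = 32  # e.g. a SHA-256 per chunk; an assumption, appliances vary

    for avg_chunk in (4 * 1024, 8 * 1024, 128 * 1024):
        chunks = TB // avg_chunk
        index_gb = chunks * HASH_BYTES / 1024 ** 3
        print(f"{avg_chunk // 1024:>3} KB chunks: {chunks:,} chunks/TB, ~{index_gb:.1f} GB of hashes")

At 4KB chunks a single TB of backup data means roughly 268 million chunks and about 8GB of hash index to look up against, which is exactly the kind of work you want offloaded from the backup server.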
0
 

Author Closing Comment

by:andrew5499
ID: 40630724
We have a dedicated server, so it looks like block-level will win this duel.
0
