Solved

Which is better: object-level or block-level deduplication in Commvault?

Posted on 2015-02-24
Last Modified: 2015-02-25
We are looking to implement a deduplication store in Commvault. We can set it up as object-level or block-level. Which is best?
Question by:andrew5499
2 Comments
 
LVL 21

Accepted Solution

by:
SelfGovern earned 500 total points
ID: 40630644
It depends on what your needs are.  "Which is better -- a sedan or a pickup truck?"

Block level deduplication will generally pick up more 'hits', and have a higher dedupe ratio (i.e., enable you to store more backups in a similar amount of space).  On the other hand, that chunking and hashing has processor overhead, so it may slow your backups (or other applications, if your backup server is not a dedicated system).
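
To make the chunk-and-hash idea concrete, here is a minimal sketch (not Commvault's actual implementation; the 128KB block size, SHA-256 hash, and in-memory store are assumptions for illustration). Each block is hashed, and a block is written to the store only if its hash has not been seen before:

    import hashlib

    BLOCK_SIZE = 128 * 1024  # illustrative fixed block size, not a Commvault default

    def dedupe_blocks(path, store):
        """Split a file into fixed-size blocks, hash each one, and keep only new blocks.
        Returns the 'recipe' of block hashes needed to rebuild the file."""
        recipe = []
        with open(path, 'rb') as f:
            while True:
                block = f.read(BLOCK_SIZE)
                if not block:
                    break
                digest = hashlib.sha256(block).hexdigest()  # this per-block hashing is the CPU overhead mentioned above
                if digest not in store:                     # block already seen? then nothing new is stored
                    store[digest] = block
                recipe.append(digest)
        return recipe

A second backup of a file in which only one block changed adds just that one new block plus an updated recipe.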

Object level looks at, well, objects -- a file, for instance.  It's often much easier to tell whether a file has changed than to do the chunking and hashing of block-level dedupe.  Simplistically: Has the file size changed?  Yes.  OK, we have to store this new file.   But think about all the times you change only minor things in a file -- you might change only the title slide in a 10MB presentation to put in a new date or customer name -- yet under object dedupe, the whole file may need to be stored again.   Or you add a line to a spreadsheet.   Or you have several copies of a VMDK, each with only minor modifications; at a file-object level each one is different and must be stored in its entirety, whereas under block dedupe only one copy of the whole file, plus the differences from it in the other files, is saved.   But while less space-efficient, object dedupe is likely to be much easier on your CPU, if that's an issue.
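
For contrast, here is an equally simplified object-level sketch (again an assumption-laden illustration, not Commvault's code): the whole file is hashed once, and if any byte differs, the entire object is stored again:

    import hashlib

    def dedupe_objects(path, store):
        """Hash the whole file; any change, however small, produces a new object."""
        data = open(path, 'rb').read()
        digest = hashlib.sha256(data).hexdigest()   # one hash per file -- far less CPU than per-block hashing
        if digest not in store:
            store[digest] = data                    # the whole 10MB presentation is stored again for one edited slide
        return digest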

For what it's worth, most dedupe appliances, like HP's StoreOnce and EMC's Data Domain, perform block-level dedupe, and with much finer granularity (HP uses a 4KB average block; EMC uses 8KB).  Because the dedupe is offloaded from the backup server, the appliances can achieve both great throughput and much better dedupe compaction.
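
As a rough illustration of why granularity matters (numbers assumed for the example): if a single 4KB region changes inside a 1MB file, a 4KB-chunk system stores roughly 4KB of new data, a coarser 128KB-chunk system stores about 128KB, and whole-object dedupe stores the full 1MB again.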
 

Author Closing Comment

by:andrew5499
ID: 40630724
We have a dedicated server, so it looks like block-level will win this duel.
