
Solved

Which is best in Commvault: object-level or block-level deduplication?

Posted on 2015-02-24
Medium Priority
399 Views
Last Modified: 2015-02-25
We are looking to implement a deduplication store in Commvault. We can set it up as object level or block level. Which is best?
Question by: andrew5499
2 Comments
 
LVL 21

Accepted Solution

by:
SelfGovern earned 2000 total points
ID: 40630644
It depends on what your needs are.  "Which is better -- a sedan or a pickup truck?"

Block level deduplication will generally pick up more 'hits', and have a higher dedupe ratio (i.e., enable you to store more backups in a similar amount of space).  On the other hand, that chunking and hashing has processor overhead, so it may slow your backups (or other applications, if your backup server is not a dedicated system).
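To make that chunk-and-hash step concrete, here is a minimal sketch in Python. It is purely illustrative -- the function and variable names are invented, it uses fixed-size blocks for simplicity, and an in-memory dict stands in for a real engine's on-disk deduplication database:

import hashlib

BLOCK_SIZE = 128 * 1024  # illustrative fixed block size; real engines tune this

def dedupe_file_blocks(path, store):
    """Chunk a file into fixed-size blocks, hash each, keep only unseen blocks.

    `store` maps SHA-256 digest -> block bytes (a stand-in for the dedupe
    database).  Returns the ordered list of digests (the "recipe") needed
    to rebuild the file.
    """
    recipe = []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            digest = hashlib.sha256(block).hexdigest()  # this hashing is the CPU overhead
            if digest not in store:
                store[digest] = block  # new block: pay the storage cost once
            recipe.append(digest)      # duplicate block: store only a reference
    return recipe

Every duplicate block costs a hash computation but no extra storage; that trade of CPU for capacity is exactly the overhead described above.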

Object-level dedupe looks at, well, objects -- a file, for instance.  It's often much easier to tell whether a file has changed than to do the chunking and hashing of block-level dedupe.  Simplistically: Has the file size changed?  Yes?  OK, we have to store this new file.  But think about all the times you change only minor things in a file -- you might change nothing but the title slide of a 10MB presentation to put in a new date or customer name -- yet under object dedupe, the whole file must be stored again.  Or you add a line to a spreadsheet.  Or you have several copies of a VMDK, each with only minor modifications: at the file-object level each copy is different and must be stored in its entirety, whereas under block dedupe only one copy of the whole file is kept, plus the blocks that differ in the other copies.  But while it is less space-efficient, object dedupe is likely to be much easier on your CPU, if that's an issue.
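As another hedged sketch (same invented in-memory `store`, not Commvault's actual mechanism), the object-level version hashes whole files and skips chunking entirely:

import hashlib

def dedupe_object(path, store):
    """Whole-file ("object") dedupe: one hash per file, no chunking.

    Much cheaper on CPU, but any change at all -- one new date on a title
    slide -- yields a new digest, so the entire file is stored again.
    """
    with open(path, "rb") as f:
        data = f.read()
    digest = hashlib.sha256(data).hexdigest()
    if digest not in store:
        store[digest] = data  # changed or new object: stored in full
    return digest

One hash per file instead of one per block is why this approach is so much lighter on the processor.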

For what it's worth, most of the dedicated dedupe appliances, like HP's StoreOnce and EMC's Data Domain, perform block-level dedupe with much finer granularity (HP uses a 4KB average block; EMC uses 8KB).  Because the dedupe is offloaded from the backup server, the appliances can achieve both high throughput and much better dedupe compaction.
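The reason those appliances can quote an average block size is variable-size, content-defined chunking. Below is a rough sketch of the idea, assuming a simple Rabin-Karp-style rolling hash (real products use stronger rolling hashes plus minimum and maximum chunk limits):

import hashlib

WINDOW = 48                  # rolling-hash window, in bytes
AVG_BITS = 13                # boundary when low 13 bits are zero -> ~8KB average chunks
MASK = (1 << AVG_BITS) - 1
BASE, MOD = 257, 1 << 32
BW = pow(BASE, WINDOW, MOD)  # BASE**WINDOW, to drop the byte leaving the window

def cdc_chunk_digests(data):
    """Cut chunks where the rolling hash over the last WINDOW bytes hits a
    fixed pattern, so boundaries follow the content, not the byte offset.
    Inserting one byte early in a file shifts only the chunks near the edit;
    everything after the next boundary hashes the same and dedupes away.
    """
    digests, start, h = [], 0, 0
    for i, byte in enumerate(data):
        h = (h * BASE + byte) % MOD
        if i >= WINDOW:
            h = (h - data[i - WINDOW] * BW) % MOD  # slide the window forward
        if i - start + 1 >= WINDOW and (h & MASK) == 0:
            digests.append(hashlib.sha256(data[start:i + 1]).hexdigest())
            start = i + 1
    if start < len(data):
        digests.append(hashlib.sha256(data[start:]).hexdigest())
    return digests

That boundary-follows-content property is what lets small edits in big files dedupe well even without any notion of a file object.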

Author Closing Comment

by: andrew5499
ID: 40630724
We have a dedicated server, so it looks like block level will win this duel.
