Storage Software





The term "backup" covers the methods and processes involved in copying computer data (system data as well as application data) to media other than the ones where the data originally live (disk, tape, optical, cloud). "Restore", in turn, covers the methods and processes involved in data recovery, i.e., bringing copied computer data back to their original location. Backup/restore primarily serves as protection against data loss, be it due to disaster, corruption or sabotage. It can also be used for recovering data from an earlier point in time, and even for cloning machines or applications. There is a wide variety of backup/restore software available, from expensive commercial products to free or open-source tools.


Dear Experts,
We have recently implemented a Veeam Backup & Replication solution and would like to set best-practice backup and retention policies for our VMs and files. What is the best way to rotate daily backups, and how many restore points should we retain and rotate for weekly and monthly backups? Please suggest a best-practice configuration: an initial full backup followed by daily incrementals, plus weekly and monthly retention settings.
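For what it's worth, the arithmetic behind a common grandfather-father-son (GFS) rotation, which Veeam's retention and GFS settings implement, can be sketched as follows. The tier sizes here (14 daily, 4 weekly, 12 monthly restore points) are hypothetical examples, not Veeam defaults:

```python
from datetime import date, timedelta

def gfs_keep(backup_dates, daily=14, weekly=4, monthly=12):
    """Return the backup dates a simple GFS policy retains: the newest
    `daily` backups, the newest `weekly` Sunday backups, and the newest
    `monthly` first-backup-of-each-month."""
    ordered = sorted(backup_dates)
    keep = set(ordered[-daily:])                      # daily tier
    sundays = [d for d in ordered if d.weekday() == 6]
    keep.update(sundays[-weekly:])                    # weekly tier
    month_firsts = {}
    for d in ordered:                                 # first backup seen per month
        month_firsts.setdefault((d.year, d.month), d)
    keep.update(sorted(month_firsts.values())[-monthly:])  # monthly tier
    return keep

# A year of successful daily backups:
dates = [date(2024, 1, 1) + timedelta(days=i) for i in range(365)]
kept = gfs_keep(dates)
```

With an initial full plus daily incrementals, the practical upshot is the same idea: cap the daily incremental chain at roughly two weeks and promote periodic fulls as weekly/monthly GFS restore points.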
Is Google Drive backup a safe backup solution?

Some time ago I tried to clone the main SSD, and the only partition on it where I'd installed Windows 10, to another disk/partition using this tool:

So I chose as source the disk and the only partition that was there (it is a solid-state disk), and as target the only existing partition on the destination disk, and I started the process. After some time Windows 10 started working badly: I saw that the icons on the desktop went away and nothing worked anymore, so, taken by fear, I stopped the operation. I really shouldn't have done that. All the information stored on every disk and partition (USB and SATA) attached to my PC was gone. I still don't understand why it deleted the partition tables of every disk NOT involved in the operation. I lost everything on every disk. Maybe that tool has some kind of serious bug; I don't know. The fact is that I found another case like mine.

A few days later I tried to recover the lost data with this tool:

and I have seen that all the files deleted by that cloning tool are still there, saved on the disks, with the same directory structure. Now I think that Miray HDClone Basic Edition removed the first track of the disks and their partition tables. I would like to know if …
We have a Datto Alto (model Datto-1000) backup appliance that was just replaced with a newer model because its hard drive was failing (SMART errors). I have searched online and can't find any information on replacing the hard drive with a new one.

I would like to simply replace the internal hard drive and then use the system to perform local backups for other servers. Has anyone seen this done, or is my old Datto Alto basically a paperweight now?

I am trying to set up a file server cluster using 2 VMs hosted on a Server 2019 Storage Spaces Direct (S2D) cluster. Both compute and storage take place on the S2D cluster.

I have created 2 VMs, FSC1 and FSC2; I have then created the cluster, called FSC, and created a file server called FSS.

On both VMs I have attached a shared VHDS, also hosted on the S2D cluster.

I have tested speeds on the VMs' C:\ drives using diskspd.exe, and I can get over 3 GB/s read and 1 GB/s write running at 75/25 r/w with 64 KB blocks. No problems; it works great. I have then tested copying large files from another location to the C: drive, and it gets around 700 MB/s. Again, no issues with this.

I then go to the drive hosted on the VHDS, on the server that is hosting the storage, and try to run diskspd.exe again. I get the message:

"WARNING: Could not set valid file size (error code: 87); trying a slower method of filling the file (this does not affect performance, just makes the test preparation longer)"

After a while it comes back reporting no reads and no writes. If I try to copy a file to this drive, I get 10 MB/s copying via the share path and 30 MB/s copying via the local path.

Is there anything that could cause such poor performance? Maybe something I have missed? I created the VHDS on FSC1 and then attached it to FSC2.

Thanks in advance for any help, and just let me know if you need any more information.
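For comparison, a DiskSpd invocation along these lines matches the workload described (75/25 random r/w, 64 KB blocks). The drive letter, file path, and sizes are examples only; point it at the shared-VHDS volume:

```shell
# 60-second 75/25 read/write test, 64 KB blocks, 4 threads, queue depth 32,
# with OS and hardware caching disabled so the array is what gets measured.
diskspd.exe -b64K -d60 -t4 -o32 -r -w25 -Sh -L -c10G F:\diskspd-test.dat

# -b64K  block size       -d60  duration (seconds)   -t4   worker threads
# -o32   outstanding I/Os -r    random I/O           -w25  25% writes
# -Sh    disable caching  -L    capture latency      -c10G create 10 GiB test file
```

Running the same command first on the C: volume and then on the VHDS-backed volume isolates whether the shared VHDS path itself is the bottleneck.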
It's been a long time since I did this, but I need two drives to have matching content. I used to use Second Copy. Is that still good?

What I have is a media drive with about 8 TB of stuff on it and another drive that needs that same content. But that second drive already has "some" of the correct content, so I only want to copy what's missing and not re-copy stuff already there.

Sounds like a robocopy job, but man, the syntax on that is rough.

Anyway, help an IT brother out and let me know what you think would work best.
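A robocopy one-liner does cover this case; by default it already skips files that exist on the target with the same size and timestamp. Drive letters, paths, and the log location below are examples, not the poster's actual layout:

```shell
# One-way top-up: copy anything missing (or newer) from the media drive
# to the second drive, leaving identical files untouched.
robocopy D:\Media E:\Media /E /XO /R:1 /W:1 /MT:16 /LOG+:C:\Temp\sync.log /TEE

# /E       copy subfolders, including empty ones
# /XO      exclude older source files (never clobber a newer target copy)
# /R:1 /W:1  one retry with a 1-second wait (the default is 1,000,000 retries)
# /MT:16   16 copy threads -- a big speed-up for lots of small files
# /LOG+:   append to a log file; /TEE echoes progress to the console too
```

Add /MIR instead of /E only if the second drive should also delete files the source no longer has; for a pure "fill in what's missing" job, /E is the safer choice.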

We currently use Veeam and Veeam copy jobs to an ExaGrid deduplicating appliance. ExaGrid automatically syncs our data center backups to our remote office. To supplement that, and to protect against a ransomware attack, we want cloud air-gapped backups. We presently use Veeam copy jobs to iland for that purpose, but it's not going well.

We are about to demo CommVault but I figured before doing so, maybe I should take a step back and ask for thoughts on:

CommVault vs. Zerto

The ONLY thing I like about Veeam copy jobs to a cloud provider is the provider's "insider protection" which basically is a cloud based "recycle bin" which even a malicious admin can't touch.

We don't enjoy the overly complex nature of hundreds of Veeam backup jobs and copy jobs. It's a nightmare to monitor and maintain.

CommVault sounds much simpler.

I don't know anything about Zerto other than that it offers granular restores to the minute, which could be super handy during a ransomware attack (assuming they don't successfully attack our backups).

I suppose the air-gapped-ness depends on the destination provider for both CommVault and Zerto.

Do these solutions typically rely on things like Amazon's Object Lock / Compliance Mode / WORM (write-once-read-many)?

A big requirement is MFA in order to delete backup containers; my nightmare scenario is my laptop getting hijacked and/or my admin credentials getting compromised, and the ransomware attacker hitting my cloud backups too!
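On the Object Lock question: S3 Object Lock in compliance mode is exactly the "even a malicious admin can't touch it" property described, and several backup products can write to such buckets. A sketch with the AWS CLI (the bucket name and 30-day window are hypothetical):

```shell
# Object Lock can only be turned on when the bucket is created.
aws s3api create-bucket --bucket my-airgap-backups --object-lock-enabled-for-bucket

# Default retention: every new object version is WORM-locked for 30 days.
# COMPLIANCE mode means nobody -- not even the root account -- can shorten
# or remove the lock before it expires; GOVERNANCE mode allows privileged bypass.
aws s3api put-object-lock-configuration \
  --bucket my-airgap-backups \
  --object-lock-configuration '{"ObjectLockEnabled": "Enabled", "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}}}'
```

Pairing this with MFA Delete (or simply denying s3:DeleteBucket in IAM) addresses the compromised-credentials scenario: stolen admin credentials can still write new backups but cannot destroy locked ones.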

Hi there,
I just noticed on an SBS 2011 server that the backups were failing with the details "Volume shadow copy operation failed with error 0x80042306".

The eventlog shows volsnap Event ID 25
"The shadow copies of volume C: were deleted because the shadow copy storage could not grow in time.  Consider reducing the IO load on the system or choose a shadow copy storage volume that is not being shadow copied."

This started 2 months ago, and the last completed backup was on 12 June.

The server is set up for a daily full backup at 20:00; the office closes at 18:00.

All help is much appreciated.

I have a Synology DS918+ with a backup to a local disk and to another Synology DiskStation (Hyper Backup). However, I noticed the backup to the other DiskStation hasn't happened for quite some time.
How can I make sure the backup is done online, in the easiest way and for the least price, or even free? (I'm backing up at most about 4 TB, of which little changes.)
Do I use Azure, Glacier, OneDrive, or something else?

Note: is there a way to detect ransomware (e.g., CryptoLocker) in time?
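On the ransomware question: one low-tech early-warning trick is a "canary" file that nothing legitimate ever writes to; if its hash changes, an encryptor has been at work and backups/shares can be frozen before the versions are overwritten. A minimal sketch (file names and paths are made up):

```python
import hashlib
import os
import tempfile

def fingerprint(path):
    """SHA-256 of the file's contents; any change to a canary is a red flag."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Plant a decoy document that no legitimate process ever modifies.
# (Early-alphabetical names tend to be hit first by encryptors.)
canary = os.path.join(tempfile.mkdtemp(), "0000_accounts.docx")
with open(canary, "wb") as f:
    f.write(b"decoy contents - never modified legitimately")
baseline = fingerprint(canary)

# A scheduled task would re-check periodically; here we simulate an attack:
with open(canary, "wb") as f:
    f.write(b"ciphertext garbage")

tripped = fingerprint(canary) != baseline   # True -> alert, pause backup jobs
```

A scheduled check like this on the NAS or a client, wired to an email alert, buys time; keeping versioned backups on a destination the clients cannot write to directly is still the real protection.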

I have an SBS 2011 server and have discovered the SYSVOL directory is corrupted (by cryptoware a while ago).
There are no backups to restore these files from, so I am considering rebuilding SYSVOL and its content.
I am considering some steps shown here:
to rebuild this, but I wasn't sure if this was right and what the repercussions might be.
I have 8 workstations on the SBS network.
Exchange used to be hosted on the server, but we have recently moved to O365.
Small MSP with <250 seats, looking to replace our present offering with something we can resell and white-label. Small to mid-sized companies: 2-5 computers is the norm, with a few at 20+. Bare-metal and data backup options needed. Client backups range from 100 GB to 5 TB in size.
Can you give some feedback on a concern we have regarding backups/DR? I am not from an infrastructure background, but from a risk perspective: our company has 2 data centres, one classed as the main office and another several miles away at a secondary site. They use some form of VMware/vSphere technology for HA purposes, so if they lose the primary infrastructure at the main office data centre, there should be minimal downtime/impact on availability, as the copy at the DR site should 'save the day' and take over the running. At quite what level this is done I am unsure (e.g., VM/host/storage), but I will investigate further.

However, we also discovered that the admins write the daily data backups of the VMs to some form of NAS storage device in the same core data centre (physically in the same room). So a major disaster would wipe out one of the 2 data centres and all of the backups? This sounds like a dreadful design to me. The single location for backups in itself seems awful: even though the VMware technology may keep the servers/apps up and running if the primary site burnt down, until those are backed up again there is a huge data-loss risk?

What is your view? Or in such a setup, is there likely to be something compensating that we have completely overlooked, having got lost in the technical information, and should we dig a bit further? I am getting a little confused between the HA and backup perspectives on such a setup, and what saves you from which occurrence. And what the overall…
Can anyone give a beginner's guide to the various components of SharePoint Server that need to be included in backup schedules?
Dear Acronis experts,

Have you seen an activity log like the one below?

The problem is that the backup takes longer than normal.

How can I fix it?

Acronis Backup Activity Log
We are trying to install a newer version of Acronis backup software on a client's Windows 10 computer. The owner and/or permissions on a registry key have changed to the point that the install is failing because the software cannot write to the registry key. We have tried to take ownership of the registry key but receive messages about invalid permissions.

We have tried using Regedit and PowerShell but have not been able to gain control of the registry key. We found information about the SubInACL tool, which appears able to correct this issue, but we also read that this tool does not work with Windows 10.

We are not sure how the registry key got changed, but we are seeking suggestions on how to regain access to the registry key so the software can be installed. Any suggestions will be appreciated.
My daily DB2 backups have gone from a couple of hours to 22 hours.  Here are the stats from my last backup.
Parallelism       = 5
Number of buffers = 10
Buffer size       = 3149824 (769 4kB pages)
                                                                               Compr     Retry       %   
BM#    Total      I/O      Compr     MsgQ      WaitQ      Buffers   MBytes    MBytes    MBytes     Retry 
---  --------  --------  --------  --------  --------    --------  --------  --------  --------  --------
000  82411.97   1657.43    296.71      0.00  78085.25        1899    174805     10502         5       0.1
001  82404.48   2133.06  13840.11      0.10      0.18      107275    380546    380340       347       0.1
002  82404.48    720.83   2260.55      0.00  72087.44        3334     57064     56920         5       0.0
003  82404.37    805.24   2539.45      0.00  69463.70        2247     77614     77537         0       0.0
004  82404.30   1864.22    285.66      0.00  77991.13        1657    178222      9018         4       0.1
---  --------  --------  --------  --------  --------    --------  --------  --------  --------  --------
TOT  412029.62   7180.80  19222.50      0.11  297627.73      116412    868252    534319       364       0.1

MC#    Total      I/O                MsgQ      WaitQ      Buffers   MBytes 
---  --------  --------            --------  --------    --------  --------
000  82411.81    294.56            82115.54      0.00      116413    349687


I'm in the middle of a project that requires me to collect the 'last accessed' timestamps of files/folders on our file server, to assist in archiving stale files and folders. However, when using a program called TreeSize Professional to generate this report, I'm seeing that most if not all files are being accessed daily. I find this misleading and believe our backup software (Windows Server Backup and CloudBerry Backup for Windows) is modifying the properties every night during scheduled backups.

Does anyone know how I can stop our backup software from modifying the Last Accessed properties?
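One workaround is to rank staleness by last-modified time (st_mtime), which read-only backup passes do not rewrite, instead of last-accessed. A small sketch (directory and file names are invented for the demo):

```python
import os
import tempfile
import time
from pathlib import Path

def stale_files(root, max_age_days=365, now=None):
    """Files whose last-modified time (st_mtime) is older than the cutoff.
    Unlike last-accessed time, mtime survives nightly read-only backup jobs."""
    now = time.time() if now is None else now
    cutoff = now - max_age_days * 86400
    return [p for p in Path(root).rglob("*")
            if p.is_file() and p.stat().st_mtime < cutoff]

# Demo on a throwaway directory:
root = tempfile.mkdtemp()
old = Path(root, "report_2019.xlsx")
old.write_bytes(b"x")
os.utime(old, (time.time(), time.time() - 2 * 365 * 86400))  # age it 2 years
fresh = Path(root, "notes.txt")
fresh.write_bytes(b"y")
found = stale_files(root)
```

Separately, NTFS can stop maintaining last-access timestamps altogether with `fsutil behavior set disablelastaccess 1` (a reboot is required), which removes the nightly churn at the cost of losing access times for everything.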

We are replacing our virtual infrastructure and have procured a Dell EMC ME4024 array and a PowerEdge R640 server.
10x 1.8TB HDD 10K 512e SAS12 2.5
6x 1.92TB SSD SAS Read Intensive

We have hired an implementation engineer to setup this Storage.

Is it possible to set tiers at the LUN level?

If this is possible, would it be a good idea to set up the first LUN as an SSD tier, so that I can run a SQL Server application on it (a high-performance application)?
My SQL Server VM is 300 GB in size.

And the second LUN as a tier with spinning disks, for non-performance applications.

Also, what RAID levels do I need to set up for the first and second LUNs?

Please suggest the best way of setting this up.

Any help and suggestions would be great!
Trouble creating a mirror image on an SSD...

I'm trying to create a mirror image of an existing 1 TB hard drive onto a new SSD (also 1 TB). I have done this before with Macrium and it went well. I have the SSD and HDD connected on SATA ports; however, this time around I've got stuck.

The machine is Win7 (64-bit), and the existing drive is partitioned into C: and D: drives of equal size. So far so good...

When I first ran Macrium (free version), I noticed that the D: drive was 75% used whereas C: had much less data on it, so I wanted to adjust the partitions somehow (this might be where I've gone wrong). Initially I used Macrium to shrink the C: partition by 100 GB, but this seemed to leave an unused area of 100 MB which I couldn't add to the D: partition (or at least I couldn't find a way to do this). I started Macrium's mirror function in the hope it might spot the spare space and let me add it to the D: partition, but this didn't seem to work, so I went back and increased the partition by all the extra free space I could select using drive Properties/Tools. Seemingly back to where I started (maybe!).

I then ran the mirror and it started copying the partitions okay; it got through the first 2 and started on the 3rd partition (the C: drive). Part way through (always at 3%, perhaps after it has copied the FAT), it pops up a "Clone failed error 9 read failed.." message. Interestingly this is a read rather than a write error, so would this come from the old HDD or the SSD?

The error always…
We have a production site and a DR site.

We have Veeam installed on the production site. Among others, Veeam replicates the backups and the virtual machines to the DR site.

As part of the replication jobs, I would like to know if we can replicate the Veeam backup server itself to the DR site, and whether there are any best practices that must be followed for such a scenario.
App to Backup network drive from PC

Hi there. A small shop has some data on a small NAS; they pretty much use the NAS so that other people in the office can access the info. I'd like to back up that data, and I'm wondering if any desktop cloud-backup application can also back up a mapped drive. I know Code42 used to allow that, but they phased out the endpoint application. Does anyone know of another app that does it?

We had a hypervisor suffer a catastrophic failure over the weekend. Two drives in the same RAID-5 array failed within minutes of each other, so all of the VHDs are gone and we have to go to backups. The last backup happened a few hours before the failure. Who knew the restore was going to be the nightmare? I am using Backup Exec 15 and am having the most difficulty with the database restore. This backup is not a file backup; the file backup does not contain the database MDFs and LDFs. I have to restore it by choosing the "Database Restore" option, not the file restore option, in BE. When I restore the master database, the SQL service goes down and will not start because the other files do not exist. But BE refuses to restore the other files that master needs to start unless master has been restored first. I must be doing something wrong. Any idea what it is? (Besides using BE in the first place?)
Hello and Good Afternoon Everyone,

           I have an Acer Google Chromebook which does not have an internal CD or DVD-ROM drive. Instead, it has a USB port which can be used for installing programs or backing up end-user data like documents, music, pics, etc. At any rate, I have some games on CDs and DVDs which I want to use on my Acer Google Chromebook. Seeing that it does have a USB port, I am wondering if there is a way of digitally cloning my game CDs to USB flash drives for installation and use on the Acer Google Chromebook.

            Thank you

Windows Server Backup on Server 2008 R2.

If I restore the entire directory tree to another drive, will the permissions be maintained?
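Windows Server Backup restores NTFS security descriptors along with the files when the target volume is NTFS, so the permissions should survive. As a belt-and-braces measure, the ACL tree can also be exported and re-applied with icacls. Paths below are examples:

```shell
# Export the full ACL tree (recursive, continue on errors) to a text file.
icacls D:\Data /save D:\Data-acls.txt /T /C

# Re-apply the saved ACLs. The saved entries are relative to the directory
# given at /save time, so /restore is pointed at its parent.
icacls D:\ /restore D:\Data-acls.txt
```

Running the /save step before the restore gives a known-good snapshot to fall back on if anything about the restored tree's permissions looks off.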
I'm looking to find out how Amazon RDS SQL databases can be backed up at the DB level.

RDS backs up the entire INSTANCE; recovery means restoring the instance to a NEW instance, backing up the database there, and restoring that database to the corrupt instance. The expectation here is that either the entire instance is corrupt, or only one application or one database resides in the instance. Real-world tests have put us at about 3 to 4 hours to recover some of the larger databases we have.

Traditional database-level backups would let us recover from the same situation in only 1 hour.

Is there a tool/method or best practice to backup the databases individually? Do we need a local backup tool that can reach into RDS?  Any suggestions?
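Amazon RDS for SQL Server does expose database-level native backup/restore to S3 through stored procedures, once the SQLSERVER_BACKUP_RESTORE option (with an S3-capable IAM role) is added to the instance's option group. The endpoint, credentials, database, and bucket names below are placeholders:

```shell
# Per-database native backup straight to S3 (runs inside the RDS instance).
sqlcmd -S mydb.xxxxxxxx.us-east-1.rds.amazonaws.com -U admin -P '<password>' -Q "
  exec msdb.dbo.rds_backup_database
    @source_db_name           = 'SalesDB',
    @s3_arn_to_backup_to      = 'arn:aws:s3:::my-backup-bucket/SalesDB.bak',
    @overwrite_s3_backup_file = 1;"

# Restore just that database, to this or any other instance, without
# touching anything else on the instance:
sqlcmd -S mydb.xxxxxxxx.us-east-1.rds.amazonaws.com -U admin -P '<password>' -Q "
  exec msdb.dbo.rds_restore_database
    @restore_db_name        = 'SalesDB_restored',
    @s3_arn_to_restore_from = 'arn:aws:s3:::my-backup-bucket/SalesDB.bak';"
```

Both calls run asynchronously; progress can be polled with `msdb.dbo.rds_task_status`. This avoids the restore-the-whole-instance round trip for single-database recovery.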
