RAID stands for Redundant Array of Independent Disks.

RAID is a data storage technology that allows multiple drives to be used together as a single virtual drive for reasons such as fault tolerance, reliability, and performance.

There are several different levels of RAID that determine how the data is stored and the level of redundancy achieved across the drives.
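The main trade-off between levels is usable capacity versus redundancy. A minimal sketch of that arithmetic (a hypothetical helper, not any vendor's tool) for the common levels:

```python
# Hypothetical helper: usable capacity for common RAID levels,
# given n identical drives of size_tb each.
def usable_tb(level, n, size_tb):
    if level == 0:                  # striping: all capacity, no redundancy
        return n * size_tb
    if level == 1:                  # mirroring: capacity of a single drive
        return size_tb
    if level == 5:                  # single parity: one drive's worth lost
        return (n - 1) * size_tb
    if level == 6:                  # double parity: two drives' worth lost
        return (n - 2) * size_tb
    if level == 10:                 # striped mirrors: half the raw capacity
        return n * size_tb / 2
    raise ValueError(f"unsupported RAID level: {level}")

print(usable_tb(5, 3, 4))   # 3x 4 TB in RAID 5 -> 8 TB usable
print(usable_tb(1, 2, 2))   # 2x 2 TB in RAID 1 -> 2 TB usable
```

Hot spares (which several questions below mention) sit outside this arithmetic: a spare contributes no usable capacity until it replaces a failed member.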


I am going to buy an HPE PROLIANT ML110 GEN10 SOLUTION - TOWER - XEON SILVER 4110 2.1 GHZ - 16 GB server (Part# P03687-S01) without a separate RAID controller (like the HP 830824-B21 Smart Array P408I-P S) to cut down on cost.

Last time, I purchased the HPE MIDLINE - HARD DRIVE - 1 TB - SATA 6GB/S (Part# 655710-B21) for this server.
However, this time I wanted a 2 TB drive of the same kind, but ran into the "HP 2TB 7200 rpm SAS-3 3.5" Internal SC Midline Hard Drive" (MFR # 818365-B21). It is SAS, even though it is only 7200 RPM.

Is there an expert out there who has used this hard drive? If yes:
(1) Is it compatible with the HPE PROLIANT ML110 GEN10 - TOWER - XEON SILVER 4110 2.1 GHZ - 16 GB?
(2) From a performance standpoint, is it worth paying more for this drive over a typical HPE 2TB SATA drive like the "HPE Midline - Hard drive - 2 TB - hot-swap - 2.5" SFF - SATA 6Gb/s - 7200 rpm (Part# 765455-B21)"?

I am going to get three of these and set up RAID 1 with one hot spare, and will run Windows Server 2016 with two virtual servers (first VM: a W2016 Domain Controller; second VM: a W2016 Terminal Server running QuickBooks and other tax software for two concurrent users).


Hi Experts,

I'm trying to build a RAID 5 array for our file server using the following:
1.  Dell PowerEdge R530 - server
2.  Windows Server 2016 - OS
3.  3x 4TB SAS - hard drives
4.  PERC H330 Mini - the Dell server's embedded RAID controller

But in my inquiries on different forums, I keep getting these ideas:
1.  There's a risk in RAID 5 with large disks: rebuilding the array takes a long time, and another disk is more likely to fail during the rebuild.
2.  A RAID controller without cache is unwise to use for a parity RAID.

Since the PERC H330 is an entry-level RAID card that has no cache, is it wiser to use software RAID in this situation, or to stick with the built-in RAID controller?
Should I go ahead and build the RAID 5 array with 4TB disks?
What would be the best option here?
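The rebuild-risk worry in point 1 can be put in rough numbers. A rebuild must read every surviving drive end to end, and each read has a small chance of an unrecoverable read error (URE). A back-of-envelope sketch (the model and the bit error rates are assumptions, the usual quoted figures of one error per 10^14 or 10^15 bits, not measurements of any specific drive):

```python
import math

# Rough sketch (assumed model): probability of hitting at least one
# unrecoverable read error (URE) while rebuilding a RAID 5 array.
# A rebuild reads every remaining drive end to end.
def rebuild_ure_risk(drives, drive_tb, ber=1e-15):
    # ber: unrecoverable bit error rate; 1e-14 is a common
    # consumer-class figure, 1e-15 a common enterprise-class one
    bits_read = (drives - 1) * drive_tb * 1e12 * 8  # bits scanned in rebuild
    # P(all bits read cleanly) = (1 - ber)^bits_read; log1p keeps it accurate
    return 1 - math.exp(bits_read * math.log1p(-ber))

# 3x 4 TB in RAID 5, enterprise-class drives: a few percent
print(f"{rebuild_ure_risk(3, 4):.1%}")
# Same array with consumer-class 1e-14 drives: dramatically worse
print(f"{rebuild_ure_risk(3, 4, ber=1e-14):.1%}")
```

This is why larger disks make RAID 5 scarier: the risk grows with total bits read, which is why many people suggest RAID 6 or RAID 10 at these capacities.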


Diagram of the Setup
Hey guys

I set up 2x Samsung 1TB SSDs in RAID 1 that are going to be used for a few VMs for the next 2 months.

I set up the RAID 1 with BTRFS, created a single LUN with advanced features, and connected to it with the MS iSCSI initiator.

Everything works. I see the drive, initialize it, and format it with NTFS.

The RackStation is connected with a 1 Gb cable to a gigabit switch, with 9000-byte jumbo frames enabled.

Big files transfer really fast in both directions, maxing out the link at 125 MB/s.

The issue is that when I run the ATTO benchmark on it, all the READS max out at a laughable 5 to 11 MB/s.

The write speeds do better, but not by much until I reach larger file sizes.

I am monitoring CPU and network usage in the GUI, but there is absolutely no load/stress on the machine.

Supposedly this RackStation can do better than that?

Here is what i have tried:

- Enable / disable jumbo frames = no change

- Trying EXT4 instead of BTRFS = gave around a 20% boost, but reads still stuck at 11 MB/s

- Trying different allocation sizes with NTFS (4k to 64k all the way) = small difference

- Trying ReFS = No difference

- Tried SMB3 mapped drive = no difference.

- Turning off all unnecessary services = no difference.

- Tried 3 different machines with different OS (server 2012R2, server 2016 and server 2019) = No difference

- Tried the RAID "Sync faster" options = better results in normal operation, but the benchmark still looks bad.

- Tried directly …
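For what it's worth, small-block read numbers like these usually look latency-bound rather than bandwidth-bound: at queue depth 1, each read waits out a full network round trip, so block size and round-trip time set a hard ceiling no matter how fast the link or the SSDs are. A back-of-envelope sketch (the 0.5 ms iSCSI round trip is an assumed figure, not a measurement of this RackStation):

```python
# Back-of-envelope (assumed model): at queue depth 1, each read must
# wait one network round trip, so throughput = block_size / latency.
def qd1_throughput_mbs(block_kib, rtt_ms):
    return (block_kib / 1024) / (rtt_ms / 1000)  # MiB/s

# 4 KiB reads over an assumed 0.5 ms iSCSI round trip:
print(f"{qd1_throughput_mbs(4, 0.5):.1f} MiB/s")     # ~7.8 MiB/s
# 1 MiB reads over the same round trip (link-limited in practice):
print(f"{qd1_throughput_mbs(1024, 0.5):.1f} MiB/s")
```

That ~7.8 MiB/s figure sits right in the reported 5-11 MB/s band, which is why large file copies max out the link while small-block benchmarks look terrible.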
I have a server that has 32GB of RAM and 8TB of HDD with RAID 1 for 4TB of total HDD space.

I would like to split that with 2 TB being NFS and 2TB being Samba.

This is sort of a discussion oriented question.

I was wondering about running NFS and Samba under LXD or Docker containers. Would there be advantages to doing this (at least for learning)?
I have a problem with an HP DL380 G4 server: it does not show anything on the display/monitor.
I removed the memory/RAM and started the server, but I am not getting the beep sound. So I arranged another motherboard, and I still didn't see anything on the screen/monitor.
What could the issue be? I changed the power supply as well. I just want to start the server, run P2V, and convert it to a virtual machine.
Appreciate your guidance.
I'm looking into RAID solutions for a very large file server (70+TB, ONLY serving NFS and CIFS). I know using ZFS raid on top of hardware raid is generally contraindicated, however I find myself in an unusual situation.

My personal preference would be to set up large `RAID-51` virtual disks, i.e. two mirrored RAID 5 sets, with each RAID 5 having 9 data disks + 1 hot spare (so we don't lose TOO much storage space). This eases my administrative paranoia by having the data mirrored on two different drive chassis, while allowing for 1 disk failure in each mirror set before a crisis hits.

HOWEVER this question stems from the fact, that we have existing hardware RAID controllers (LSI Megaraid integrated disk chassis + server), licensed ONLY for RAID5 and 6. We also have an existing ZFS file system, which is intended (but not yet configured) to provide HA using RFS-1.

The suggestion is to use the hardware raid to create two, equally sized RAID5 virtual disks on each chassis. These two RAID5 virtual disks are then presented to their respective servers as /dev/sdx.

Then use ZFS + RFS-1 to mirror those two virtual disks as an HA mirror set (see image)

Is this a good idea, a bad idea, or just an ugly (but usable) configuration?

Are there better solutions?

Our current RAID devices are reaching end of life, and I'm looking into the possibility of upgrading to a more robust system. The most important goal is high availability.

The image shows one possible route. I'm investigating:
1. Is there a better way to do this? (Or is this a bad way to do it).
2. The best tools for accomplishing this.

IF POSSIBLE, when all four enclosures are working – I’d like all 4 ethernet cables to be serving data – to overcome bandwidth problems.
suggested setup
On the left (current) is our current setup.
Top: RAID controller with 24 slots (+ 2 SSD slots in the back for the OS).
Middle: The 12 slot chassis (which may be failing)
Bottom: chassis (without motherboard) with 24 slots.

Can you advise a setup for something like this?
I've inherited a MegaRAID SAS 2208 24-drive-bay RAID, that is also attached to two additional enclosures (one 24 bay, and one 12 bay).

I attempted to load 6 new drives into 6 consecutive empty drive bays on the 12 bay enclosure (bays 6 - 11) and all 6 showed a solid red error light, and the admin mailing list got 6 messages...

    Controller ID:  0   Phy is bad on enclosure:   2  PHY
    Event ID:185
    Generated On: Tue Feb 12 17:02:27 CET 2019

    System Details---
    IP Address:
    OS Name: Linux
    OS Version:3.13
    Driver Name: megaraid_sas
    Driver Version: 06.700.06.00-rc1

    Image Details---
    BIOS Version : 5.37.00_4.12.05.00_0x05180000 Firmware Package Version: 23.9.0-0015 Firmware Version : 3.220.05-1881

Looking up the event ID in the documentation gives me:

    Enclosure %s phy %d bad
    Logged when the status indicates a device presence, but there is no corresponding SAS address is associated with the device.

- I put the 6 new drives in one of the other enclosures, and they worked fine (so not bad drives).
- There are 6 drives operating fine in bays 0-5.

It seems odd that all 6 sequential drive bays 6-11 should be bad, yet apparently the entire backplane is not bad.

**Is it possible for only half of the enclosure's backplane to have gone bad? Or is this a firmware (or some other configuration) problem?**
I have inherited a Megaraid SAS 2208 RAID (24 drive slots) with two additional external enclosures (one 12 slot, and one 24 slot)

The problem is that the 12-slot enclosure is demonstrating hardware problems. It is well out of warranty and has no support contract.

I'm considering buying a new enclosure and moving the disks. However, I don't know how to recreate the RAID6 virtual drive WITHOUT losing the data on the disks.

Is this even possible?
I have inherited a Megaraid 2208 RAID device with 7 virtual disks ranging in size from 7 to 14TB.

The file server hosting this RAID sees these 7 virtual disks as /dev/sda, /dev/sdb, etc.

The previous admin then used ZFS to combine all these drives into one giant pool, and then parcel it out into datasets.

I'm new to ZFS. The question is: is it possible to determine, via ZFS, what data is stored on which disks?

For example, if we lost 6 consecutive disks in the array (a real possibility for one of the enclosures) - after repairing and replacing the disks, how could I know what data to replace specifically? Or would I have to do a restore on ALL the data held within the pool, and simply tell the restore software not to overwrite existing files?


Win10Pro/2 ext HDs G-Tech 1T.

I'm going to put two identical HDs in a RAID 1. Can I share the RAID on my LAN?
Restoring a ShadowProtect backup onto a new RAID 5 array. This is a ThinkServer TS430 with an LSI MegaRAID 9240-8i controller (no cache or battery backup) supporting a 3-drive RAID 5 with 450 GB Cheetah drives. The original array has been having issues, and we decided to replace it with a set of 3 identical but "refurbished" drives from ServerSupply.
A backup of the full 650 GB system volume (the server only has one volume) to a SATA drive on the same controller takes 2.2 hours. We attempted a restore to the new array and found that restore times were a full day or more: the transfer rate was only 5.75 MB/s. Is this normal for this hardware, or are we missing something in our ShadowProtect configuration? We have a bootable USB stick that is allowing us to do the restore.
My options are
1. Stick with the original plan and do the restore over a weekend to not have business impact to customer
2. Change configuration for restoring to another destination, like a Single or RAID-1 SSD
The server is 5 years old and needs to work for another year. It runs Small Business Server 2011 with Exchange and SQL Server in production for a 7-person office.
Had a glitch this morning with our RAID 5 array. I resynced the array and it said all was good, but now there are two identical arrays using the same drives. I fear that if I delete one, it will delete both. It's an Adaptec drive array controller. I have VMware on the server, and it's not seeing the config to boot correctly because of the ghosted array. Any ideas?
Hi. I have an older HP ProLiant server and have a quick question about RAID.
The server has 8 hard drives.
The first 2 were used for a RAID1 setup and that's where the OS was installed (Server 2008 R2).
The other 6 drives were setup in a RAID10 config and housed a COPY of the company's data.
I decided to replace the first 2 drives with SSDs and was thinking that the other 6 drives would still be accessible after the OS was reinstalled on the 2 x SSDs.
I noticed when I initially booted into the RAID setup, I saw a message stating that some drives and configs were removed and such.
It's not a HUGE deal to recreate the RAID10 config on the other 6 drives and copy all of the company data again, but it is very time-consuming.
Just wondering if I have any options before I recreate the RAID10 config and copy everything all over again.
Thanks in advance.
Have a Lenovo TS430 server with an LSI MegaRAID controller and 3x 450 GB 15K SAS drives in RAID 5. The server has been working for over 5 years and recently started to run out of space on the HD volume. I did a cleanup and removed a lot of files, then ran a defrag to reduce fragmentation. Later that night, it failed, possibly while the defrag was running.
Found that 2 of the 3 SAS drives were not recognized by the RAID controller as part of the boot array. Two disks were showing up as "Unconfigured, Bad". I was able to "import" the "Unconfigured, Bad" disks into the array and bring them "Online" using the WebBIOS for the LSI controller. Made sure the array was set as the boot device in the RAID menu.

Optimistically, I tried to boot into Windows Server 2008 R2 installed on the server. The boot failed, with Windows saying the boot device cannot be accessed. Went back into the LSI WebBIOS, and now it reports that one of the 2 drives I imported is being "rebuilt" into the array. Hoping that will restore booting. However, since this is hardware RAID 5, I cannot understand why Windows is not booting even while the array is being rebuilt. The Windows boot screen suggests using the Windows Server boot CD to do a boot repair. Waiting for the rebuild to complete before I try going further.

Any advice?  Thanks in advance. This has been a nightmare.

System is a PowerEdge R710 with a PERC H700 integrated controller. The virtual drive in question is RAID 5 with 3 drives. I have been monitoring controller logs weekly and running consistency checks for the past 3 months with 0 issues. Suddenly, yesterday, OMSA showed 1 drive as predictive failure. I went through the logs and see a bunch of unexpected sense entries, many of them stating "corrected medium error", but I do not see any unrecoverable errors.

I went onsite, offlined the drive, then inserted a new Dell-branded drive. The rebuild completed, but the new drive is also reporting predictive failure. I went through the logs again and noticed many more unexpected sense entries during the rebuild process. I then ran a consistency check, which again produced many unexpected sense entries. The drive was still in predicted-failure state, so I replaced the drive again with another new drive. This time, after the rebuild, the drive was not showing predictive failure. Just to be sure, I ran another consistency check, which put the same drive in predictive-failure state again. There are again a bunch of unexpected sense entries, many of them stating "corrected medium error". I am unsure how to proceed.

The firmware for controller, drives, & BIOS are all up to date, but IDRAC6 & Lifecycle are out of date.  IDRAC6 is at 1.92.00 (build 5) and Lifecycle is at

Please let me know your thoughts.  If you recommend updating IDRAC6 & Lifecycle, please let me know where to find updates and steps for updating.  I am a …
I am ordering a new server.  It will be Win 2016 and only be used as a DC.  It will have three drives.

My question is should it be Raid 0 or Raid 1?
What is the best config for VMware 6.7 for the following hardware: HP DL380 Gen10, 2 processors (8 cores) with 64 GB memory each, 6 x 960 GB SSD, 2 x 4-port network cards?

I am talking about :

1) Which RAID config (Windows servers and some Linux machines will run as guests, 10 in total)
2) Network setup (teaming or not),
3) How to get the best performance out of this hardware config
I need more space on my primary partition. I have a Dell 410T, 4 x 300GB 10K RPM SAS 6Gbps 2.5in Hot-Plug hard drives, and a PERC H700 Adapter RAID Controller with 512 MB cache, in RAID 5, for a total of 837 GB: a 40 GB primary partition on C: and 794 GB on D: (525 GB free). I would like to shrink D: and expand C: by 60 GB. The server is running preinstalled software. Is there a way I can do this without destroying the existing disk structure?

Hi, I was going to build a server with three 2 TB drives and put them in RAID 5 to get about 4 TB of storage plus fault tolerance.

I remember a lot of techs here didn't like RAID 5 due to the amount of time it takes to rebuild, and the chance that another drive fails in that time. Is that still true with SSDs, or was it ever?

The controller I'm using is a PERC H730P

Thanks a ton guys
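The rebuild-window part of that worry is roughly arithmetic: capacity divided by sustained rebuild rate. A sketch with assumed round-number rates (not measured on a PERC H730P, and real controllers often throttle rebuilds under load):

```python
# Rough sketch (assumed rates): rebuild time ~ drive capacity / sustained
# rebuild rate. SSDs rebuild much faster than spinning disks, which
# shrinks the window in which a second failure can kill the array.
def rebuild_hours(drive_tb, rate_mb_s):
    return drive_tb * 1e6 / rate_mb_s / 3600  # TB -> MB, seconds -> hours

print(f"{rebuild_hours(2, 100):.1f} h")  # 2 TB HDD at an assumed ~100 MB/s
print(f"{rebuild_hours(2, 400):.1f} h")  # 2 TB SSD at an assumed ~400 MB/s
```

So the classic RAID 5 objection is weaker with SSDs (shorter rebuild, and no mechanical-wear correlation from the rebuild workload), though the parity write penalty and URE math still apply.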
I have a Dell T620 with a PERC H710. It is configured RAID 5 with 6 930 GB SAS drives, for a capacity of 4.5 TB.

A few days ago, on a reboot, I saw that there was a foreign config on the drives. As the OS was booting properly, I cleared the foreign config.

After that, I noticed that the virtual drive is in a degraded state. Drives 1-5 are online, but drive 0 appears unassigned with a status of "Ready". The only option the controller gives me is to set it up as a hot spare. I tried that, and it was set up that way, but it won't integrate into the virtual drive. How do I get it to reintegrate?


I need to replace a six year old Dell Poweredge T620 server.  I was thinking about replacing it with another Poweredge T630, with this CPU:  Intel® Xeon® E5-2630 v4 2.2GHz,25M Cache,8.0 GT/s QPI,Turbo,HT,10C/20T (85W) Max Mem 2133MHz.  I want to use Hyper-V but I'm not sure if I should use RAID or if there really is a big enough performance advantage for the extra cost.  I was thinking about going RAID10 using the PERC H730 RAID Controller with 1GB NV Cache but is that a good idea to have the VHDX and host all on the same RAID10?

Would there be performance problems if I just had two standalone SSDs? One would have the host Windows Server 2019, and the VMs would be on the second SSD. I understand that RAID 10 (with 4 drives) is roughly 4 times faster than a standalone drive on reads and 2 times on writes. But with SSDs being so fast now, is it necessary to go with RAID 10 for a small office? I will be running an Active Directory/application VM and a Remote Desktop VM. There are a total of 15 users, and 5 of them will occasionally work on the Remote Desktop VM. Thanks in advance for any and all suggestions, recommendations, and criticisms!
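The 4x-read/2x-write figure quoted above is the usual first-order model for 4-drive RAID 10: reads can be served by every drive, while each write must land on both halves of a mirror. A sketch of that arithmetic (the per-drive speeds are assumed round numbers, and real controllers rarely hit the theoretical ceiling):

```python
# First-order model (assumed numbers): RAID 10 read/write scaling.
def raid10_scaling(n_drives, drive_read_mbs, drive_write_mbs):
    read = n_drives * drive_read_mbs            # all drives serve reads
    write = (n_drives // 2) * drive_write_mbs   # each write hits both mirrors
    return read, write

# 4 SATA SSDs at an assumed ~500/450 MB/s each:
r, w = raid10_scaling(4, 500, 450)
print(r, w)  # 4x read, 2x write versus a single drive
```

In practice, a small office's VM workload is bound by random IOPS and latency more than sequential throughput, so the redundancy of RAID 10 (or at least RAID 1) usually matters more than the speed multiplier.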
What's the difference between a data center internal hard drive and a network-attached storage (NAS) internal hard drive? Can data center internal hard drives be used in RAID? Out of curiosity, what brand of hard drives does Google use in their data centers?
Going to set up a server for a small company...6 employees...
PowerEdge T130....two 4 TB HDs in RAID 1...
Server 2016 Essentials...
This will be my first time on a server with RAID1...

Whenever I have set up RAID1 in the past on a working PC....I've done a software RAID with the OS...

But on a small server like this...do you suggest a hardware RAID...???

Is there any real difference...???

Recommend a RAID card...???

Thanks in advance...
Hey guys,

we have a strange problem. We built a RAID 5 with 4 SSDs, fast-initialized it, and installed Windows Server 2012 R2. The ServerView RAID Manager shows an Initialization Status of NOT INITIALIZED for the virtual disk.

The RAID controller is a PRAID CP400i Megaraid

How big is the problem?



