RAID stands for Redundant Array of Independent Disks.

RAID is a data storage technology that allows multiple drives to be used together as a single virtual drive for reasons such as fault tolerance, reliability, and performance.

There are several different levels of RAID that determine how the data is stored and the level of redundancy achieved across the drives.


I have a Dell PowerEdge R510 server with a 3-drive RAID 5.
One of the drives is flashing an orange and green light.
I am going to back up two servers and maybe three workstations to it nightly. I don't need a lot of bells and whistles; maybe 5 bays with RAID 5 or 6. I want something rock-solid reliable and easy to maintain. I look at QNAP and there are just too many bells and whistles, and too many different models. Any suggestions?
I am trying to build a new server with three Micron 960GB SSD drives in RAID 1 plus a hot spare. The motherboard is a Supermicro SM X11SCH-F in a Supermicro SM 733 tower chassis.
The question is whether to use the built-in RAID controller for this or buy a more expensive controller like the "LSI Logic LSI00344 9300-8i SGL SAS 8Port 12Gb/s".
In the past, I always purchased the LSI Logic LSI00344 9300-8i.
However, I am getting SSD drives, which are so much faster, so I am wondering if there are real benefits to spending $700 on that controller.

 I would appreciate your insight.
My small business is getting a Dell PowerEdge T440, and we were advised to use RAID 1 for the OS and RAID 5 for the other drives.

We are going to get eight 3.5-inch drive slots. The OS will use 2 drives in RAID 1, and then we want 4 drives set up in RAID 5.

Does this give us 4 drives and 2 "hot spares"?
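For what it's worth, the arithmetic behind a layout like this can be sketched in a few lines. This is a rough sketch, not vendor-specific; the drive sizes and the `usable_capacity` helper are hypothetical, and the key point is that leftover bays only become hot spares if you explicitly assign them in the controller setup.

```python
def usable_capacity(level, drives, size_tb):
    """Rough usable capacity (TB) for common RAID levels.

    Hot spares sit idle until a failure, so they contribute no
    usable space and are not passed to this function.
    """
    if level == 1:
        return size_tb * drives / 2     # mirrors: half the raw space
    if level == 5:
        return size_tb * (drives - 1)   # one drive's worth of parity
    if level == 6:
        return size_tb * (drives - 2)   # two drives' worth of parity
    raise ValueError("unsupported RAID level")

# Hypothetical 8-bay layout: 2 bays in RAID 1 for the OS, 4 bays in
# RAID 5 for data. The remaining 2 bays hold nothing until you assign
# them as hot spares in the controller setup.
print(usable_capacity(1, 2, 1.0))   # OS volume with 1 TB drives -> 1.0
print(usable_capacity(5, 4, 4.0))   # data volume with 4 TB drives -> 12.0
```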
I'm not familiar with ProLiant servers, and I seem to be getting mixed info on the drive array in an HP ProLiant DL380 Gen 5 server. The front panel lists the drives as 60 GB drives; there are 5 of them. If I go into the System Management Homepage, the array controller lists the physical drives as 466 GB (I assume 500 GB) drives, and there are 5 in the physical drive listing. I will attach a screenshot of the management console. I'm also attaching a shot of the front of the server and of the E200 array, which shows no physical drives. I'm not sure why there are two arrays in the computer; the first was a P400 array.

One of the drives in the front panel is showing a solid green LED light on. From what I've been able to locate, that indicates the drive has failed and has been taken offline by the controller. Is that right?

If I go to replace the drive, what do I actually use? I see that there are 60 GB hot swap drives that were made for that unit but the management console is giving me a different picture so I'm not sure what to try.

Any advice would be greatly appreciated.


I noticed a couple of reviews of Highpoint's new M.2 NVMe RAID controller. What I noticed was that they were testing with consumer NVMe drives, which raises the question of what is going on with TRIM/garbage collection.

Everyone used to stay away from consumer SSDs in servers primarily because RAID controllers didn't support TRIM. Have there been any major improvements to TRIM or garbage collection? Do any RAID controllers at this point support TRIM? Or are you still limited to enterprise SSDs/NVMe drives in servers for power protection and garbage collection?
I am running Exchange 2016 on Server 2012 R2 Std.

My server has 3 RAID Volumes.

According to "diskpart detail disk":

Disk 0 (2 partitions, Volume 0, Volume 1)
Disk 1 (1 partition, Volume 2)
Disk 2 (1 partition, Volume 3)

My event viewer (log = SYSTEM) has the following warning that comes approximately in a cluster of 4 about once per day:

What does "\Harddisk3\DR5" refer to? How can I find out?

Is it Volume 3, physical disk 5?

- <Event xmlns="">
- <System>
  <Provider Name="disk" />
  <EventID Qualifiers="32772">51</EventID>
  <TimeCreated SystemTime="2019-03-26T02:02:58.093233900Z" />
  <Security />
- <EventData>
People today are virtualizing servers just for the sake of virtualizing, so I started playing. I have an old server lying around: E3-1230, 16GB memory, Adaptec 6405 controller with three new Seagate 7200rpm SAS drives in a RAID 5 configuration. It is dated but not a real slouch. So I installed Server 2019 Standard, then installed it again as a VM and migrated my Server 2012 to it. The first thing I noticed on my workstation was that things were running slower. Not to a dead stop, but noticeably slower.

So, what the heck, I virtualized a Windows 10 Professional workstation too. It doesn't do much, but it is so slow doing everything that it is painful. I would kill myself if I had to use it every day. Between the virtualized server and workstation, I have to be doing something wrong; they just shouldn't be that slow from what I have heard. Both are Hyper-V. I have to be missing a basic setting. Any general help or guidelines on what I could be missing?
HP ProLiant ML350p Generation 8 server: I can't seem to figure out why it won't boot to a USB stick. The one-time boot menu offers USB Key, but when I select that it immediately tries to boot off the RAID volume I created. Stumped big time. Has anybody run into this?

The setup is pretty simple: 4 drives in two RAID 1 configurations. I have looked in the BIOS and I don't see any sort of legacy option.

Any help would be appreciated.
I am going to buy an HPE ProLiant ML110 Gen10 Solution - Tower - Xeon Silver 4110 2.1 GHz - 16 GB server (Part# P03687-S01) without a separate RAID controller (like the HP 830824-B21 Smart Array P408I-P S) to cut down on cost.

Last time, I purchased the HPE Midline hard drive - 1 TB - SATA 6Gb/s (Part# 655710-B21) for this server.
However, this time I wanted to get a 2TB HD of the same kind, but ran into the "HP 2TB 7200 rpm SAS-3 3.5" Internal SC Midline Hard Drive" (MFR # 818365-B21). It is SAS, even though it is only 7200 RPM.

Is there an expert out there who has used this hard drive? If yes:
(1) Is it compatible with the HPE ProLiant ML110 Gen10 - Tower - Xeon Silver 4110 2.1 GHz - 16 GB?
(2) From a performance standpoint, is it worth paying more for this HD over a typical HPE 2TB SATA drive like the "HPE Midline - Hard drive - 2 TB - hot-swap - 2.5" SFF - SATA 6Gb/s - 7200 rpm (Part# 765455-B21)"?

I am going to get three of these and set up RAID 1 with one hot spare, and will run Windows Server 2016 with two virtual servers (first VM: a W2016 domain controller; second VM: a W2016 terminal server running QuickBooks and other tax software for two concurrent users).

Hi Experts,

I'm trying to build a raid 5 array for our file server using the following:
1.  Dell PowerEdge R530 - server
2.  Windows Server 2016 - OS
3.  3x 4TB SAS - Hard Drives
4.  PERC H330 Mini - the Dell server's embedded RAID controller

But from my inquiries on different forums, I get these ideas:
1.  There's a risk in RAID 5 with large disks: rebuilding the array takes a long time, and another disk failing during the rebuild becomes more likely.
2.  A RAID controller without cache is not wise to use for parity RAID.

Since the PERC H330 is an entry-level RAID card that does not have cache, is it wiser to use software RAID in this situation, or stick with the built-in RAID controller?
Should I go ahead and build the RAID 5 array with 4TB disks?
What would be the best option here?


Diagram of the Setup
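On point 1, the rebuild risk can be put into rough numbers. This is a back-of-the-envelope sketch, not a prediction: it assumes independent bit errors at the vendor-quoted unrecoverable read error (URE) rate, commonly specified as 1 in 10^14 bits for nearline SATA drives, and the `p_ure_during_rebuild` helper is illustrative.

```python
def p_ure_during_rebuild(surviving_disks, disk_tb, ure_rate_bits=1e14):
    """Chance of hitting at least one unrecoverable read error (URE)
    while rebuilding, assuming independent bit errors at the quoted
    rate. A RAID 5 rebuild must read every surviving disk in full."""
    bits_read = surviving_disks * disk_tb * 1e12 * 8
    return 1 - (1 - 1 / ure_rate_bits) ** bits_read

# 3-disk RAID 5 of 4 TB drives: a rebuild reads the 2 surviving disks.
print(f"{p_ure_during_rebuild(2, 4.0):.0%}")   # → 47%
```

Drives rated at 1 in 10^15 bits bring the same rebuild down to a few percent, which is part of why enterprise drives (or RAID 6) are usually recommended at these capacities.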
Hey guys

I set up two Samsung 1TB SSDs in RAID 1 that will be used for a few VMs for the next 2 months.

I set up RAID 1 with Btrfs, created a single LUN with advanced features, and connected to it with the MS iSCSI initiator.

Everything works. I see the drive, initialize it, and format it with NTFS.

The RackStation is connected with a 1 Gb cable to a gigabit switch, with 9000-byte jumbo frames enabled.

Big files transfer really, really fast, back and forth (maxing out at 125 MB/s).

The issue I have is that when I run the ATTO benchmark on it, all the READs max out at a laughable 5 to 11 MB/s.

The write speeds do better, but not by much until I reach bigger files.

I am monitoring CPU and network usage in the GUI, but there is absolutely no load/stress on the machine.

Surely this RackStation can do better than that?

Here is what i have tried:

- Enable / disable jumbo frames = no change

- Trying out ext4 instead of Btrfs = gave around a 20% boost, but reads still stuck at 11 MB/s

- Trying different allocation sizes with NTFS (4k to 64k all the way) = small difference

- Trying ReFS = No difference

- Tried SMB3 mapped drive = no difference.

- Turning off all unnecessary services = no difference.

- Tried 3 different machines with different OS (server 2012R2, server 2016 and server 2019) = No difference

- Tried the RAID "Sync faster" options = gave better results in normal operation, but the benchmark still shows bad numbers.

- Tried directly …
A user has two locations where they work on their photo library using Adobe Lightroom on a Mac. They are 8 months at the first location, then 4 months at the second for the summer.

Can I have a Synology NAS at each location to synchronize their work?

The internet connection in NYC is FiOS, which currently measures 600+ Mbps on Speedtest for both download and upload.
The internet connection in MA (Massachusetts) is Xfinity, which measures 30 Mbps download and 6 Mbps upload.

They have about 5 TB of photos on a RAID directly attached to their computer.
I can use software to sync the RAID files to and from the Synology.

I will do the initial synchronization of the Synology units in NYC and have them bring one unit to MA.

Once in MA, the user adds 1 to 2 GB of photos every few days, which will be synchronized to the NAS.

At the end of the 4 months, when the user returns to NYC, I would like their photo library and LightRoom catalogs to be a mirror of what they were working on in MA, and ready to use.

Is this a good method?
Any steps missing?

Hard drives keep falling out of RAID 5. I have a custom storage array with an LSI 9260-4i and an HP 3Gb SAS array card. Every once in a while, after patching the server, a drive is failed out of the array. I check the physical cable and drive, and the drive is still running. I have this mirrored with a Synology disk array, which works flawlessly. What is the best way to confirm whether it's the LSI RAID card or the HP array card that's causing the drives to fall out? By the way, these are 3TB enterprise-level SATA hard drives with 128MB cache.
I have a server that has 32GB of RAM and 8TB of HDD with RAID 1 for 4TB of total HDD space.

I would like to split that with 2 TB being NFS and 2TB being Samba.

This is sort of a discussion oriented question.

I was wondering about running NFS and Samba under LXD or Docker containers. Would there be advantages to doing this (at least for learning)?
I have a problem with an HP DL380 G4 server. The server does not show anything on the display/monitor.
I removed the memory/RAM and started the server, and I am not getting the beep sound. So I arranged another motherboard, and I still didn't see anything on the screen/monitor.
What could the issue be? I changed the power supply as well. I just want to start the server, run P2V, and convert it to a virtual machine.
I appreciate your guidance.
I'm looking into RAID solutions for a very large file server (70+TB, ONLY serving NFS and CIFS). I know using ZFS raid on top of hardware raid is generally contraindicated, however I find myself in an unusual situation.

My personal preference would be to set up large `RAID-51` virtual disks, i.e., two mirrored RAID 5 sets, with each RAID 5 having 9 data disks + 1 hot spare (so we don't lose TOO much storage space). This eases my administrative paranoia by having the data mirrored on two different drive chassis, while allowing for 1 disk failure in each mirror set before a crisis hits.

HOWEVER, this question stems from the fact that we have existing hardware RAID controllers (LSI MegaRAID integrated disk chassis + server), licensed ONLY for RAID 5 and 6. We also have an existing ZFS file system, which is intended (but not yet configured) to provide HA using RFS-1.

The suggestion is to use the hardware RAID to create two equally sized RAID 5 virtual disks on each chassis. These two RAID 5 virtual disks are then presented to their respective servers as /dev/sdx.

Then use ZFS + RFS-1 to mirror those two virtual disks as an HA mirror set (see image).

Is this a good idea, a bad idea, or just an ugly (but usable) configuration?

Are there better solutions?
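For reference, the capacity cost of such a `RAID-51` layout can be sketched as follows. The bay counts and 8 TB drive size are hypothetical, and `raid51_usable_tb` is just an illustrative helper for the "9 data + 1 hot spare per side" scheme described above.

```python
def raid51_usable_tb(bays_per_side, spares_per_side, disk_tb):
    """Usable capacity of 'RAID 51': two identical RAID 5 sets
    mirrored against each other. The mirror keeps only one side's
    capacity; each RAID 5 loses one disk to parity, and hot spares
    hold no data."""
    active = bays_per_side - spares_per_side
    return (active - 1) * disk_tb

# Hypothetical: 10 bays per chassis, 9 active + 1 spare, 8 TB disks.
# 20 physical disks yield 8 disks' worth of usable space.
print(raid51_usable_tb(10, 1, 8.0))   # → 64.0
```

So the mirroring, parity, and spares together cost 60% of the raw space in this hypothetical configuration, which is the trade-off against the single-enclosure-failure protection it buys.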

Our current RAID devices are reaching end of life, and I'm looking into the possibility of upgrading to a more robust system. The most important goal is high availability.

The image shows one possible route. I'm investigating:
1. Is there a better way to do this? (Or is this a bad way to do it?)
2. The best tools for accomplishing this.

IF POSSIBLE, when all four enclosures are working, I'd like all 4 ethernet cables to be serving data, to overcome bandwidth problems.
suggested setup
On the left is our current setup.
Top: RAID controller with 24 slots (+ 2 SSD slots in the back for the OS).
Middle: The 12 slot chassis (which may be failing)
Bottom: chassis (without motherboard) with 24 slots.

Can you advise a setup for something like this?
I've inherited a MegaRAID SAS 2208 24-drive-bay RAID that is also attached to two additional enclosures (one 24-bay, and one 12-bay).

I attempted to load 6 new drives into 6 consecutive empty drive bays on the 12 bay enclosure (bays 6 - 11) and all 6 showed a solid red error light, and the admin mailing list got 6 messages...

    Controller ID:  0   Phy is bad on enclosure:   2  PHY
    Event ID:185
    Generated On: Tue Feb 12 17:02:27 CET 2019

    System Details---
    IP Address:
    OS Name: Linux
    OS Version:3.13
    Driver Name: megaraid_sas
    Driver Version: 06.700.06.00-rc1

    Image Details---
    BIOS Version : 5.37.00_4.12.05.00_0x05180000 Firmware Package Version: 23.9.0-0015 Firmware Version : 3.220.05-1881

Looking up the event ID in the documentation gives me:

    Enclosure %s phy %d bad
    Logged when the status indicates a device presence, but no corresponding SAS address is associated with the device.

- I put the 6 new drives in one of the other enclosures, and they worked fine (so not bad drives).
- There are 6 drives operating fine in bays 0-5.

It seems odd that all 6 sequential drive bays (6-11) should be bad, yet apparently NOT the entire backplane.

**Is it possible for only half of the enclosure's backplane to have gone bad? Or is this a firmware (or some other configuration) problem?**
I have inherited a MegaRAID SAS 2208 RAID (24 drive slots) with two additional external enclosures (one 12-slot, and one 24-slot).

The problem is that the 12-slot enclosure is demonstrating hardware problems. It is well out of warranty and has no support contract.

I'm considering buying a new enclosure and moving the disks. However, I don't know how to recreate the RAID 6 virtual drive WITHOUT losing the data on the disks.

Is this even possible?
I have inherited a Megaraid 2208 RAID device with 7 virtual disks ranging in size from 7 to 14TB.

The file server hosting this RAID sees these 7 virtual disks as /dev/sda, /dev/sdb, etc.

The previous admin then used ZFS to strap all these drives into one giant pool, and then parse them out into datasets.

I'm new to ZFS. The question is; is it possible to determine what data is stored on which disks via ZFS?

For example, if we lost 6 consecutive disks in the array (a real possibility for one of the enclosures) - after repairing and replacing the disks, how could I know what data to replace specifically? Or would I have to do a restore on ALL the data held within the pool, and simply tell the restore software not to overwrite existing files?

Windows 10 Pro, two external G-Tech 1TB HDs.

I'm going to put the two identical HDs in a RAID 1. Can I share the RAID on my LAN?

We have an HP ProLiant DL380 G5. I have removed all the low-capacity hard disks and have now purchased 8 refurbished HP 300GB 6G SAS 2.5-inch hard disks for this server.

Firstly, please let me know what RAID level would be best for this file server.

I want to leave one hard disk as a hot spare. Can you point me to tutorials for achieving this?
Thanks, and any help would be great.

Secondly, will I be able to install Windows Server 2012 Standard?

I am restoring a ShadowProtect backup onto a new RAID 5 array. This is a ThinkServer TS430 with an LSI MegaRAID 9240-8i controller (no cache or battery backup) supporting a 3-drive RAID 5 with 450 GB Cheetah drives. The original array has been having issues, and we decided to replace it with a set of 3 identical but "refurbished" drives from ServerSupply.
A backup of the full 650 GB system volume (the server only has one volume) onto a SATA drive on the same controller takes 2.2 hours. We attempted a restore to the new array and found that restore times were a full day or more; the transfer rate was only 5.75 MB/s. Is this normal for this hardware, or are we missing something in our configuration of ShadowProtect? We have a bootable USB stick that is allowing us to do the restore.
My options are:
1. Stick with the original plan and do the restore over a weekend so there is no business impact to the customer.
2. Change the configuration to restore to another destination, like a single SSD or a RAID 1 SSD pair.
The server is 5 years old and needs to work for another year. It runs Small Business Server 2011 with Exchange and SQL Server in production for a 7-person office.
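As a sanity check on the reported numbers, the observed rate can be turned into an expected restore time. This is a simple arithmetic sketch; `transfer_hours` is an illustrative helper, and it uses decimal units, as most backup tools report.

```python
def transfer_hours(size_gb, rate_mb_s):
    """Wall-clock hours to move size_gb at a sustained rate_mb_s
    (decimal units: 1 GB = 1000 MB)."""
    return size_gb * 1000 / rate_mb_s / 3600

# The 650 GB volume at the observed 5.75 MB/s:
print(f"{transfer_hours(650, 5.75):.1f} h")   # → 31.4 h
# At a healthier sequential rate of 80 MB/s, the same restore
# would take roughly 2.3 hours, comparable to the backup time.
```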
When I turn the PC off and then back on, it starts to boot, but then I get a black screen with "press ctrl+f to enter raid option rom utility". Pressing Ctrl+F does nothing, and the computer is stuck in a reboot loop! Please help!

