RAID stands for Redundant Array of Independent Disks.

RAID is a data storage technology that allows multiple drives to be used together as a single virtual drive for reasons such as fault tolerance, reliability, and performance.

There are several different levels of RAID that determine how the data is stored and the level of redundancy achieved across the drives.
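A rough way to see how the level choice trades capacity for redundancy is to compute the usable space each common level leaves. This is a simplified sketch assuming equal-size drives (the function name and simplifications are ours; real controllers reserve some additional metadata space):

```python
def usable_capacity(level, n_drives, drive_tb):
    """Approximate usable capacity in TB for common RAID levels,
    assuming all drives are the same size."""
    if level == 0:            # striping: full capacity, no redundancy
        return n_drives * drive_tb
    if level == 1:            # mirroring: capacity of one drive
        return drive_tb
    if level == 5:            # one drive's worth of parity, survives 1 failure
        return (n_drives - 1) * drive_tb
    if level == 6:            # two drives' worth of parity, survives 2 failures
        return (n_drives - 2) * drive_tb
    if level == 10:           # striped mirrors: half the raw capacity
        return (n_drives // 2) * drive_tb
    raise ValueError(f"unsupported RAID level: {level}")

# Example: four 2 TB drives
print(usable_capacity(5, 4, 2))   # RAID 5  -> 6 TB usable
print(usable_capacity(10, 4, 2))  # RAID 10 -> 4 TB usable
```

Nested levels such as RAID 50/60 follow the same per-sub-array logic.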


Hi, I have a Dell PowerEdge R630 with 6 disks (2 virtual disks with 3 disks each, both RAID 5). The controller is a PERC H730P Mini.
One of the tray caddies is broken, and I need to replace it.
I could put the disk in a "fail" state and replace the tray caddy.
Since I can afford downtime, I'm wondering if I can simply shut down the server, extract the disk, replace the tray, and power on again.
Will the RAID controller "notice" anything?
Thank you!
I'm running a domain controller using Windows 2008 R2 in a RAID 1 setup. This morning it crashed and won't start up past a cursor on a black screen before crashing again. I've tried:
1) starting in safe mode, 2) using redundant boot records, 3) booting to the second drive, all without any different result.
I then tried the Windows 2008 R2 install disc and the "Repair your computer" option. No image was available. I then went to the command prompt and tried SFC /scannow; I got the error message "A system repair is already pending...". I then tried "dism.exe /revertpendingactions". It tells me that ScratchDir may be too small to work, but I have at least 3.5 GB free on a 186GB SSD.

So now I'm stumped. Can someone give me a suggestion of what to do next? Thanks
Short description: Dell PowerEdge T300 logical drives keep disappearing about an hour or two after boot!

I've got a Dell PowerEdge T300 server with 20GB RAM and 1.50TB storage, running Small Business Server (SBS) 2011. The main function of this server is handling Exchange 2010. Roughly 1 to 2 hours after boot the server slowly begins to crash.

At first it looked like the SQL Server database was full and something was possibly swamping the processes, causing the server to freeze. While watching the disk accesses in Resource Monitor I saw that the logical drives (C: and E:) would suddenly disappear. Once that happened, all functions on the server would slowly crash and I would have to do a hard reboot. The reboot would take about 20 minutes before I could get to the actual desktop to view the event logs, Task Manager, and Resource Monitor. Everything would run like normal, then after about an hour or two the crash process would happen all over again (logical drives would disappear, Explorer would freeze, I'd need to do a hard reboot, etc.).

This led me to check the RAID controller (Dell SAS 6/iR). At times it froze while I was trying to go through the menu options. Having dealt with this before on another Dell server, I suspected that the RAID battery might be failing. Looking at Array SAS1068E, it shows a status of Degraded and 0% Syncd. As I type this I'm about to return to the office to confirm which physical drive has failed.

On Friday afternoon I got in touch …
I'm configuring VMware vSAN on a few Dell PowerEdge R720s with an embedded PERC H710 Mini RAID controller, running vSphere 6.5. I'm trying to surface the drives (2 SSDs and 6 HDDs), but there are restrictions on the H710 that don't allow me to convert to non-RAID. VMware documentation indicates that drives must be set up in a "pass-through RAID" configuration. So, do I get another RAID controller that allows for non-RAID drives, or is there a way of creating RAID 0 drives, or can I mirror the 2 SSDs (RAID 1) and RAID the other 6 HDDs? Maybe I'm old school, but I tend to trust controller RAID more than software RAID; it's required, though, so I'm not sure I can get around it.
Need to make a VM out of a CentOS 6 Asterisk box with 3 500GB SATA drives in RAID 5. This gives me a 1TB drive that has 42GB used.
After some googling, it looked easy to use "dd conv=sparse", but that produced a 1TB disk image. I loaded the image on /dev/loop0 and tried to use gparted to shrink it, but gparted said it needed gpart to see inside, and gpart was nowhere to be found. (The disk image is on a CentOS 7 machine.)

I have used TestDisk on the image with a few different parameters but it always comes up with results that don't match fdisk -l on the original machine.  (I have attached the output.)

I was going to try Clonezilla next, but I thought I would ask for some expert advice before wasting more time.

So, what is the best way to get a 1TB RAID 5 image with multiple partitions into a VirtualBox VM with only the 42G of data? (I would make a 60GB virtual drive anyway.) fisk-output.txt

I have a server running VMware ESXi on a Dell PowerEdge R510. It's set up with RAID with 4 10K SAS drives. The capacity is 1.64 TB. All 4 of the SAS drives have been recently replaced. I'm not at all familiar with VMware as I've always used Hyper-V and other VM solutions. There's only one VM set up, running Server 2016: Xeon X5675 (8 processors) with 48 GB RAM allocated to the VM. The host has 64 GB RAM and 12 CPUs. A software vendor (I'll call them vendor A) recently upgraded their software on the server, and it looks like they're using SQL 2012. There's another db on the server that also uses SQL 2012, from a vendor we'll call vendor B. There were a bunch of log errors from vendor B's database - 10 GB worth. They cleared them, but their storage benchmark said that the write speeds to the storage were around 22 Mbps. On 10K SAS drives that seems excessively slow. I've logged into the ESXi navigator and have spent a few hours going through all of the settings in there. Is there a way that I can test if all four of the SAS drives are working properly? Under Storage, Datastore01 and Monitor, there are a few lines from June 13th (nothing listed after that).

Device naa.614xxxxxxxxxxxxxxxx performance has improved. I/O latency reduced from 47132 microseconds to 14417 microseconds.      Thursday, June 13, 2019, 03:10:40 -0700      Info
Device naa.614xxxxxxxxxxxxxxxx performance has improved. I/O latency reduced from 238928 microseconds to 47132 microseconds.      Thursday, June 13,…
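Those latency figures read more easily in milliseconds. Here is a small sketch that parses lines of the shape quoted above; note the regex is inferred only from these two events, not from any documented ESXi log format:

```python
import re

# Matches the "I/O latency reduced from X microseconds to Y microseconds"
# wording seen in the two events above (assumed format).
LATENCY = re.compile(
    r"I/O latency (?:reduced|increased) from (\d+) microseconds? "
    r"to (\d+) microseconds?")

def latency_change_ms(line):
    """Return (before_ms, after_ms) for a latency-change event, else None."""
    m = LATENCY.search(line)
    if m is None:
        return None
    return int(m.group(1)) / 1000, int(m.group(2)) / 1000

line = ("Device naa.614xxxxxxxxxxxxxxxx performance has improved. "
        "I/O latency reduced from 47132 microseconds to 14417 microseconds.")
print(latency_change_ms(line))  # (47.132, 14.417)
```

As a very loose rule of thumb, sustained datastore latencies in the tens of milliseconds, as in these events, are worth investigating on 10K SAS spindles.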
So I have a brand new server and a Server 2019 Datacenter license. Two small-ish drives working together in a RAID 1 will be for the OS. That will be formatted as NTFS and 2019 DC will be installed here with the Hyper-V role (and that's it!). Then ... I have 6 SSD drives working together in a RAID5 and the only things that will live on this drive are the actual Hyper-V VM files.

I've read about how ReFS has come a long way since it was released with 2012, yet when you format a data drive in Windows, NTFS is still the default. So I wanted to ask - who out there has embraced ReFS for Hyper-V? Should I keep it safe and stick with NTFS? Or should I use ReFS?

This drive will never be used for anything other than storing Hyper-V files.

Also: If I go with ReFS would there be any potential compatibility issues if, say, I had to migrate a VM from an older server (having only NTFS) to this new server (having only ReFS) or vice versa?
Dell T320 server with PERC S110 RAID 1 has a degraded RAID 1 array. No read ahead / write through.

Event logs: found this error - "Physical Disk 0;0;0;1 Controller 0@Connector 0 has failed."

When I use Ctrl+C at boot to enter the RAID setup, it states both drives are online.

Question: how do I remove the bad drive and rebuild the array?

OpenManage tasks: PERC S110
Dell H800 with suddenly 20% slower performance. All CPUs are good, battery healthy, RAID array on the MD1200 is healthy. How do we check the write cache on the controller? Feels like the old days when the EMC CX500 cache battery died and the cache was disabled.

With this server, though, (running Solaris), we can't tell if the cache is enabled or not.
A client running a Dell R620 with three disks in a RAID 5 reported an amber blinking light on one of the disks, though the OS (ESX) was running fine. I suggested buying a replacement, which was delivered complete with a hot-swap tray. The client swapped out the drive, but the amber light continued. I thought it might be tagged as Foreign, so I scheduled a visit.
When I rebooted the server, I went into the RAID controller to clear the (New) Foreign disk config and add it as hot spare so that it would begin rebuilding the RAID.

However, when I got in there the first disk (0) shows as MISSING, the New Disk (1) as Failed, and the third (2) as Online. I shut down and re-seated all of the drives, but no change.

The RAID is in a FAILED state, and with RAID 5 (which this is) you can't rebuild if two disks fail or are unreachable.

I tried putting the old disk back into Slot 1, but now when I reboot both Disk 0 and 1 show as MISSING.

I also receive the message "There are offline or missing virtual drives with preserved cache. Please check the cables and ensure that all drives are present."

I read some articles about re-tagging the Virtual Disk. Does this sound like a viable solution?

Any help would be appreciated.
Hi, I was wondering if there is a third-party software solution to make a RAID array. I know Windows Server comes with one, but I frequently have it fail with an error something like "the drives must be the same block size", even though they are two drives that came off the shelf and are identical.

For example, I use AOMEI Partition Assistant to manage the disks and find it much more comprehensive and faster than Windows Disk Management.

Thanks all.
Have an HP ML110 Gen9 that recently had its VMware root password lost during a remote-session password change.

Knowing the official password reset process is to reinstall VMware, I went to the server locally, pulled out its existing USB drive with the VMware host, and put in a new blank one to install to. I matched the ESXi build version with the custom HPE image and was presented with two individual drives, not the single RAID volume that was somehow working.

I say somehow, because this machine, the ML110 Gen9 only has the B140i software RAID controller that VMware can't see.

But somehow this system had a VMware host that DID see it.

I've tried the most recent version of ESXi, the original matched build version, and gone back to 6.0, and as per the documentation VMware behaves as advertised and will not see the software RAID controller.

I am completely confused as to how this host got setup with the B140i RAID controller, but now I can't get my client back operational.

Any ideas on how I might fool VMware back into seeing that RAID?

Hoping someone can help us here. We are going through the process of deploying a new OS, and part of the backup process we used to use when doing this was to pull the 2nd drive in each array (RAID 1) and keep that safe. If for whatever reason we need to revert changes, we can simply boot those disks and mirror them. Now, that process worked nicely with the G8 servers - when we plugged the older disks in we would get an advisory message that certain slots were missing, and we could continue in interim recovery mode. This worked every time without fail for many years in many different servers.

Now we have a new Gen 10 380 LFF server with the P408i array controller, and that process has stopped working (it did work when we tested it at the start of the project). The server gives us the same advisory message, and when we accept (F2) to continue in interim recovery mode (see below) the system does not boot; it tries to PXE boot instead (indicating it's not seeing an OS). When we look at the array controller, everything seems normal given that it has missing drives: the arrays are all there as normal, with the expected error messages. It just won't boot from them.

Does anyone have any advice on what we might be doing wrong?

Here is what we have tried:
1.  Deleting the array config with all drives removed from the server
2. Adding one of the disks to another server and creating a RAID 0 array, where it sees all of the data perfectly intact
3. While in the array config …
We have a Dell MD3000i array with dual controllers (A/A). Last week, one of the controllers went bad due to a memory parity error (aka a Memory Consistency Error) and we took a huge performance hit. We ordered a replacement and swapped it out; the error was still there. We thought we might have received a defective part, so we got another controller, but the error doesn't go away. We took out the battery and memory from the controller, waited 2 minutes, then popped them back in, but still no luck. Does anyone know how to correct this issue? Does something else need to be done besides putting the controller in offline mode before swapping?

Summary: RAID Controller Module Memory Consistency Error
Storage array: MDPLSHELP01
Component reporting problem: RAID Controller Module in slot 0
 Status: Online
 Location: RAID Controller Module/Expansion enclosure, RAID Controller Module in slot 0
Component requiring service: RAID Controller Module in slot 0
 Service action (removal) allowed: No
 Service action LED on component: No
 Replacement part number:
 Board ID: 1532
 Submodel ID: 63
 Serial number: 63V10XX
Hi there,

I'm new to Nimble and not a storage guy. One of our Nimble Storage controllers has a failed disk, and we're without support from HPE InfoSight. When I tried to remove the disk, say from Slot #9, I got an error as below:

Nimble OS $ disk --listremove 9 --shelf_location B.P1.1
ERROR: Failed to remove disk at slot 9, shelf B.P1.1 This shelf is either not used or inactive.

As per the list below, Slot #9 was in a failed state.

    9 2...RC5         HDD    1000.20 failed  N/A             
    10 13..RC5         HDD    1000.20 in use  okay        

On the other hand, when I tried to remove the disk from Slot #10, I was able to successfully remove it as shown below, BUT not the one in slot #9 as per above (was it because that disk was not in a failed state?).

Nimble OS $ disk --remove 9 --shelf_location B.P1.1
Nimble OS $ disk --list --shelf_location B.P1.1 | grep FRC5
     9 2..FRC5         HDD    1000.20 failed  N/A             
    10 13..RC5         HDD    1000.20 removed N/A       

However, when I moved the disk from slot #9 into slot #10 and tried to add it, I got the error below even with the --force option:

Nimble OS $ disk --add 10 --shelf_location B.P1.1 --force  
ERROR: Failed to add disk at slot 10, shelf B.P1.1 No such object.

By default, what's the RAID level config of a Nimble Storage CS220G? Is it RAID 6 + spare? How can I check the current RAID config of the controller/shelf from the CLI to know the RAID level? If I'm going to reconfigure the current RAID level to RAID 5, can anyone guide me on how to do this step by step, as I can't find any documentation anywhere, even in the HPE Community forum.

Thanks in advance to anyone for your help.

Kind regards,

Is there a specific method for moving a USB 3 RAID 1 drive from an old computer to a new Windows 10 computer? My fear is that once I reconnect the drive to the new computer I'll be forced to re-format the drive or create a new simple volume.

My RAID setup is a LaCie 2big Quadra 8TB in RAID 1 on a Windows 10 machine. I'm getting a new Win 10 PC to replace this one.
I have a RAID 5 system (Intel(R) C600+/C220+ series chipset SATA RAID Controller).

I received an error that the RAID is degraded and a drive has failed.

I can "Mark as normal" the failed drive.

What does "Mark as normal" mean or do?

I have a server running ESXi with a RAID card controlling 36 disks. I had 2 disks fail, but had 4 disks set as global hot spares for the arrays. I replaced the 2 disks, and now I have 2 disks labeled UGood that I would like to make global hot spares using storcli. The issue is that every time I issue the command "./storcli /c0/e1/s9 add hotsparedrive" I get the following error back:

CLI Version = 007.0813.0000.0000 Dec 14, 2018
Operating system = VMkernel 6.0.0
Controller = 0
Status = Failure
Description = Add Hot Spare Failed.

Detailed Status :

Drive     Status  ErrCd ErrMsg
/c0/e1/s9 Failure   255 Device state invalid.

My disk layout is as follows:

[root@server:/opt/lsi/storcli] ./storcli /c0/eall/sall show
CLI Version = 007.0813.0000.0000 Dec 14, 2018
Operating system = VMkernel 6.0.0
Controller = 0
Status = Success
Description = Show Drive Information Succeeded.

Drive Information :

EID:Slt DID State DG     Size Intf Med SED PI SeSz Model               Sp Type
0:0       2 Onln   0 7.276 TB SATA HDD N   N  512B TOSHIBA MG05ACA800E U  -
0:1       4 Onln   0 7.276 TB SATA HDD N   N  512B TOSHIBA MG05ACA800E

I have a ProLiant ML310e Gen8 v2 server running Server 2008 R2 Standard. I want to run Windows 10 on it. While booting the Windows 10 installer, it stopped and said it was missing a media driver. The computer has four drives in a RAID array. Could the driver that is missing after I hit "Install Now" on Windows 10 be for the RAID array? Any ideas?
Hi Guys,

We have an IBM System x3200 M3 server with 2 x 500GB hard drives installed.
The drives have been striped with RAID 0, running VMware ESXi.

We now wish to install another 2TB drive.

I need confirmation that the drive will be compatible with the server model, and that we would be able to install the drive, without interfering with the current RAID config.
I am not sure whether we'd be able to run 2 x separate RAID groups on this model?

Server model:  MT7328
Hard drive model:  42D0782      - IBM 2TB 7200 NL SATA 3.5" HS HDD
I have a new customer with Dell T30 server.
- Unfortunately they purchased this without consultation
- The server only came with one 1TB SATA drive
- 8GB of RAM
- Prior to us starting to work together, my client decided they wanted RAID 10 redundancy
- They purchased 3 additional 1TB drives from Dell
- Enter me
- The server does not support any kind of onboard RAID
- I contacted Dell and, of course, they do not make any RAID controllers for the T30

Any one have any thoughts about either of these controllers as an add-on option?

Again, it's a T30 so it'll be limited, but trying to make some lemonade outta the lemons I've been handed :)
We have a Tandberg USB 3.0 RDX drive and media for backups

Some RAID cards I've been considering
LSI 9341-4i
Intel RS3WC080

PS: Sure, we can look at battery-backed cache, onboard RAM, etc., but this is just to get some basic redundancy beyond a single drive in their server.

Thanks in advance.
I have an HP Z800 and want to set up ESXi 5.5 on it as a home lab, but I am not sure what kind of disks to purchase and how to configure the built-in Intel RAID (is this considered software RAID? If so, I don't think ESXi supports software RAID, and I may need to purchase a physical LSI PCIe RAID card) so that VMware will be able to see my hard drives as a datastore. I will also be running ESXi off of a USB thumb drive.

I also believe a 2TB drive is the largest size drive I would get. I know some of the capacity gets lost, but how many 2TB drives would I be able to use, what would be the expected available size after setting them up in RAID, and what is the best RAID configuration to put them into: RAID 5 or 10?
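On the "some of it gets lost" point: part of the gap is just decimal-vs-binary units, before any RAID overhead. A quick sketch, assuming marketed capacities are decimal TB and the OS reports binary TiB (the helper name is ours, and filesystem overhead is ignored):

```python
def tb_to_tib(tb):
    """Convert marketed decimal terabytes to the binary tebibytes the OS shows."""
    return tb * 1000**4 / 1024**4

# A "2 TB" drive shows up as roughly 1.82 TiB before formatting.
print(round(tb_to_tib(2), 2))

# Four such drives in RAID 5 leave about 3 drives' worth usable:
print(round(3 * tb_to_tib(2), 2))  # ~5.46 TiB
```

The same four drives in RAID 10 would leave about two drives' worth, roughly 3.64 TiB, so the RAID 5 vs 10 choice here is a capacity-versus-rebuild-risk trade-off.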

Thank You in advance for your time!

I am trying to set up an OEM VMware server for in-house use with the following components:

ASUS Prime Z390-A socket 1151 motherboard
Intel Core i7-8700K CPU
32 GB RAM DDR4 2133 MHz

In addition, I am wanting to use an LSI MegaRAID SAS 9260-4i SATA RAID controller with SFF 8087 mini-SAS interface on the controller, and hook up a pair of SATA drives in RAID-1 configuration for the VMware datastore.  FYI I have 6 VMware servers running in house already with the same controller, but with different motherboards.

When I power the system on, the ASUS motherboard comes up OK and I can get into the mboard BIOS fine. But the LSI card never shows up anywhere. Normally the LSI card shows up in the overall boot process with a display of, e.g., its BIOS version, firmware version, etc. It will say something like "Hit Ctrl-H to enter controller BIOS". But it is as if the card is not even in the system. I tried 2 different cards and neither shows any sign of being in the system. The system boots into the ASUS BIOS and stays there. It does not show the card as a bootable device. I tried booting with the controller without any drives attached, then with a SATA drive attached to the controller. Same result each time - the controller is essentially invisible to the rest of the system.

Any thoughts on what to do to get the motherboard to recognize the LSI card?  Or is this mboard too new - or maybe too fast - for the LSI?

Thanks in advance for …
Dell PowerEdge 840 -- Server 2008 Standard -- PERC 5/i RAID controller -- does not reboot after Windows update.

I accepted the MS update yesterday, and when the server rebooted, it did not come back up. It appears that either the RAID controller is bad or the configuration got toasted. I am a fairly non-tech person with just enough knowledge to understand what someone is asking me to do and then perform it. This server is our SQL server and has a database on it. Our office is 'dead in the water' until this can be resolved. Can you please help me? I'm hoping that there is a simple way for the RAID controller to be instructed to 'grab' the drive configuration off the drive itself, and then we'll be back in action. I called Dell for telephone support, but they told me that it would be $1,000.00 for 2 hours of telephone support. Ugh! We do not have those funds available.

Would appreciate your expertise!
I had this question after viewing NAS drive recommendation.

I'm looking for NAS recommendations. We currently have 2 Seagate NAS OS 4 NASes. They work quite well as long as they are on the same site we are backing up data from. However, we had one at a remote site so we could use it for offsite storage, and we ran into an issue with the NAS throttling down during transfers. This would cause the transfer to take too long to complete. I want to buy another one but want to make sure not to run into the same issue.

Which NAS do you recommend? We will be looking to purchase one that will hold at least 8TB of data and use RAID 6.

