marrowyung asked:
RAID card configuration for read and write

Hi,

I have a cached hardware RAID card from LSI (the company name has changed now!) and I have defined 2 x RAID 1 and 1 x RAID 5 volumes. But I found the configuration of each volume may not be good enough, and I just want to verify it with you all.

1) C:\, RAID 1, system drive running Windows 10 Professional, has properties like this:




[screenshot: C:\ RAID 1 volume properties]
2) D:\, RAID 1, for applications, has a configuration like this:

[screenshot: D:\ RAID 1 volume properties]
3) E:\ is a RAID 5 volume and has a configuration like this:

[screenshot: E:\ RAID 5 volume properties]
Is it all good? Also, my RAID card has SSD cache enabled:

[screenshot: RAID card SSD cache setting]
Any suggestions on what settings C:\, D:\ and E:\ should have to give maximum performance?

David Favor:

1) Use "Always Writeback" if you have a UPS connected to your RAID array.

2) Use fastest option available if you have no UPS connected to your RAID array.

LSI docs suggest "Always Writeback" provides fastest throughput... if I'm reading this correctly.
ste5an:
What does "not good enough" mean? How many IOPS per volume do you actually have? Are all drives solid-state or 15k spinning?
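If you want a rough number to answer that with, the sketch below (Python) times a sequential write and some random 4K reads on one volume. It is only a ballpark probe: it does not bypass the OS file cache, and the E:\ test path is just an assumption, so a proper tool such as fio or CrystalDiskMark will give far better figures.

# Very rough throughput/IOPS probe. Assumptions: the test file sits on the
# volume you want to measure (here E:\) and ~1 GiB of free space is available.
# OS file caching is NOT bypassed, so treat the numbers as ballpark only.
import os, random, time

TEST_FILE = r"E:\bench.tmp"      # assumed path on the volume under test
FILE_SIZE = 1024 * 1024 * 1024   # 1 GiB
BLOCK     = 1024 * 1024          # 1 MiB sequential write blocks
READS     = 2000                 # number of random 4 KiB reads

# Sequential write
buf = os.urandom(BLOCK)
t0 = time.time()
with open(TEST_FILE, "wb") as f:
    for _ in range(FILE_SIZE // BLOCK):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())         # push the data out of the OS write cache
seq_mb_s = (FILE_SIZE / (1024 * 1024)) / (time.time() - t0)

# Random 4 KiB reads
t0 = time.time()
with open(TEST_FILE, "rb") as f:
    for _ in range(READS):
        f.seek(random.randrange(0, FILE_SIZE - 4096))
        f.read(4096)
read_iops = READS / (time.time() - t0)

os.remove(TEST_FILE)
print(f"sequential write: {seq_mb_s:.0f} MB/s, random 4K read: {read_iops:.0f} IOPS")

Run it once per volume with each cache setting you are comparing and keep the numbers side by side.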
Member_2_231077 (andyalder):

"2 x RAID 1 and 1  x RAID 5 volumn"
From the screenshot it looks like you have 4 in the RAID 5 set, so 8 disks in total. These cards often only support 8 direct-attached disks, so do you actually have an SSD or two for SSD caching?

"Write-back with good battery", or however they word it, is best; no UPS is needed for that as the BBU lasts a lot longer than most UPSs.

What SSDs are you using? Are they enterprise drives with power loss protection? Without that, neither BBU nor UPS is much use. The part number would help.

marrowyung (Asker):

David,
1) Use "Always Writeback" if you have a UPS connected to your RAID array.

No UPS here.

2) Use fastest option available if you have no UPS connected to your RAID array.

LSI docs suggest "Always Writeback" provides fastest throughput... if I'm reading this correctly.

So LSI suggests the fastest is Always Writeback? For RAID 1 or RAID 5?

ste5an,

Are all drives solid-state or 15k spinning?
The RAID 1 volumes (C: and D:, Windows 10 on C:\ and applications on D:\) are SSDs.

The RAID 5 volume is spinning disks.

andyalder,
From the screenshot it looks like you have 4 in the RAID 5 set, so 8 disks in total. These cards often only support 8 direct-attached disks, so do you actually have an SSD or two for SSD caching?

4 x SSD disks for C:\ and D:\ (RAID 1 for each).

The other 4 x SATA spinning disks are in RAID 5.

I enabled SSD cache for RAID 5 before, but overall performance seemed slower, so I disabled it.

"Write-back with good battery", or however they word it, is best; no UPS is needed for that as the BBU lasts a lot longer than most UPSs.

I don't have an external battery attached.

What SSDs are you using? Are they enterprise drives with power loss protection? Without that, neither BBU nor UPS is much use. The part number would help.

Not enterprise, normal consumer ones!


 

Consumer SSDs do not work well on a RAID controller; there is no TRIM support, so the whole thing tends to stall while they do garbage collection. Enterprise SSDs have more reserved space to compensate for this, and the SSD manufacturer may have a utility to short-stroke them so you get less usable space but more background space to cope with it. Yours are probably TLC with an SLC buffer, good for reads and burst writes but poor for sustained writes. They could even be QLC, which are slower than spinning disks for sustained writes.

You have no SSD cache available so that setting should make no difference.
Enterprise SSDs have more reserved space to compensate for this

It seems I always need to buy this kind of enterprise-grade SSD if I have a hardware RAID card!
The price doubles; are enterprise ones always faster?

You have no SSD cache available
I have it, according to the screenshot above, but I found turning it ON is not faster; turning it OFF is.

What is the setup for?
Note you are running a workstation OS.
It is designed for user activity.

I.e., using a personal vehicle to move cargo.

SATA drives, 7200 rpm, use cache to speed up performance.
Similar to the SSD question: do the HDDs include TLER support, i.e. disks that stay responsive to the RAID controller? When the controller sends a write, the drive has a limited time to confirm receipt and completion of the task.

If you are using desktop-type drives, they could and would arbitrarily get kicked out of the array if the RAID controller does not receive a timely response and assesses the error as fatal.

To complicate matters, you are mixing SSDs and HDDs in different arrays. Much depends on the card and whether or how it handles that.
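If you want to check whether the SATA drives support TLER/ERC at all, smartmontools can query it. A minimal sketch, assuming smartctl is installed and the device names below match what smartctl --scan reports on your machine; desktop drives will typically report SCT ERC as unsupported.

# Minimal sketch: query SCT Error Recovery Control (TLER/ERC) with smartctl.
# Assumptions: smartmontools is installed and the device names below match
# your system -- check them first with "smartctl --scan".
import subprocess

DEVICES = ["/dev/sda", "/dev/sdb"]   # assumed device names

for dev in DEVICES:
    print(f"--- {dev} ---")
    # "-l scterc" prints the current read/write error-recovery timeouts,
    # or reports that SCT ERC is unsupported (typical for desktop drives).
    result = subprocess.run(["smartctl", "-l", "scterc", dev],
                            capture_output=True, text=True)
    print(result.stdout or result.stderr)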
What is the setup for?
Note you are running a workstation OS.
It is designed for user activity.
Yes, a Windows 10 Pro PC.

SATA drives, 7200 rpm, use cache to speed up performance.
As I said, disabling it seems much faster for my 4-disk RAID 5 spinning SATA volume.

If you are using desktop-type drives, they could and would arbitrarily get kicked out of the array if the RAID controller does not receive a timely…
Then how will it look to us? Will the PC seem hung?

I am expecting the settings in the screenshots above may need to change; is there anything to change from what you can see?


Looking at the settings, I cannot tell. Others have already pointed out that your use of SSDs could impact the controller performance.

Check the controller log, if any, to see whether it records issues related to the SSDs.
Sorry, I think I have to tell you that right now I have no problem on any disk.

I just want to make sure that the configuration is optimized for the SSDs and the spindle disks.
Enterprise SSDs are twice the price because of the power loss protection circuitry; it's not a performance issue but a data loss issue. What is the model number of your SSDs?

All your SSDs are used as data drives, so they are not available to use as cache. That you can select the SSD cache option when no SSDs are available as cache is probably a bug in the firmware.
What is the model number of your SSDs?

SanDisk

All your SSDs are used as data drives, so they are not available to use as cache.

But it can be turned ON for my 4-disk SATA spinning volume.


SanDisk is a make, not a model number, but look at this review - https://www.tomshardware.com/uk/reviews/dramless-ssd-roundup,4833-3.html
They say both the 240GB SanDisk drives offer poor performance. The advantage is they're cheap. Note the review is 5 years old; they may even be using QLC by now. The only way to tell is to take one off the RAID controller, put it on the motherboard SATA port and run a benchmark on it.

That you can turn SSD cache on is a bug in the firmware; it cannot use data SSDs for cache.
Going back to your original question regarding the config!
Using the write-back cache in the controller will give you better performance than using write-through, especially with spinning disks. What happens is that with WBC, the controller signals I/O completion (ACK) as soon as the data is in the cache; it will write the data to disk when it can!
It's this delay between the ACK and the write to the disk that is the issue: if the power fails during this delay and you do not have battery-backed WBC, or a UPS for the controller, your data will not be written to disk and will be lost, and you may also get file-system corruption as well.
So write-back is good if you have power security, otherwise turn it off!
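For what it's worth, on MegaRAID-family cards the per-virtual-drive write policy can usually be inspected and changed from Windows with StorCLI. A sketch only, assuming storcli64 is on the PATH, the controller is /c0 and the virtual drive is /v0; check the real IDs with the show command first.

# Sketch: inspect and change the write policy of a MegaRAID virtual drive
# via StorCLI. Assumptions: storcli64 is installed and on the PATH, the
# controller is /c0 and the virtual drive of interest is /v0.
import subprocess

def storcli(*args):
    """Run a StorCLI command and return its text output."""
    out = subprocess.run(["storcli64", *args], capture_output=True, text=True)
    return out.stdout

# List the virtual drives and their current cache settings (WB/WT, RA/NoRA, ...)
print(storcli("/c0/vall", "show", "all"))

# Switch virtual drive 0 to write-through (the safe choice with no BBU/UPS)...
print(storcli("/c0/v0", "set", "wrcache=wt"))
# ...or, with battery/UPS cover, one of the write-back modes instead:
# storcli("/c0/v0", "set", "wrcache=wb")    # write-back only while the BBU is good
# storcli("/c0/v0", "set", "wrcache=awb")   # always write-back, even without a BBU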

I will let the others debate the SSD issue! :-)
andyalder,

They say both the 240GB SanDisk drives offer poor performance. The advantage is they're cheap. Note the review is 5 years old; they may even be using QLC by now.

I guess that one is similar to the one I am using for my D:\, 240GB only! I bought that PC 6-7 years ago!

The only way to tell is to take one off the RAID controller, put it on the motherboard SATA port and run a benchmark on it.
I am not going to do this!! No need!

The reason I made this EE post is that I found out that once I changed some settings, e.g. disabled the SSD cache, the PC ran faster! Especially my RAID 5 volume.

Gerald,

It's this delay between the ACK and the write to the disk that is the issue: if the power fails during this delay and you do not have battery-backed WBC, or a UPS for the controller, your data will not be written to disk and will be lost, and you may also get file-system corruption as well.

I don't have a RAID card battery or a UPS, so I turned it off! But I am more concerned about performance from the RAID card settings.

I will let the others debate the SSD issue! :-)
I want us to focus on the RAID card settings in this post instead of the SSDs.

The write-back thing is good, something like this! How about settings related to the SSDs?

marrowyung, since you have no SSDs available for cache, you should have SSD caching disabled. If you delete the virtual disk you have created for D: you could use those SSDs as read cache for the RAID 5 array. https://www.dell.com/support/kbdoc/en-uk/000126223/configuring-and-managing-cachecade-virtual-disks-on-a-dell-perc-h710-h710p-and-h810-raid-controller explains how to create SSD cache with CacheCade; it is from Dell, but the settings are the same, as Dell PERCs are LSI cards apart from the logo.
If you delete the virtual disk you have created for D: you could use those SSDs as read cache for the RAID 5 array.

A separate SSD volume/disk ONLY as a cache?
SOLUTION from ste5an:
… That is the idea of using a faster disk as cache.

Then it's quite expensive to have one more volume because of this!

So are there any other controller settings I should adjust?
CacheCade was introduced when SSDs were low capacity and very expensive; all-flash arrays were unaffordable, but using one or two SSDs as cache for spinning disks made sense.

Nothing else to do for the RAID 5 except disable SSD caching.

On the RAID 1 SSD arrays, if you have the latest firmware and the card is modern enough, then try changing to the following to enable FastPath. This skips the RAID stack and reads and writes directly to the SSD; if the SSD has DRAM on it, then writing to this DRAM is as fast as writing to the controller's DRAM after all.

Disk Cache policy: Enabled
Read Policy: Never
IO Policy: Direct IO
Write Policy: Write thru

It may make it much slower, though, if the SSD is DRAMless; we can't tell with SanDisk without pulling them apart.
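For reference, those four policies map onto StorCLI options roughly as in the sketch below. This is an assumption-laden example: it presumes storcli64 is available, the controller is /c0 and the two SSD RAID 1 volumes are virtual drives 0 and 1, so adjust the IDs to match your card.

# Sketch: apply the FastPath-friendly policies listed above to the two RAID 1
# SSD virtual drives with StorCLI. Assumptions: storcli64 on the PATH,
# controller /c0, and the SSD RAID 1 volumes are virtual drives 0 and 1.
import subprocess

SSD_VDS = ["/c0/v0", "/c0/v1"]         # assumed virtual drive IDs for C: and D:

for vd in SSD_VDS:
    for setting in ("pdcache=on",      # Disk Cache Policy: Enabled
                    "rdcache=nora",    # Read Policy: No Read Ahead ("Never")
                    "iopolicy=direct", # IO Policy: Direct IO
                    "wrcache=wt"):     # Write Policy: Write Through
        subprocess.run(["storcli64", vd, "set", setting], check=False)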
This skips the RAID stack and reads and writes directly to the SSD

Does that mean no RAID function at all?

changing to the following to enable FastPath
No matter whether the volume is mostly read or mostly write?




FastPath does not disable RAID; it bypasses part of it to speed it up. You still get the same level of data protection you had before.
Oh, and we haven't mentioned it yet, but RAID-5 is now not recommended with drives over 750GB, due to the increased risk of a second disk failing while the first failure is still rebuilding and the loss of all your data.

Use RAID-6 or RAID-10 - NB disks are cheap compared with the value of your data.
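As a back-of-the-envelope comparison of what four spinning disks give you under each layout (2 TB per disk is just an assumed example size):

# Back-of-the-envelope comparison of 4 disks in RAID 5, RAID 6 and RAID 10.
# Assumption: 4 identical 2 TB disks -- change DISKS / SIZE_TB to match yours.
DISKS, SIZE_TB = 4, 2.0

layouts = {
    # usable capacity in TB, and which failures the layout survives
    "RAID 5":  ((DISKS - 1) * SIZE_TB, "any 1 disk"),
    "RAID 6":  ((DISKS - 2) * SIZE_TB, "any 2 disks"),
    "RAID 10": ((DISKS / 2) * SIZE_TB, "1 disk per mirror pair"),
}

for name, (usable, survives) in layouts.items():
    print(f"{name:8s} usable: {usable:.1f} TB, survives losing: {survives}")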
RAID-5 is now not recommended with drives over 750GB, due to the increased risk of a second disk failing while the first failure is still rebuilding and the loss of all your data.

When I got that LSI RAID card, RAID 6 was still new and therefore I didn't use it! And you are saying RAID 5 has a size limit? 750GB? Any URL for it?

And you are saying that after one disk in a RAID 5 volume fails, it can't survive one more loss? Yeah, that is true! But for RAID 6, failure of a 3rd disk gives the same problem, right?

It is because of UREs (unrecoverable read errors) on the remaining disks that RAID 5 is not recommended for large disks. You can mitigate that by telling the card to do a background parity check every night.

https://www.zdnet.com/article/why-raid-5-stops-working-in-2009/
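As a rough illustration of the maths behind that article, the sketch below estimates the chance of hitting a URE while rebuilding a degraded RAID 5. It assumes consumer drives rated at 1 URE per 1e14 bits and three surviving 2 TB data disks; plug in your own sizes.

# Rough odds of hitting an unrecoverable read error (URE) while rebuilding a
# degraded RAID 5. Assumptions: consumer drives rated 1 URE per 1e14 bits read,
# and 3 surviving disks of 2 TB (decimal) each must be read in full to rebuild.
import math

URE_RATE_BITS   = 1e14    # 1 unrecoverable error per 1e14 bits read (typical consumer spec)
SURVIVING_DISKS = 3       # disks that must be read end-to-end during the rebuild
DISK_SIZE_TB    = 2.0     # assumed disk size, decimal terabytes

bits_to_read = SURVIVING_DISKS * DISK_SIZE_TB * 1e12 * 8
p_clean = math.exp(-bits_to_read / URE_RATE_BITS)   # Poisson approximation of an error-free rebuild
print(f"Bits read during rebuild: {bits_to_read:.2e}")
print(f"Chance of at least one URE: {1 - p_clean:.1%}")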
Andy


[screenshot: virtual drive policy settings, with Disk Cache Policy greyed out]


Disk Cache policy: Enabled

Read Policy: Never


IO Policy: Direct IO
Write Policy: Write thru
You can see that the Disk Cache policy is greyed out!



ASKER CERTIFIED SOLUTION
Only the RAID 5 volume can have the SSD cache setting enabled:


[screenshot: SSD caching option on the RAID 5 volume]
Of course you cannot enable SSD caching on an SSD array!
On the RAID 1 arrays, put the settings back to write-back, as there is no DRAM cache on your cheap SSDs.
SOLUTION
Gerald,

The RAID-5 Size limit will be limited by the controller, and will be greater than 750GB!

OK, the statement before seemed to be telling me there is a physical limit on RAID 5.

750GB is a practical limit with spinning rust devices,

My external disk is 2.72TB! A single disk only!

SOLUTION

SOLUTION
Though RAID 6 runs into a similar situation down the road: it deals with losing two drives while rebuilding following the loss of one.


I knew this one, and that's why RAID 6 came along, just like HP virtual RAID!

Thanks all.