Solved

MS SQL data file on SSD hard disk

Posted on 2012-12-21
28
1,388 Views
Last Modified: 2013-01-18
Dear sir,

Right now we are studying the use of SSDs for the .mdf and .ldf files. What would you guys choose: use the SSD for the .mdf files or the .ldf files?

If the SSD is not large enough, might it not be good for both?

Can we use the SSD only for tempdb?

DBA100.
0
Comment
Question by:marrowyung
28 Comments
 
LVL 142

Assisted Solution

by:Guy Hengel [angelIII / a3]
Guy Hengel [angelIII / a3] earned 50 total points
ID: 38712430
I'm not an IT guy, so I cannot check this directly...
but others have done it:
http://www.ramsan.com/resources/whitePapers/44
http://www.novicksoftware.com/Articles/sql-server-ssd-solid-state-disk.htm
0
 
LVL 25

Assisted Solution

by:TempDBA
TempDBA earned 50 total points
ID: 38715984
In our organization, we have placed both the data and log files for a single database that needs a lot of I/O processing on a single SSD. This may not be a good solution, but to keep the throughput constant we needed to keep them on the same disk. There may be other solutions to this.
An SSD can also be used for tempdb if a lot of temporary objects are created.
0
 
LVL 1

Author Comment

by:marrowyung
ID: 38716444
Hi both,

It seems that as long as Windows Server can detect it and mount it as a drive, SQL Server can use it?

It seems that an SSD will have a shorter lifetime than a SATA hard disk, am I right?

So is using an SSD not good for SQL Server?
0
 
LVL 142

Expert Comment

by:Guy Hengel [angelIII / a3]
ID: 38716852
>It seems that as long as Windows Server can detect it and mount it as a drive, SQL Server can use it?
Yes.

>It seems that an SSD will have a shorter lifetime than a SATA hard disk, am I right?
I cannot judge that; I am not an IT/hardware guy.
0
 
LVL 17

Assisted Solution

by:xema
xema earned 50 total points
ID: 38725731
From what I've read, yes, they have a shorter lifespan. The server's disk controller is also important. You could prepare a RAID 0 and do daily backups, as an SSD can fail and not be recoverable.
0
 
LVL 51

Accepted Solution

by:Mark Wills
Mark Wills earned 300 total points
ID: 38725843
SSDs definitely have a shorter lifespan; for data that is being rewritten all the time, the lifetime can become shorter than desired.

However, if you can manage your database properly - like setting the size so it doesn't always shrink and grow - then for OLTP-style transactions (i.e. write once, read many times) the SSD is a terrific solution for your MDF.

Highly rewritten data like the log files isn't quite as well suited, though the speed is significantly greater and the average lifespan is worth the extra performance.

I would be inclined to use multiple SSDs - one for the MDF (manage that properly to maximise its life and it should last for ages), then another (or others) for TEMPDB and the LOG, which are easy enough to recreate should there ever be a problem.

I am using SSDs and enjoy the performance greatly...
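
Purely as an illustration of that split (the database name, drive letters, paths, and sizes below are assumptions, not anyone's actual setup), creating a database with the data file on one SSD volume and the log on another might look something like this:

-- Hypothetical layout: data file on SSD volume D:, log on a separate volume E:.
CREATE DATABASE MyOltpDb
ON PRIMARY
    (NAME = N'MyOltpDb_Data', FILENAME = N'D:\SQLData\MyOltpDb.mdf',     SIZE = 50GB, FILEGROWTH = 1GB)
LOG ON
    (NAME = N'MyOltpDb_Log',  FILENAME = N'E:\SQLLogs\MyOltpDb_log.ldf', SIZE = 10GB, FILEGROWTH = 1GB);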
0
 
 
LVL 51

Expert Comment

by:Mark Wills
ID: 38725886
Oh, and I found an interesting "best practices" paper from Dell (no doubt there are others like it out there): http://www.dell.com/downloads/global/solutions/public/white_papers/Dell_SQL_EqualLogic_SSD.pdf

And I quite enjoyed reading that performance piece from angelIII's link to novicksoftware.
0
 
LVL 20

Assisted Solution

by:SelfGovern
SelfGovern earned 50 total points
ID: 38726047
Do NOT use RAID 0 for any kind of business data that is anything other than temporary and fully re-creatable without loss of data, business, money, or customers.

RAID 0 *multiplies* the risk of data loss by the number of disks you're striping across. A 4-disk RAID 0 carries at least 4 times the risk of losing all your data compared to a single disk. And you've already been told above that SSDs may be less reliable than spinning disks.
Key questions:
1) How much data can your business afford to lose?  The last transaction?  A few seconds' worth?  A minute's worth?  An hour's worth?   A day's worth?
2) How long will it take you to rebuild your database when one of your disks fails?  A few hours?   A day?  
3) How many sales/dollars/customers will you lose while you are rebuilding your database?

Hardware is cheap.  Add a fifth disk, make it RAID 5, and for a very low cost, you'll decrease your chance of losing data to far below the chance when you have a single disk.

Note that the EqualLogic white paper doesn't say anything about RAID 0.  Leave that for the L33t gamers who think they need every possible IOP, and who don't have any valuable data.
0
 
LVL 1

Author Comment

by:marrowyung
ID: 38726209
mark_wills,

Good to hear from you again; I haven't seen you for a long time. Are you focusing on SQL 2012? How does that feel?

I watched your video on TED, nice to see your real face! I am also playing with Windows 8 Pro and Windows Phone 8 right now to see how well they work with each other.

I know you are one of the admins of the EE site, I will follow the rules, hahah.

Your help is always appreciated and I will spend time on it.

SelfGovern and xema,

I think we should use RAID 10 for all kinds of SQL volumes, right? And I think the read-only volume should be RAID 5 in the SQL reporting server case.

I know LSI has just released the first 6 Gbps SSD-enhanced RAID card; would it be good to use?

DBA100.
0
 
LVL 1

Author Comment

by:marrowyung
ID: 38726286
mark_wills,

"
However, if you can manage your database properly - like set the size so it doesnt always shrink and grow - then for OLTP style transaction (ie write once, read many times) then the SSD is a terrific solution for your MDF."

So putting the growth to unlimited is not a good idea? why read more than write is not good for SSD?

SelfGovern,

"1) How much data can your business afford to lose?  The last transaction?  A few seconds' worth?  A minute's worth?  An hour's worth?   A day's worth?"

The users will always say they can't lose even one transaction, but SATA spinning disks have the same problem too, right?

"2) How long will it take you to rebuild your database when one of your disks fails?  A few hours?   A day? "

It will definitely be within a few hours.

"
Hardware is cheap.  Add a fifth disk, make it RAID 5, and for a very low cost, you'll decrease your chance of losing data to far below the chance when you have a single disk.

You mean RAID 5 in SSD or Spin disk ?
0
 
LVL 51

Expert Comment

by:Mark Wills
ID: 38726400
G'Day marrowyung

I am never too far away, but yes, I haven't been quite as active as I should be in the Q&A space... I have been busy (things like TED and admin stuff).

And yes, I have been playing with SQL 2012 - and loving it - some of the new features and language capabilities are fantastic (like the window function FIRST_VALUE()).

RAID 5 is not ideal for databases, RAID 10 is arguably best, RAID 0 is a waste of time, and RAID 1 gives a bit of built-in redundancy; but a lot of the huge performance gains we hope to get from stripe-based RAID might not be all they are cut out to be with SSD if you can contain your database on a single SSD (now, that is not entirely true or possible with large data).

Oh, and in case I was misunderstood (or said something wrong)... reading is brilliant with SSD, more so than writing (by comparison on the SSD), which in turn is a vast improvement (even random) over traditional disks.

If you can size your database correctly and keep it fairly clean... what you are really trying to achieve is minimal rewriting over the same space. There is a school of thought that if you rewrite over the same physical space you end up killing that part of the SSD. So treat your SSD more like a bucket, always adding a bit more. That is pretty much what the database (data) part is essentially doing: you keep adding pages of new data.

And in a lot of OLTP databases, you don't often go back and rewrite a lot as much as add new transactions (and maybe maintain them with a few updates). A lot of OLTPs are hugely random by nature, and that is pretty well suited to SSD (particularly reads). There are some who would say SSD is not as good for random as it is for sequential - but that is gibberish - it is better at sequential and still a huge improvement (did I say huge) with random access over other disks.

Even within the database (set at an ideal size), there are pages being reused (i.e. delete a row or three and the next couple of rows to be added might use that space). Then there is the fill factor for indexes to allow for new inclusions. So you need to consider exactly what your database is doing. You might need to take the extra time to really study and create the architecture that suits best, including maintenance, index rebuilds, etc.

With the higher I/O capability of the SSD, some elements are possibly not quite so important. For example, internal fragmentation is not always a bad thing if you have a highly random workload, so there is no need to rebuild the indexes all the time "just because".
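
As a rough sketch of that idea (the table and index names here are hypothetical), you can check fragmentation first and only rebuild when it really matters, leaving headroom in the pages via the fill factor:

-- Check fragmentation before deciding whether a rebuild is worth the extra writes.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id;

-- Only when it is really needed: rebuild and leave roughly 10% free space per page
-- (FILLFACTOR = 90) so new rows don't immediately force page splits and rewrites.
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD WITH (FILLFACTOR = 90);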

And really, the rewrite aspect of SSDs is not quite as critical as it used to be. The larger capacity means that new space can be consumed more evenly across the SSD, which helps reduce the need to reuse / rewrite limited space because of smaller capacities.

There is obviously a LOT more we could discuss, and I have been a bit brief with some of the above (trading some detail, and perhaps accuracy, for brevity), but I hope you get the general idea...
0
 
LVL 51

Expert Comment

by:Mark Wills
ID: 38728519
Oh, and just in case you didn't realise, not all SSDs are the same...

The entry-level consumer SSDs are normally MLC (read "more affordable") and the enterprise ones are SLC (read "more expensive"), though there are manufacturers out there with "smart" MLCs that try to bridge that gap between affordability and reliability.

There is a great resource discussing all things SSD at : http://www.storagesearch.com/ssd-slc-mlc-notes.html with a variety of links and discussions.
0
 
LVL 1

Author Comment

by:marrowyung
ID: 38730619
One thing: since you are all telling me that SSDs have a shorter lifetime and the data can't be recovered once the disk is dead, why do you guys still use them for SQL *.mdf and *.ldf files?

So for the sake of safety, it seems better to use the SSD only for tempdb; if the disk dies and its data is not recoverable, the SQL Server data is still alive, right?
0
 
LVL 51

Expert Comment

by:Mark Wills
ID: 38730665
Speed. And for the laptop (yes, I have gone both ways): speed, battery, heat...

And it is not quite "shorter life" + "can't recover".

SLC is a lot more tolerant of rewrites, so its life is considerably longer than a consumer-level MLC's.

Most of the enterprise-quality SSDs (SLC or high-end MLC) have some form of built-in protection, fault-tolerant handling, and sophisticated algorithms for wear levelling.

Don't forget that the SSD has "smarts" built in. Most manufacturers now check how many times a block has been written to (or rewritten) and create a logical block elsewhere to make sure the entire "disk" gets used, meaning that wear levelling works over the entire disk. Also, a bad write will generate an error that is easily trapped by the SSD internals, and it will write to another location. So it is unlikely that you will get accidental corruption because of a re-used block.

What is more likely is that you have reduced capacity over time (and we are talking years). That's where you will see terms like over-provisioning.

But you do need the "better" SSDs (and the budget to go with them) to minimise risk, maximise reliability, and give you peace of mind.

There are also a number of SSD suppliers touting HA (high availability) SSDs for true enterprise reliability, but that is getting kind of expensive.

TMS does have a little white paper about reliability (and there are others) : http://www.texmemsys.com/files/f000252.pdf

If you want more directed advice, maybe you need to tell us more about what you are up to...

Exactly what environment are you planning, how much budget do you have (just "small" or "plenty" will do), and why are you now considering SSD? What size databases / type of activity, etc...

Otherwise, read some of those links and discuss with your supplier your concerns about reliability and what that SSD provider does to minimise (or eliminate) any risks.

If you do have terabytes and money is not as much of a concern as reliability, then lash out (big time) and get something like: https://www.ramsan.com/products/rackmount-flash-storage/ramsan-820 - but it will cost an arm and a leg (and a few other body parts as well).
0
 
LVL 51

Expert Comment

by:Mark Wills
ID: 38730674
Oh, and if tempdb dies, then your SQL server will have a few fits and errors until it is fixed.

So it is a bit of a false economy thinking that you can gain reliability by splitting up SQL components if you are concerned about the integrity of the system as a whole.
0
 
 
LVL 1

Author Comment

by:marrowyung
ID: 38785606
Mark Wills:

"If you can size your database correctly and keep it fairly clean... what you are really trying to achieve is minimal rewriting over the same space. There is a school of thought that if you rewrite over the same physical space you end up killing that part of the SSD. So treat your SSD more like a bucket, always adding a bit more. That is pretty much what the database (data) part is essentially doing: you keep adding pages of new data."

How can we make sure of that? I can't instruct MS SQL not to write to the same SSD space, right?

"If you do have terabytes and money is not as much of a concern as reliability, then lash out (big time) and get something like: https://www.ramsan.com/products/rackmount-flash-storage/ramsan-820 - but it will cost an arm and a leg (and a few other body parts as well)."

Good, we will use Fujitsu.

Thanks anyway.
0
 
LVL 51

Expert Comment

by:Mark Wills
ID: 38785632
Now, you can't tell SQL not to do that, but you can:

1) Give it a reasonable size to start with to accommodate known activity
2) Do not auto-shrink
3) Use auto-grow, but make it a reasonable "chunk" at each growth stage
4) Don't reindex and reorganise pages unless it is very important to do so
5) Do your transaction log backups in accordance with your DR initiatives
6) Learn what your indexes are doing and manage the fill factor to allow some growth
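
A minimal sketch of points 1) to 3) (the database name, logical file names, and sizes are assumptions only; check your own file names with sp_helpfile first):

-- Pre-size the files and use a sensible fixed growth increment instead of tiny
-- percentage growth, and make sure auto-shrink is off, so the files are not
-- constantly shrunk and regrown over the same SSD space.
ALTER DATABASE [MyOltpDb] MODIFY FILE (NAME = N'MyOltpDb_Data', SIZE = 50GB, FILEGROWTH = 1GB);
ALTER DATABASE [MyOltpDb] MODIFY FILE (NAME = N'MyOltpDb_Log',  SIZE = 10GB, FILEGROWTH = 1GB);
ALTER DATABASE [MyOltpDb] SET AUTO_SHRINK OFF;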

And there is a fair bit more to consider as well, depending on what the business requirements / usage really are and the nature and type of database (i.e. OLTP vs DW).

So there is quite a bit that can be done, and it starts to cover some of the traditional DBA duties that we don't always see (depending on the size of the company, experience, etc.).

Yeah, that RamSan is "bewdaful" but costs a bucket. Most enterprise SSDs are pretty good nowadays anyway; just make sure you get the full specs of what they do to help with wear levelling and the like.

Haven't played with Fujitsu, but they have always been amongst the reputable bunch...
0
 
LVL 1

Author Closing Comment

by:marrowyung
ID: 38786520
Mark_wills,

It seems then that SSD is not very good for tempdb, as we shouldn't do too many writes; but tempdb object creation seems very random and doesn't write to the same bits all the time, right?
0
 
LVL 51

Expert Comment

by:Mark Wills
ID: 38790167
Well, tempdb gets used for all kinds of activity and will grow and shrink - it is even recreated when SQL is started again.

But because of that, with an enterprise SSD that supports error handling and bad blocks (auto-allocating a new write), it isn't such a bad thing for SSD at all...

I just wouldn't put too much else with it, and I would make sure it is a big enough SSD to cope with a possible reduction in the physical space.
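
If you do dedicate an SSD volume to tempdb, a minimal sketch of moving it there (the drive letter, paths, and sizes are assumptions; the default logical file names are tempdev and templog, and the files are created at the new location on the next SQL Server restart):

-- Point tempdb's files at a dedicated SSD volume (here the hypothetical S: drive).
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = N'S:\TempDB\tempdb.mdf',  SIZE = 8GB);
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = N'S:\TempDB\templog.ldf', SIZE = 2GB);
-- Restart the SQL Server service so tempdb is recreated on the SSD.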
0
 
LVL 1

Author Comment

by:marrowyung
ID: 38790453
"But because of that, with the Enterprise SSD that support error handling and bad blocks (auto allocates a new write), then it isnt such a bad thing for SSD at all..."

Mark_wills, by this kind of feature, is that mean the free space of SSD will decrease from time to time even nothing else new created ?
0
 
LVL 20

Expert Comment

by:SelfGovern
ID: 38790492
Q: Does the auto-bad block handling mean that available space decreases over time?

A: No; at least, not in any way that you will notice. Enterprise-class SSDs are always over-provisioned, and the "extra" space is reserved for use in automatic wear levelling and to allow for bad blocks. A drive that you purchase as 90 GB might actually have 128 GB of space, but only 90 GB is ever reported as available to the user. The rest is reserved by the drive for its own internal use.

So until you've gotten a whole bunch of bad blocks, you'll never see a decrease in the drive's total capacity.
0
 
LVL 51

Expert Comment

by:Mark Wills
ID: 38790732
SelfGovern is quite correct...

You should not see any degradation in good-quality SSDs because of over-provisioning.

You might see it in lesser-quality SSDs.

There are calculations that can be performed on the basis of thousands (and thousands) of writes to a block before it becomes useless... multiply that by the number of blocks - even just those in the over-provisioning - and you will find a lot of years.

But again, you need to check those specs carefully to make sure your SSD is a fit for your enterprise requirements - they are out there without question...
0
 
LVL 1

Author Comment

by:marrowyung
ID: 38791949
SelfGovern,

"So until you've gotten a whole bunch of bad blocks, you'll never see a decrease in the drive's total capacity. "

so finally if the system used up all reserved block, it still start using more new block and from that moment, the availiable will be less and less.

Mark_wills,

"There are calulations that can be performed on the basis of thousands (and thousands) of writes on a block before it becomes useless... multiply that by the number of blocks - even if just in the provisioning and you will find a lot of years."

where can I find that?
0
 
LVL 51

Expert Comment

by:Mark Wills
ID: 38792377
Well, you will need to get the specs for your SSD...

As a rule of thumb (maybe a bit generous), you should be able to expect write volumes (before a block becomes "bad"), which are actually called P/E cycles, of about 100K per block for SLC, about 30K per block for eMLC, and about 5K per block for MLC.

The trouble is that often a page (i.e. those 4 KB chunks) might be broken, and that can break the block (about 128 KB).

Actually, while looking, I found a great PDF from Dell: http://www.dell.com/downloads/global/products/pvaul/en/Solid-State-Drive-FAQ-us.pdf

It says :

Life [years] = (Endurance [P/E cycles] * Capacity [physical, bytes] * Overprovisioning Factor) / (Write Speed [Bps] * Duty Cycle [cycles] * Write % * WAF) / (36 *24* 3,600)

Parameters:
Endurance, NAND P/E Cycle: 100K SLC, 30K eMLC, 3K MLC
Capacity: Usable capacity of the SSD
Overprovisioning Factor: Over provision NAND percentage
Write Speed: Speed of write in Bytes per second
Duty Cycle: Usage duty cycle
Write %: percentage of writes during SSD usage
WAF: Controller Write Amplification Factor

You can even reverse-engineer it a bit for the "unknown" bits and pieces. For example, start with an acceptable lifespan (e.g. say 5 years) to work out the usage, and see if that is reasonable for the business operations you are supporting.
0
 
LVL 51

Expert Comment

by:Mark Wills
ID: 38792933
Ummm... I just noticed that the conversion to years should be days * hours * minutes * seconds,

so rather than (36 *24* 3,600) it should be (365.25 *24* 3,600).
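
To make that concrete with a purely illustrative calculation (every input below is an assumption, not a measured figure from any particular drive): take an eMLC drive with 30K P/E cycles, 400 GB of usable capacity, an over-provisioning factor of 1.28, an average write speed of 50 MB/s, a 100% duty cycle, 50% writes, and a WAF of 1.5. Then (30,000 * 400 GB * 1.28) / (50 MB/s * 1.0 * 0.5 * 1.5) comes to roughly 4.1 * 10^8 seconds, and dividing by (365.25 * 24 * 3,600) seconds per year gives an expected life of about 13 years.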
0
