Avatar of NytroZ

asked on

Database server configuration advice

I'm looking to buy a new server to run Oracle 11g. What is the recommended hardware configuration for the server? I've read that I should have at least 3 separate sets of disks. Should I use a SAN for this? Can someone show me 3 separate options: low, medium, and high cost?
Avatar of slightwv (Netminder)

These types of questions are really difficult to answer because only you know your specific requirements.

With that said, before we can even attempt to respond we need a LOT more information.

Just off the top of my head:
Number of databases this should support?
Type of databases (OLTP, DSS, mix).
Number of concurrent users.
Size of database(s).
Estimated number of transactions over some period.
Where does the business logic for the apps reside (front end, middleware or in the database itself)?
etc...
Avatar of NytroZ

ASKER

I will get these answered tomorrow.

Thank you
> Should I use a SAN for this?

Unless you are doing shared-storage clustering (which with Oracle means RAC), there's no performance advantage to a SAN over DAS. Price for price, DAS is always faster than a SAN, since a SAN in effect takes the disks out of the server and puts them in another box, which moves them further away from the CPU.
Avatar of NytroZ

ASKER

We are not using shared storage clustering. We run a single OLTP database that is currently about 300GB.

The business logic for the app runs in 3 layers: the web server, the review server, and the database server. The database server does a 50/50 mix of reads and writes and handles about 100 transactions/second.

Someone suggested I have a mirrored set for the OS and the Oracle software, a mirrored set for the log files, and a RAID 5/10 set for the database. Is this a common configuration? Is there something better? We would like to get something configured for $10-15K.
Oracle on RAID-5 typically isn't the best choice; writes are slow in RAID-5. You should try to put everything Oracle-related on RAID-10.

That said: I've run Oracle databases (with logfiles, etc.) on RAID-5. I did it because of the sysadmin staff I had: RAID-5 was the 'easiest' to recover when a disk failed, and I didn't trust them to break the mirrors to replace a disk and rebuild the mirrors in a RAID-10 setup.
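
To put rough numbers on that write penalty, here's a back-of-the-envelope sketch; the per-disk IOPS figure and the penalty values are common rules of thumb, not measurements from any particular array:

```python
# Rough host-visible IOPS for a small array under RAID 10 vs RAID 5.
# DISK_IOPS and the write penalties are rules of thumb, not measurements.

DISK_IOPS = 185            # typical figure quoted for a 15K RPM SAS drive
WRITE_PENALTY = {
    "RAID 10": 2,          # each host write lands on both mirror halves
    "RAID 5": 4,           # read data + read parity + write data + write parity
}

def host_iops(disks, read_fraction, level):
    """Host-visible IOPS given a read/write mix and a RAID level."""
    raw = disks * DISK_IOPS
    # raw back-end IOPS = reads + penalty * writes
    return raw / (read_fraction + WRITE_PENALTY[level] * (1 - read_fraction))

for level in ("RAID 10", "RAID 5"):
    print(f"{level}: ~{host_iops(8, 0.5, level):.0f} IOPS from 8 disks, 50/50 mix")
# RAID 10: ~987 IOPS; RAID 5: ~592 IOPS from the same spindles.
```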

Back to server specs:
300GB for the database itself. You then have archived redo logs and backups to account for. We do disk-to-disk-to-tape backups, and I keep 7 days' worth of backups and archived redo logs on disk at any one time. Therefore, my backups take up more space than my actual database.

You need to think about this.
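
To put illustrative numbers on it, a quick sketch; every input here (daily redo volume, compression ratio, how many fulls you keep) is an assumption to replace with figures from the real system:

```python
# Illustrative space budget for 7 days of on-disk backups plus archived redo.
# All inputs are assumptions; substitute real numbers from your own system.

DB_SIZE_GB = 300
DAILY_REDO_GB = 20        # assumed archived redo generated per day
RETENTION_DAYS = 7
FULL_BACKUPS_KEPT = 2     # assumed: keep the last two full backups on disk
COMPRESSION = 0.5         # assume backups compress to roughly half size

backups_gb = FULL_BACKUPS_KEPT * DB_SIZE_GB * COMPRESSION
redo_gb = RETENTION_DAYS * DAILY_REDO_GB
print(f"backups ~{backups_gb:.0f} GB + archived redo ~{redo_gb:.0f} GB "
      f"= ~{backups_gb + redo_gb:.0f} GB on top of the {DB_SIZE_GB} GB database")
# Even with modest assumptions, the backup area rivals the database itself.
```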

You didn't post anything about the proposed number of concurrent database connections.

Once you account for your disk space requirements and setup, you want as much RAM as possible. You also want as many CPUs (and/or cores) as you can afford, but Oracle license costs can drive this.

I've not priced servers for a while but $15K sounds reasonable.  I anticipate your disks will be the biggest chunk of this.

A 300GB database should perform well with 16GB of RAM. Your mileage may vary...

You typically want the fastest disks you can get and as many of them as you can get.  Spend everything else on RAM and CPUs.
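
If it helps to turn "as much RAM as possible" into concrete starting parameters, here's a hedged rule-of-thumb split for a dedicated database server; the percentages are assumptions to tune, not Oracle recommendations:

```python
# Rule-of-thumb starting split for RAM on a dedicated Oracle server.
# The ratios are assumptions to tune against the real workload.

TOTAL_RAM_GB = 16
OS_RESERVE_GB = 2            # leave headroom for the OS, agents, monitoring
ORACLE_SHARE = 0.80          # assumed share of the remainder given to Oracle

oracle_gb = (TOTAL_RAM_GB - OS_RESERVE_GB) * ORACLE_SHARE
sga_gb = oracle_gb * 0.75    # shared memory: buffer cache, shared pool, ...
pga_gb = oracle_gb * 0.25    # per-session work areas: sorts, hash joins, ...
print(f"SGA target ~{sga_gb:.1f} GB, PGA target ~{pga_gb:.1f} GB")
# ~8.4 GB SGA / ~2.8 GB PGA as a first guess on a 16 GB box.
```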
Avatar of NytroZ

ASKER

Do you think I should have 4 sets of disks?

1. OS
2. Archived redo
3. Backup
4. Database
>>Do you think I should have 4 sets of disks?

Oddly enough, that is exactly what I have.

The one caveat: make sure they are 4 separate sets of physical disks. With all the volume managers out there these days, you need to be careful about what is actually going onto which physical disks.

This can get expensive when you start talking RAID 10. I believe the minimum number of physical disks for a RAID 10 set is 4, so 4 'sets' of RAID 10 is 16 physical disks.

There is also the school of thought that you can lump all the disks together and, with striping, reads and writes will even themselves out over time.

Does it work?  I don't know.  Never really tested it.
You have your backup disks in the same server as production? Sounds risky to me; I like to see the backup on a separate system, just in case the PSUs explode.

If it's only 300GB you may as well use SSDs in RAID 1 for the data. Logs are fairly sequential, so physical disks will do for them, and the OS doesn't do much after it's booted, so that would fit on spinning platters too. Sure, the SSDs will wear out, but you can pay for the replacements in a couple of years rather than up front.

You have 3 tiers, though, which suggests virtualizing so you can have a separate OS instance for each tier and avoid one process stealing all the RAM. Hardware-wise, RAM is so much cheaper than it used to be that you can afford several 16GB sticks for a single-server solution.
>>You have your backup disks in the same server as production?

Disk-to-disk-to-tape. I perform a nightly database backup to disk. The nightly incremental tape backup picks those up, followed by a weekly full tape backup.

>>logs are fairly sequential so physical disks will

Oracle archived redo logs are really important for point-in-time recovery. Lose a single one and you can only recover to the point in time covered by the log before the missing one.

You want to do everything you can to protect them.  This is one reason I keep 7 days of backups on disk even though they are all already on tape.  Any single file I may ever need had better be on several different tapes.

I've seen way too many tapes snap when trying to read them...

>> that suggests virtualizing

Oracle has a pretty strict support policy when dealing with virtualization. VMware is not supported, and only recently is Hyper-V on Windows Server 2012 supported.

I suppose they more or less have to support Oracle's own virtualization products, but I've not seen anything that states that for a fact.

>>you may as well use SSDs in RAID 1

Trust me, I would love solid-state drive arrays, but talk about pricey compared to the alternatives. Remember, the budget is only $15K total. If prices have changed, I'll need to take a look myself!
Avatar of NytroZ

ASKER

Some interesting comments...

First, what do you mean by "VMware is not supported"? Our current environment runs on ESXi 5.1. Should I consider moving this to a physical server?

Second, what do you mean by SSDs wearing out? 400GB SSDs are $2,400 each.

Would I be better off looking at a Dell 910, or a Dell 410 with DAS (MD3200)?
NAND flash doesn't last forever: after you rewrite a cell enough times (on the order of 100,000 times for SLC, far fewer for MLC) it stops working. It's a bit like the milk-jug effect with batteries: if you keep filling and emptying a milk jug, it stops working after a time because of all the gunk that sticks to the sides.
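
To put a rough lifetime on "wears out", here's a hedged back-of-the-envelope estimate; the cycle rating, write amplification, and daily write volume are all illustrative assumptions to check against the drive's datasheet and the real workload:

```python
# Back-of-the-envelope SSD endurance estimate. Every input is an assumption;
# the real figures come from the drive's datasheet and measured write load.

DRIVE_GB = 400
PE_CYCLES = 5_000            # assumed program/erase rating for value MLC
WRITE_AMPLIFICATION = 2.0    # assumed controller overhead; workload-dependent
DAILY_WRITES_GB = 200        # assumed host writes per day

lifetime_writes_gb = DRIVE_GB * PE_CYCLES / WRITE_AMPLIFICATION
years = lifetime_writes_gb / DAILY_WRITES_GB / 365
print(f"~{years:.0f} years of life on these assumptions")
# 400 * 5000 / 2 = 1,000,000 GB of host writes; at 200 GB/day that's ~14 years,
# so endurance is a budget line item rather than a showstopper here.
```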

If a pair of 400GB SSDs at $5,000 will do the job, that's significantly cheaper than an MD1200 full of small/fast disks plus a PERC H800. (The MD3200 adds its own RAID controllers to get the required IOPS; the MD1200 is plain DAS behind the server's controller.)

Is it possible to get the IOPS from your current system? That'll help a lot in sizing the disk subsystem.
Avatar of NytroZ

ASKER

I will try to get the IOPS data that you are requesting.
Avatar of NytroZ

ASKER

approximately 150 IOPS
Oh, that's minimal; a single 10K disk can just about do that, so no need for SSDs. 4 disks in RAID 10 for the database would give a bit of leeway.
Avatar of NytroZ

ASKER

That is an average, but it spikes to nearly 1,000. We are going to run a small stress test shortly and get some better data.
>>First, What do you mean by "VMware is not supported"?

They have a Support note on this:
Support Position for Oracle Products Running on VMWare Virtualized Environments (Doc ID 249212.1)

In a nutshell:  If it isn't a known problem or it cannot be reproduced on a physical server, no support.
Avatar of NytroZ

ASKER

Correction: the average is 1,000 IOPS, and it spikes up to 1,500 IOPS.
Figuring a 15K disk at 185 IOPS, you'll need 4 disks to handle 750 read IOPS and 8 more to handle 750 write IOPS, since each write goes to 2 disks in RAID 10: 12 x 146GB 15K drives in total.
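
Spelling that arithmetic out (same assumptions as above: ~185 IOPS per 15K spindle, a 50/50 read/write mix at the 1,500 IOPS peak, and RAID 10 doubling every write):

```python
# The spindle-count arithmetic from the post above, made explicit.
PEAK_IOPS = 1500
READ_FRACTION = 0.5       # the 50/50 read/write mix reported earlier
DISK_IOPS = 185           # assumed per-spindle figure for a 15K drive

read_iops = PEAK_IOPS * READ_FRACTION          # 750
write_iops = PEAK_IOPS - read_iops             # 750
# RAID 10 mirrors every write, so the back end sees double the write IOPS.
read_disks = read_iops / DISK_IOPS             # ~4.1
write_disks = 2 * write_iops / DISK_IOPS       # ~8.1
print(f"~{read_disks:.0f} + ~{write_disks:.0f} "
      f"= ~{read_disks + write_disks:.0f} spindles (use an even number)")
```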

Using Dell R720xd pricing, 12 x 146GB 15K = $4,311, plus $200 for the backplane upgrade to support that many SFF disks.

Using a pair of 400GB SATA Value MLC 3Gbps SSDs instead, that's $3,464. I'm not sure whether you will get away with the cheap MLC SSDs; as you say, the higher-priced SAS SLC SSDs would be a bit more expensive than the traditional spinning-disk option.

I haven't covered the OS and redo log disks in the above, nor the backup, which I would prefer were in a separate server; but that's down to how you back up your other machines, and you might have a central backup server already.

Choice of CPU is difficult with Oracle's per-core (rather than per-processor) licensing. Normally you'd want 4 or more cores, but the 3GHz dual-core E5-2637 will keep that licensing cost down, although that chip costs more than a slower-clocked quad- or 8-core chip. I'm not sure if disabling cores in the BIOS lets you claim only two cores active on a 4-core chip with Oracle.
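
For a feel of why the dual-core chip keeps costs down, a hedged sketch of the per-core arithmetic; the 0.5 core factor for x86 and the list price are assumptions to verify against Oracle's current core factor table and price list:

```python
# Illustrative Oracle per-core license arithmetic. The core factor and the
# list price are assumptions; verify against Oracle's current documents.
import math

SOCKETS = 2
CORES_PER_SOCKET = 2      # e.g. the dual-core E5-2637 mentioned above
CORE_FACTOR = 0.5         # assumed factor for x86 processors
LIST_PRICE_USD = 47_500   # assumed Enterprise Edition per-processor list price

licenses = math.ceil(SOCKETS * CORES_PER_SOCKET * CORE_FACTOR)
print(f"{licenses} processor licenses, roughly ${licenses * LIST_PRICE_USD:,}")
# 4 cores * 0.5 = 2 licenses; a pair of quad-core chips would double that.
```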

RAM's so cheap nowadays you may as well add plenty; 16GB sticks are cheapest per GB, although with several smaller sticks you get more memory bandwidth.
>>Not sure if disabling cores in BIOS lets you claim two cores active on a 4 core chip with Oracle.

I'm not either. Oracle changes licensing all the time. I remember that at one point disabling cores didn't matter, but they may have relaxed this. Only Oracle Support or Sales can confirm what is or is not allowed.
Avatar of NytroZ

ASKER

Why recommend the 146GB disks over the 300GB disks? How many more IOPS can an SSD handle vs. the 15K SAS drives?

Would the following specs be acceptable allowing for growth?

OS: 2 x 146GB 15K SAS drives, RAID 1
Redo logs: 2 x 146GB 15K SAS drives, RAID 1
Database: 4 x 400GB SSD, RAID 10
Backup: sent to backup server
ASKER CERTIFIED SOLUTION
Avatar of Member_2_231077

Avatar of NytroZ

ASKER

What is considered high regarding disk latency?
Latency is somewhere between 10 and 20ms on a hard disk, and about 1ms on an SSD for writes.

BTW, you can probably use RAID 5 with the SSDs.