Solved

HW recommendation

Posted on 2004-08-04
893 Views
Last Modified: 2006-11-17
Hi,

we plan to build an ASE server for an OLTP application and we need really quick response times (100-300 ms). The database will be quite small (less than 2 GB), as will the number of active connections (fewer than 5). In fact, it will be a kind of automatic quotation machine - there is a constant inflow of information about the market situation and we have to react to it very quickly. All the logic is in stored procedures.

We have a limited budget, so we are limited to Intel.

I have prepared a HW specification:
2 x Xeon 3.2 GHz with 1 MB Level2 cache
2 GB of RAM
SCSI Ultra320 RAID adapter with at least 256 MB cache (preferably 512 MB)
4 x 36 GB 15krpm Ultra320 SCSI disks with 8 MB cache configured as RAID 10 (i.e. stripe + mirror)

We will use Red Hat EL 3.0 and ASE 12.5.2 for Linux. I plan to assign 200 MB to a tmpfs device for tempdb, 200 MB to the OS, and the remaining RAM (roughly 1.6 GB) to ASE.
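
For illustration, this is roughly how I plan to set it up (the paths, sizes, and vdevno are just my working guesses, and the disk init syntax is from memory - I will verify it against the 12.5 docs):

# mount a 200 MB tmpfs for the tempdb device (run as root)
mkdir -p /sybase/tmpfs
mount -t tmpfs -o size=200m tmpfs /sybase/tmpfs

# create an ASE device on it and extend tempdb ("MYSERVER" is a placeholder)
isql -Usa -SMYSERVER <<'EOF'
disk init name = "tempdb_dev",
    physname = "/sybase/tmpfs/tempdb.dat",
    vdevno = 10, size = "190M"
go
alter database tempdb on tempdb_dev = 190
go
EOF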

I think it will be quite fast, but I wonder whether it will be as fast as possible within the given limits.

So my question is: is this the optimal setup for a small, fast OLTP server on Intel architecture? Would you recommend any changes? More (or less) RAM, disks, or cache on the RAID adapter? A quicker disk subsystem? Different RAM allocations?

Jan Franek
Question by:Jan_Franek
12 Comments
 
LVL 13

Assisted Solution

by:alpmoon
alpmoon earned 50 total points
ID: 11714924
I think the most critical part is memory, considering that you have fast CPUs. If the active part of the database is more than 1 GB, I suggest more memory. You will need data cache for tempdb as well, and you can't afford disk I/O when you need such speed. You should also estimate your procedure cache size and the memory needed for server structures.
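
For instance, something like this from isql shows where the memory currently goes (exact option names vary a bit between 12.5 releases, so check your version; "YOURSERVER" is a placeholder):

isql -Usa -SYOURSERVER <<'EOF'
sp_cacheconfig                            -- data cache layout and sizes
go
sp_configure "procedure cache size"       -- configured procedure cache
go
sp_monitorconfig "procedure cache size"   -- high-water usage since startup
go
EOF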
 
LVL 15

Assisted Solution

by:jdlambert1
jdlambert1 earned 50 total points
ID: 11717284
I concur with alpmoon -- put any extra bucks into RAM until you hit the motherboard/OS's max. You might also consider using IDE RAID instead of SCSI RAID -- not only can you save money, you can actually get better performance. 3Ware's IDE RAID controllers were created by ex-Adaptec engineers who wanted to maximize the efficiency of the data channels as well as the drive management, and they've done a great job. (See www.3ware.com)
 
LVL 14

Author Comment

by:Jan_Franek
ID: 11717780
Thank you both,

my first estimate was that the active part of the database is about 500 MB, so 1.6 GB should be enough for all the caches (data, log, tempdb, procedure), but I'll try to verify this estimate.

Do you have any experience with that Serial ATA RAID? I took a look at it and I'm afraid of the small cache. I expect that almost all disk I/O will be caused by log writes (or so I hope, because I'd like to keep all the data and tempdb in memory). So the disk subsystem should be as quick as possible for write operations, and I'm afraid that a small cache will affect this negatively. I'm aware that writing the log through a cache is quite dangerous, but the cache on the RAID adapter will be backed up by its own battery.
 
LVL 15

Expert Comment

by:jdlambert1
ID: 11717909
As to RAM, it's relatively inexpensive, and if you underestimate, or if conditions change later, too little RAM can be a major problem.

I've used Serial ATA RAID in a development environment with occasional huge spikes in activity and got excellent performance, but I wasn't analyzing bottlenecks, so I don't know how the disk I/O compared with the rest of the components.
 
LVL 19

Accepted Solution

by:grant300
grant300 earned 400 total points
ID: 11722584
Your disk configuration may not be optimal for what you are trying to accomplish.

Without knowing your application's characteristics (
 - read vs. write ratio
 - complexity of the queries
 - complexity of the stored procs doing the inserts/updates
 - quality of the data model
 - appropriate indexes and other tuning )
it is a bit hard to know how the server will react; however, there is a high likelihood that the LOG writes will take the most I/O bandwidth, followed by tempdb.  You have mitigated the tempdb I/O issue a good bit by placing it on a tmpfs file system.

The LOG I/O is another problem altogether, and I am not sure that 4 drives in a RAID 10 configuration is adequate:
- Having an even number (and a power of two) of drives in the stripe set leads to hot spots, because the database device data structures are built that way.  You are likely to hit the first mirror in the stripe harder than the second.
- The chunk size on most (but not all) RAID controllers is fairly large, e.g. 512 KB, which presumes you are using striping to optimize large serial transfers.  With lots of small transactions, your log writes are likely to be substantially smaller than that.  In my experience, setting the chunk size to the size of one track on the disk is about optimal.
- You are going to have periods during checkpoint operations when you will experience higher log write latency because of competition for the drives.

RAID 10 is ideal for databases IF you have a much larger database than the 2-3GB you are talking about AND you have separate devices for the logs and maybe the indexes.

I would consider the following:
- Run the 4 drives as 2 RAID 1 pairs
- Put the log(s) on one pair and the data on the other
- Stick with Ultra-320 15K rpm drives
- Make sure you use a dual-channel SCSI controller.  The Adaptec 39320A-R is a good choice if you are on a budget.  The Adaptec 2200S is more expensive, but it can be fitted with a battery backup for the cache memory, which might save your butt one day; it is twice the money though.  Most of the dual-channel controllers are optimized to split reads between the two drives in a mirrored set, not to mention avoiding latencies on the bus.
- Make sure your server has a PCI-X slot for the RAID controller and that you are using a 64-bit 133 MHz slot and SCSI/RAID board.
- Set up the ethernet on the box to run at 100 Mb, NOT 1 Gb (see the sketch after this list).  Gigabit ethernet is very resource hungry and will tend to use a lot of CPU at a high priority, even if only in short bursts.  You don't want the database competing with the network any more than it has to, and you really won't see a performance penalty for small transactions: most of them will fit in one network packet (1500 bytes), which has a transmission latency of only about 120 MICROseconds at 100 Mb.  Benchmarks of TCP/IP protocol processing overhead show that you will use about 1 GHz of Intel processor to drive a 1 Gb ethernet NIC.  If you just have to have 1 Gb ethernet, look at one of the new TOE cards.  TOE stands for TCP/IP Offload Engine; these cards put most of the IP protocol smarts right on the card so the processor doesn't get loaded down.
- Get a 3rd pair of drives to use for the O/S, software, swap space, etc.
- Seriously consider going to a dual-Opteron machine instead of the Xeons.  You are still running 32-bit code, but the operating environment is better suited to this kind of work.  Unfortunately, I have not seen any Opteron/32-bit app vs. Xeon benchmarks yet; it is just a gut feeling.
- Don't bother with more than 4 GB of memory in the box, because you are still running a 32-bit operating system.  No process can access more than about 2.5 or 3 GB no matter what, and Sybase has a single chunk of shared memory which can't be bigger than that limit.  My experience on Linux has been that ASE does not rush out and try to use all the memory you have configured it for.  I suspect your memory plan will be about right; you may even want to reduce the amount of memory that Sybase gets by a couple hundred MB if you find that the O/S is paging.
- Use raw partitions for the Sybase devices (see the sketch after this list).  Yes, I know, it is a great new feature to be able to use filesystem devices with the async I/O option turned on.  Unfortunately, I have had less than good success with this on Linux; it seems to work fine on Solaris, so go figure.  With your small DB size and a decent amount of memory, I would not worry about missing out on the filesystem caching advantage.  You can still turn async I/O on, and the bulk of the writes go through controller and/or drive cache anyway.
- Spend the money and get either Red Hat ES-3 or Novell/SUSE SLES 8.  I had a client try for months (against my advice) to get by with the stuff (Red Hat) they could get for free, and there were all kinds of stability issues.  Bottom line: pay for the operating system that Sybase is actually certified on.  Personally, I like SUSE because they are not trying to hold you up with a draconian "service contract" and, for what it is worth, IBM has pretty much switched to the SUSE horse completely for their industrial-strength servers.
- Partition the drives into three partitions and place the highest-traffic data on the middle one (see the sketch after this list).  This is a bit obscure and requires a little explanation.  What you are trying to do is take advantage of the sweet spot in the middle of the head assembly's travel.  If you can keep the heads in the middle most of the time, the average access time drops somewhat; if you can get it to go from, say, 7 ms to 5.5 ms, you have improved the performance of the slowest component of the system by better than 20%.  What I usually do is create slice 1 as 40% of the drive, slice 2 as 20%, and slice 3 as 40%.  That puts the middle slice slightly outside the geometric middle, because there are more sectors on the outer tracks than the inner, but we won't split hairs.  On the log drive pair, you are obviously going to put the Sybase raw device on slice 2.  For the data drive pair, create a device on slice 2 and put the indexes there.  Put the data on slice 3: since you don't have that much, it will pack in right behind slice 2 instead of out at the far edge of the drive.
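
To make the raw-device, slicing, and NIC suggestions concrete, here is a rough sketch.  The device names, sizes, and vdevno values are examples only (not a tested recipe), and the disk init syntax is from memory, so check the 12.5 manuals:

# assume the mirrored pairs show up as /dev/sda (logs) and /dev/sdb (data);
# create three primary partitions on each with fdisk (40% / 20% / 40%),
# then bind the interesting slices as raw devices (RHEL 3 raw(8)):
raw /dev/raw/raw1 /dev/sda2    # middle slice of log pair  -> log device
raw /dev/raw/raw2 /dev/sdb2    # middle slice of data pair -> index device
raw /dev/raw/raw3 /dev/sdb3    # last slice of data pair   -> data device
chown sybase:sybase /dev/raw/raw1 /dev/raw/raw2 /dev/raw/raw3

# force the NIC to 100 Mb full duplex (mii-tool ships with RHEL 3):
mii-tool -F 100baseTx-FD eth0

# create the ASE devices on the raw partitions ("MYSERVER" is a placeholder):
isql -Usa -SMYSERVER <<'EOF'
disk init name = "logdev",  physname = "/dev/raw/raw1", vdevno = 2, size = "2G"
go
disk init name = "idxdev",  physname = "/dev/raw/raw2", vdevno = 3, size = "2G"
go
disk init name = "datadev", physname = "/dev/raw/raw3", vdevno = 4, size = "4G"
go
EOF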


All of this assumes that the active data AND, more importantly, the indexes have enough elbow room to stay in memory.  You can manage this to a great extent with named caches and similar tricks.  You want to separate the index and data caches: since you have enough memory to keep all the indexes in memory, you don't want data pages pushing index pages out.
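
A rough sketch of what I mean; the cache names, sizes, and object names are made up, so verify the syntax (and whether your version wants a restart) against the 12.5 docs:

isql -Usa -SMYSERVER <<'EOF'
-- dedicated caches so data pages cannot push index pages out
sp_cacheconfig "index_cache", "300M"
go
sp_cacheconfig "hotdata_cache", "700M"
go
-- bind a (hypothetical) table and its index to separate caches
sp_bindcache "index_cache", "quotedb", "quote", "quote_ix"
go
sp_bindcache "hotdata_cache", "quotedb", "quote"
go
EOF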

BTW, don't fall in love with the common advice to make the primary key on each table a clustered index.  Unless you are always adding new rows at the end of the index AND all of your updates can be done in place, reorganizing a clustered index can lead to high and/or inconsistent latencies.  It also makes it impossible to split the index and table into different named caches.
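
For example, with a hypothetical table, you would declare the PK nonclustered explicitly:

isql -Usa -SMYSERVER <<'EOF'
create table quote (
    quote_id  numeric(10,0) identity not null,
    symbol    varchar(12)   not null,
    px        money         not null,
    constraint quote_pk primary key nonclustered (quote_id)
)
go
EOF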

Don't put ANYTHING else on the server.  Period.  If you are worried about consistent latencies and you can't afford more box, keep the other stuff like the report writer, NFS server, print service, etc. off the box.  Buy a cheap box for that kind of thing.

DO put the application that receives the transactions and does the updates on the server if you can, BUT code it as efficiently as possible.  If it is a real hog, get it off there and put it on a private network with the server: just a NIC in each box and a crossover cable.
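
A minimal sketch of the private-network idea; the interface names and addresses are examples:

# on the database server (RHEL 3 style), second NIC plus crossover cable:
ifconfig eth1 192.168.10.1 netmask 255.255.255.0 up
# on the application box:
ifconfig eth1 192.168.10.2 netmask 255.255.255.0 up
# then point the client entry in the Sybase interfaces file at
# 192.168.10.1 so that traffic stays off the public LAN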

Hope that helps,

Bill
 
LVL 19

Expert Comment

by:grant300
ID: 11726306
One more thing I forgot.

Do not put tape or CD/DVD devices on the Ultra320 SCSI channels.  The buses run at the speed of the slowest device on the bus, so your very high-speed disks will get throttled down if you do.

Bill
 
LVL 14

Author Comment

by:Jan_Franek
ID: 11734797
Thank you Bill, your answer looks very valuable; I'll try your suggestions. I have a question about the RAID adapter - would you prefer a 64-bit/133 MHz PCI adapter with 256 MB of cache, or a 64-bit/66 MHz PCI adapter with 512 MB of cache?
 
LVL 19

Expert Comment

by:grant300
ID: 11735962
Good question.

During "normal" operation it probably does not make a difference.  I am guessing you will see the advantages of the 64-bit/133 MHz adapter when you do data loads, reporting, backups, etc. - things that require sustained I/O.  You have very fast drives on two channels, so in theory ;-) you can use the additional bus bandwidth.
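
Back-of-the-envelope numbers, theoretical bus peaks only (real-world throughput will be lower):

  64-bit PCI-X @ 133 MHz : 8 bytes x 133 MHz ~= 1064 MB/s
  64-bit PCI   @  66 MHz : 8 bytes x  66 MHz ~=  533 MB/s
  2 Ultra320 channels    : 2 x 320 MB/s       =  640 MB/s

Only the 133 MHz slot has the headroom to feed both SCSI channels flat out; with small random OLTP I/O you will rarely get near either limit, which is why the cache size may matter more day to day.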

Which adapters are you considering?

Bill.
 
LVL 14

Author Comment

by:Jan_Franek
ID: 11736181
I have offers from HP (Smart Array 6402 - PCI-X 133 MHz with 256 MB cache), IBM (ServeRAID 6M - PCI-X 133 MHz with 256 MB cache) and ICP Vortex (GDT8x24RZ - PCI-X 66 MHz with 512 MB cache).
 
LVL 19

Expert Comment

by:grant300
ID: 11738820
O.K., good.  You are looking at serious caching controllers.

Take a look at <http://www.tweakers.net/benchdb/test/80>.  They have benchmarks on a bunch of different controllers.

I would pass on the HP.  The benchmarks show it to be no better than half as fast as the LSI MegaRAID, and I bet it is more than half the money.

I don't know anything about ICP Vortex and, as such, I would be concerned about potential support and driver availability issues.

The two I would look at are the IBM ServeRAID 6M with 256 MB and the LSI MegaRAID 320-2X with 256 MB.  The LSI can be configured with 128, 256, or 512 MB of cache, and the IBM can be bumped up to 512 MB.  The IBM comes with battery backup built in, and you can get a battery backup option module for the LSI.

I have not found any benchmarks on the IBM, so that one is kind of a crap shoot.  My recent experiences with IBM have been pretty good, and they do build some pretty high-performance, scalable x86 servers these days.

The LSI MegaRAID benchmarks best of anything tested on the tweakers site.  Driver support is there, and they are widely used by folks like Dell.  Were it me, I would buy the LSI.

BTW, what server hardware are you looking at?  I am curious because so many of the "servers" on the low end come with some kind of RAID controller embedded or as a board option.

Bill

 
LVL 14

Author Comment

by:Jan_Franek
ID: 11751552
Hi Bill,

thanks for the link; it's interesting even though I can't read Dutch (or whatever language it is). It's just a pity they didn't test comparable adapters - for example, a test of the MegaRAID 320-2X (64-bit/133 MHz) and the MegaRAID 320-2 (64-bit/66 MHz) with the same cache would show the benefit of the wider PCI bus. However, from the high performance of that 66 MHz MegaRAID 320-2 with 256 MB of cache, it seems the benefit of the wider PCI bus is not too big and it's probably the size of the cache that matters more.

According to http://www.adaptec.com/worldwide/company/pressrelease.html?sess=no&prodkey=06112003&language=English+US ICP Vortex is a subsidiary of Adaptec (and was formerly a subsidiary of Intel). I did some research on the availability and support of these controllers in the Czech Republic. I was warned about a problem with LSI support; it seems they have just a distributor here and no local support. ICP Vortex and IBM seem to have no problem with support.

Currently, we have 3 candidates for the server HW - IBM xSeries 235, HP ProLiant ML370T03 and Dell PowerEdge 2600 - each with 2 x Xeon 3.2 GHz with 1 MB L3 cache, 2 GB of RAM, redundant power supply, etc.

The OS will be Red Hat Enterprise Linux 3.0 ES and it will be a dedicated database server, with one of the data processing applications sitting right on it; the second one will run elsewhere (it's third-party SW running only on Windows).

I will do some research on those TOE network adapters.

Thank you for your valuable help.

jano

 
LVL 19

Expert Comment

by:grant300
ID: 11753990
FWIW, I would take the IBM xSeries 235 with the IBM RAID adapter.  One source for the products, pre-certified to work together, and one throat to choke if something doesn't work.  Also, it is pretty hard to beat IBM's support around the world.

Best of luck,

Bill
