Solid State Drives vs. RAM - Scaling a Business Intelligence Application

Hi all,

We have a business intelligence application, and it is very memory intensive. Currently we have 256 GB of RAM on the server, and we are planning to increase that to 1 TB. The other option we are considering is to use SSDs instead of more RAM, so that even when the application swaps, the performance hit is not too bad. Any thoughts on this?

Can SSDs be used in place of RAM?

thanks
-amit
anshumaEngineeringAsked:

sdernCommented:
I understand your train of thought, but SSDs are not the same as RAM (memory), although they do perform much better than hard disk drives for random access. If you were to rely on SSDs instead of RAM, you would still notice a drop in performance. That said, your application may not need a full 1 TB of memory to operate; the more memory you have, the less swapping is required, and the better the performance you will see. Are you developing this application?
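For a quick sense of how much the server is actually leaning on swap today, a minimal sketch along these lines can help; it assumes a Linux host with the third-party psutil package installed, neither of which is stated in the question:

    import psutil

    mem = psutil.virtual_memory()
    swap = psutil.swap_memory()

    print(f"RAM  used: {mem.used / 2**30:6.1f} GiB of {mem.total / 2**30:.1f} GiB ({mem.percent}%)")
    print(f"Swap used: {swap.used / 2**30:6.1f} GiB of {swap.total / 2**30:.1f} GiB ({swap.percent}%)")
    # sin/sout are cumulative bytes swapped in/out since boot; if sout keeps
    # climbing during heavy processing, faster SSD-backed swap will be felt,
    # but more RAM would remove that paging entirely.
    print(f"Swapped in: {swap.sin / 2**30:.1f} GiB, swapped out: {swap.sout / 2**30:.1f} GiB since boot")
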
bill30Commented:
SSDs, depending on which ones you have, may read faster but write slower than a SCSI drive. Where you really get the performance boost from SSDs is in access time: there are no platters to spin up and no heads to seek, so in essence an SSD is like a hard drive running at 100% speed at all times.

Running two or more SSDs in RAID 0 gets you a lot more performance out of them.
Obviously there is no fault tolerance, but you will get some kick-butt speeds doing it.
http://hothardware.com/newsimages/Item7823/iometer-intel-ssd-raid0.png

Here is an article about a hybrid system: RAM for small, highly accessed files; SSD for larger, highly accessed files; and SCSI for the rest. They also implement a caching layer to help with the load.
http://code.google.com/p/cachefs/
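For illustration, a rough Python sketch of that hybrid idea follows; the SSD mount point and RAM-tier size are made-up assumptions, and this is not the cachefs project's actual design or API:

    import os
    from collections import OrderedDict

    SSD_CACHE_DIR = "/ssd/cache"      # assumed SSD mount point
    RAM_CACHE_MAX_ITEMS = 1024        # assumed size of the RAM tier

    class TieredCache:
        def __init__(self):
            self.ram = OrderedDict()  # small LRU tier kept in memory
            os.makedirs(SSD_CACHE_DIR, exist_ok=True)

        def get(self, key, load_slow):
            # 1) RAM tier: fastest, smallest
            if key in self.ram:
                self.ram.move_to_end(key)
                return self.ram[key]
            # 2) SSD tier: bigger, still fast for random access
            path = os.path.join(SSD_CACHE_DIR, key)
            if os.path.exists(path):
                with open(path, "rb") as f:
                    value = f.read()
            else:
                # 3) slow tier: the SCSI array, or recomputing the data
                value = load_slow(key)
                with open(path, "wb") as f:
                    f.write(value)
            # promote to RAM, evicting the least recently used entry if full
            self.ram[key] = value
            if len(self.ram) > RAM_CACHE_MAX_ITEMS:
                self.ram.popitem(last=False)
            return value

The only point is the lookup order: RAM first, SSD second, slow storage last.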
Hutch_77Commented:
Everything said above is accurate, but one thing is left out: part of the issue is the SATA bus speed.

An idea to look at, if your server can take it, is to add PCIe SSDs and RAID them together.

You can then put a bigger swap file on the SSDs, which will help with some of the issues. It will not take the place of RAM, but as noted above about access times, loading files will be faster and you will see an improvement.

If it is that memory intensive, I think the better solution is to start distributing the application across a cluster to take some of the load away.

LMiller7Commented:
There is no substitute for RAM.
Using SSDs for disk storage would improve the speed of paging, and that would improve performance. Adding more RAM would reduce paging and also improve performance. There are no easy answers as to which would be more beneficial. It depends on the performance characteristics of your hardware, the nature of your data and how it is accessed, and more. An analysis of your performance data would help here; I don't expect this would be simple.
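As a sketch of what collecting that performance data could look like, assuming a Linux host with psutil available and a hypothetical output file, something like this can be left running across a few peak days:

    import csv
    import os
    import time

    import psutil

    SAMPLE_INTERVAL_S = 60
    OUTPUT = "memory_samples.csv"     # hypothetical output file

    write_header = not os.path.exists(OUTPUT) or os.path.getsize(OUTPUT) == 0
    with open(OUTPUT, "a", newline="") as f:
        writer = csv.writer(f)
        if write_header:
            writer.writerow(["timestamp", "ram_used_gib", "ram_percent",
                             "swap_used_gib", "swapped_out_gib"])
        while True:
            mem = psutil.virtual_memory()
            swap = psutil.swap_memory()
            writer.writerow([int(time.time()),
                             round(mem.used / 2**30, 1), mem.percent,
                             round(swap.used / 2**30, 1),
                             round(swap.sout / 2**30, 1)])
            f.flush()
            time.sleep(SAMPLE_INTERVAL_S)
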
Hutch_77Commented:
@LMiller7 hit the nail on the head.
anshumaEngineeringAuthor Commented:
Cool,

Well, the application is already clustered, but its load balancing is really stupid. It balances on the number of jobs running on each node, not on the type of job.

So a cube publication job that requires a lot of RAM sometimes goes to a node that is already processing another cube, and that node blows up on memory, while the other two nodes, which still have plenty of memory left, keep processing a larger number of puny jobs. We are working with the vendor to fix that, but until then we will not be able to expand the number of cubes we have.

Here are some hardware characteristics: Sun X4600s with 256 GB of RAM each, in a 3-node cluster. During the peak days of the week (Sunday, Monday, Tuesday), two of the three nodes reach about 256 GB and stay on the edge of being shut down. The memory gets freed on Wednesday once the cube processing is done and peak loads are over.

My manager says he doesn't want to buy memory just for the peaks, as it is a very expensive ask. But from my experience with the application, I know that the more you expand its usage, the more RAM it needs.

Since he's against buying more RAM, I am thinking of trying SSDs. By the way, how do you do capacity planning for memory? Do you go by peaks or by averages?
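Purely for illustration, here is a toy Python sketch of the difference between balancing on job count and balancing on memory headroom; the node and job numbers are invented, and this is not the vendor's scheduler:

    from dataclasses import dataclass

    @dataclass
    class Node:
        name: str
        total_gib: int
        used_gib: int
        job_count: int

        @property
        def free_gib(self):
            return self.total_gib - self.used_gib

    def pick_by_job_count(nodes):
        # what the current balancer does: fewest running jobs wins,
        # even if that node has almost no memory left
        return min(nodes, key=lambda n: n.job_count)

    def pick_by_memory(nodes, estimated_job_gib):
        # memory-aware choice: only nodes with enough headroom qualify,
        # then prefer the one with the most free RAM
        candidates = [n for n in nodes if n.free_gib >= estimated_job_gib]
        if not candidates:
            raise RuntimeError("no node has enough free memory; the job must wait")
        return max(candidates, key=lambda n: n.free_gib)

    nodes = [Node("node1", 256, 240, 1),   # one huge cube build, nearly full
             Node("node2", 256, 60, 5),    # many small jobs, plenty of RAM left
             Node("node3", 256, 80, 4)]
    print(pick_by_job_count(nodes).name)   # node1 -- the node that would blow up
    print(pick_by_memory(nodes, 100).name) # node2
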
Hutch_77Commented:
I go by average but keep peaks in mind. If a peak is sustained for a long time, that is the level you need to provision for.

If you are pegging 256 GB for three days straight, it is time to add more RAM; riding at the ceiling like that is insane.

I don't think SSDs will help in this instance nearly as much as you would like.
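To put rough numbers on the average-versus-peak question, here is a small sketch that reads the samples written by the earlier logging sketch; the file name and the 95th-percentile cutoff are assumptions, not a recommendation from the thread:

    import csv
    import statistics

    with open("memory_samples.csv") as f:
        rows = list(csv.DictReader(f))
    used = sorted(float(r["ram_used_gib"]) for r in rows)

    average = statistics.mean(used)
    p95 = used[int(0.95 * (len(used) - 1))]  # crude 95th-percentile "peak"
    print(f"average: {average:.0f} GiB, 95th percentile: {p95:.0f} GiB, max: {used[-1]:.0f} GiB")

Sizing to the average under-provisions the Sunday-to-Tuesday cube runs; sizing closer to the 95th percentile or the max covers the sustained peaks described above.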
anshumaEngineeringAuthor Commented:
thank you all