Arie Lavi (Israel) asked:

SQL performance and configuration issue

Hi,
I have an HP Gen 9 server with eight 1.2 TB 10K SAS disks (local) and VMware installed on an internal SD card; all disks are currently in RAID 10.
We're having issues with our SQL Server performance, and from what I've read and understood we need to separate the DB files from the log files. The problem is that even if I create a new drive, it will still be on the same disk array, so it's the same as putting them on the same drive.
My question is this:
Should I break the RAID and create two new RAID 10 arrays of four drives each, then put the DB files with the OS on datastore 1 and create a new drive for the logs/tempdb on datastore 2?
Or do you have any other suggestions?
I can't replace the drives with SSDs because the price is too high at the moment.
Any help will be appreciated.

Thanks in advance
Member_2_231077 replied:

There is a reasonably cheap option: add the 724864-B21 2 x SFF drive kit in the back, but you still need a couple of disks for it, and an additional RAID controller would knock the price up too much. There's also a 2 x M.2 SATA PCIe card that may help.

Need your exact spec to confirm, since you need card slots etc. Presumably you have a proper RAID controller with the current 8 disks on it, so there is a free B140i fakeraid onboard to connect the new disks to. Fake/software RAID is fine for a RAID 1 holding the transaction logs.
Bitsqueezer replied:
Hi,

I think this is a typo - I don't believe you're using an SD card; it's probably an SSD drive or an M.2 card or something similar.

But independent of that: yes, it is recommended to separate log files and data files onto different physical drives, simply because two drives can move their heads independently of each other.

But I would not reorganize the complete setup of the server just to find out whether that solves your performance problem. As you have a RAID system with SAS drives, you already have a fast system, since all drives can work at the same time. I would not bet that separating the logs onto a different RAID would solve your performance problems.

Even if you only separate the drive letters logically, the two logical drives would use different areas of the RAID system, so that could be a way to test the separation without big changes.
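
If you want to try that, the log file can be relocated with a plain ALTER DATABASE once the new volume exists. A minimal sketch - the database name SalesDb, the logical file name SalesDb_log and the L: volume are all placeholders for your own names:

-- Find the logical file names first (SalesDb is a placeholder).
SELECT name, physical_name, type_desc
FROM sys.master_files
WHERE database_id = DB_ID(N'SalesDb');

-- Point the log file at the new volume; the change takes effect
-- the next time the database comes online.
ALTER DATABASE SalesDb
    MODIFY FILE (NAME = SalesDb_log,
                 FILENAME = N'L:\SQLLogs\SalesDb_log.ldf');

-- Take the database offline, move the .ldf file to L:\SQLLogs\
-- in the operating system, then bring it back online.
ALTER DATABASE SalesDb SET OFFLINE WITH ROLLBACK IMMEDIATE;
-- (move the physical file here, outside SQL Server)
ALTER DATABASE SalesDb SET ONLINE;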

In general, I would first try to find out the exact reasons for the performance problems. The most common reasons are bad database table design, missing indexes, bad views/stored procedures/functions, bad usage of the temp database or memory, and many other well-known performance killers.

Of course, the Windows system where the SQL Server is running should also be monitored for a while to see if something else running on the server makes it slow or eats the resources the SQL Server needs (e.g. we had some "specialists" here who filled the disks with dump files, so the server had no disk space left for any operation and all running databases were brought to their knees).

SQL Server Profiler is also a tool that can help find bottlenecks. You should also test whether the performance is slow only in specific database operations; those operations should then be monitored and tested. SQL Server can also show an execution plan and index creation tips - and many more such things.
So before you change the hardware design, the database should be checked in depth. If that is clean and fast, then performance also needs to be tested between frontend and backend - network speed, network bottlenecks, firewalls, but also frontend application servers such as webservers, and the users' desktop computers.
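
Before touching the hardware, the server itself can tell you where the time goes. As an example, a sketch of the kind of standard DMV query I mean - it lists the statements that caused the most physical reads since the last restart (nothing here is specific to your database):

-- Top 10 statements by cumulative physical reads since the last restart.
SELECT TOP (10)
       qs.total_physical_reads,
       qs.execution_count,
       qs.total_elapsed_time / 1000 AS total_elapsed_ms,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                       WHEN -1 THEN DATALENGTH(st.text)
                       ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_physical_reads DESC;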

There are so many possible reasons for performance problems; these were only some of the most important ones. But before changing anything at the hardware level, the first step is always to find the exact reason. You can sell your old car and buy a Lamborghini and think you have a much better car - but if you still move it by pushing it manually, it will not be faster than your old one. You need to find out how to unleash the power you already have.

Cheers,

Christian
It probably is an SD card; VMware fits nicely on them (although it's irrelevant to performance, since using local disks implies just one VM, so it would be the same as having Windows natively on the 8-disk RAID 10 anyway). Some people throw in a VMware hypervisor on an SD card just because it only costs $20 and makes moving to a different platform so easy.

I would agree that performance tuning is better than just throwing a couple more disks at it for the logs or tempdb, but that takes hours even for an expert, whereas the 2 x M.2 SATA card takes about 10 minutes for a child to fit.
Is disk the bottleneck?

(You could change your current RAID config, but that would then give less performance and fewer IOPS per RAID array datastore than what you have currently: fewer disks per RAID array means fewer IOPS.)
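
One way to answer that from inside SQL Server is the standard sys.dm_io_virtual_file_stats DMV, which shows the average latency per database file. A sketch - the numbers are cumulative since the last restart, and anything above roughly 20 ms for data files or 5 ms for log files is commonly treated as slow:

-- Average read/write latency per database file, in milliseconds.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       vfs.num_of_writes,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id
ORDER BY avg_read_ms DESC;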

Have a read of how to architect a SQL DB on VMware vSphere:

Architecting Microsoft SQL Server on VMware vSphere® - Best Practices Guide
Arie Lavi (Asker) replied:
I did read the Best Practices Guide, and after reviewing the server activity for several days it seems CPU and memory are not the issue, but I can see high latency on disk reads. I'm sure our software department needs to fix some table and indexing issues, but from the server side I can only see high latencies coming from the local storage.
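
One way to cross-check that from the SQL side is the wait statistics: if PAGEIOLATCH_* waits dominate, reads from the data files are the problem; if WRITELOG dominates, it's the transaction log - which is exactly the case where separating the log storage should pay off. A minimal sketch against the standard sys.dm_os_wait_stats DMV:

-- Which I/O-related wait types dominate since the last restart?
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE N'PAGEIOLATCH%'
   OR wait_type IN (N'WRITELOG', N'IO_COMPLETION', N'ASYNC_IO_COMPLETION')
ORDER BY wait_time_ms DESC;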
What IOPS has the software department stated the DB needs?

As Andy has stated above, you may need to start using flash/SSD-based storage, not spinning rust!
Hi,
First of all, believe me, I hate this spinning rust too.
Second, the problem is that the software department doesn't really know what is causing the slowness. They just say that "the server is slow, and we've checked with SSMS and the queries are running very, very slow, which has an impact on the users too" - and they throw the problem at the IT department.
The third problem is my budget: management won't give me any budget until I can show them proof that upgrading the server hardware will solve the problem. (The U.S. dollar exchange rate is pretty high in my country.)
If I build a small SAN with RAID-1 SSDs in it and connect it to the ESXi host via iSCSI with two network cards (because I have only one gigabit network card in the ESXi host), just as a test, would I be able to see any performance change? If so, I could go to management and get a budget for upgrading the hardware.
ASKER CERTIFIED SOLUTION
Andrew Hancock (VMware vExpert PRO / EE Fellow/British Beekeeper), United Kingdom
(The accepted solution is only available to Experts Exchange members.)
I'll take your advice and set up a test environment on a bare-metal server with one RAID 1 for the system and another RAID 1 for the tempdb/logs, and see what happens. Thank you!
Thank you very much for your help!
Why would you use a SAN when you can mount a couple of cheap M.2 SATA SSDs on a PCIe card in the server?