App running slow in virtualized clustered server

Juansy asked:

I'm in the process of moving from SBS 2008 to a virtualized, clustered 2012 R2 network. The cluster is up with no errors. I have a virtualized 2012 R2 server whose sole purpose is to run this simple app (basis/Vpro database).

The problem I have is that the app runs dog slow. It runs fast from a virtualized desktop on the same virtual switch, but from anything outside that virtual switch it's really slow. Accessing the server from outside that virtual switch isn't slow in general; it's just slow running the app. I was hoping for some suggestions to help troubleshoot this. Thanks.
sweetfa2 replied:

Three main things will cause performance issues:

CPU allocation
Memory allocation
I/O allocation

What your application does and how your virtual cluster's resources are allocated can both affect things.

Can you get performance metrics from the virtual server running the app and see, firstly, whether the machine itself appears to be labouring?

If it is labouring, address those issues. If it is not, then investigate how the resources are allocated in your virtual environment.

You should be able to get some diagnostics from your environment about what is using it.  I am not personally familiar with the 2012 R2 setup so cannot speak with certainty.
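If it helps, one quick way to sample those areas (plus the network) on the virtual server is with PowerShell's Get-Counter; this is just a minimal sketch assuming the guest is Windows Server 2012 R2 with the standard built-in counters:

    # Sample CPU, memory, disk queue and network throughput every 5 seconds for one minute
    $counters = '\Processor(_Total)\% Processor Time',
                '\Memory\Available MBytes',
                '\PhysicalDisk(_Total)\Avg. Disk Queue Length',
                '\Network Interface(*)\Bytes Total/sec'

    Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 12

Sustained high processor time, low available memory or a long disk queue would suggest the VM itself is labouring; otherwise the host or network side is the more likely culprit.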
It seems like a network problem between the Hyper-V host and the physical switch.

There could be several reasons for that. Start by doing a simple large-file-copy test between the virtual server and a physical client and between the Hyper-V host and the same client (you will have to allow the management traffic on the virtual switch).

You could try disabling/enabling IP and TCP offload on the physical Hyper-V host adapter and/or the virtual one.
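As a rough sketch, one way to toggle the IP/TCP checksum offloads (and large send offload, which is often worth testing too) from an elevated PowerShell prompt on the host is shown below; "vSwitch-NIC" is a placeholder for whatever the physical adapter bound to the virtual switch is called, and changing these settings briefly resets the adapter:

    # Show current offload settings for the adapter behind the virtual switch
    Get-NetAdapterChecksumOffload -Name "vSwitch-NIC"
    Get-NetAdapterLso -Name "vSwitch-NIC"

    # Disable large-send and checksum offload temporarily as a test
    Disable-NetAdapterLso -Name "vSwitch-NIC"
    Disable-NetAdapterChecksumOffload -Name "vSwitch-NIC"

    # Re-enable them afterwards if it makes no difference
    Enable-NetAdapterLso -Name "vSwitch-NIC"
    Enable-NetAdapterChecksumOffload -Name "vSwitch-NIC"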
Steve replied:
What's your network design in relation to the Hyper-V cluster (number of NICs, teaming method, etc.), and how is the client connected with respect to the Hyper-V cluster (i.e. on the same switch, a different switch, etc.)?
Juansy (ASKER):

I took Svet's suggestion and did a large file copy. I took a 1.4 GB ISO image, copied it to the old server, then copied the same image to the new virtual server. It was exactly twice as fast copying to the old server. So my issue isn't with the app.

The best-practices checklist I followed when creating the failover cluster said to disable TCP Chimney Offload, and I did.

My network design is three Broadcom dual-port NICs. The first NIC uses both ports for the iSCSI storage. The second NIC has both ports teamed and addressed for the network (192.168.0.0 - cluster and client traffic). The third NIC has both ports teamed for the virtual switch (10.0.200.0 - cluster only).

Jumbo frames are enabled on the iSCSI storage.

Things I've done: increased the RAM and CPU, with no change. I also created a virtualized desktop on both nodes, and here are some results:

Regular desktop to old server: 100 MB/sec
Regular desktop to new virtual server: 44 MB/sec
Virtual desktop to new virtual server on the same node: 100 MB/sec
Virtual desktop to new virtual server on a different node: 10 MB/sec

I'm open to any suggestions and appreciate everyone's help and input. Thanks.
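For reference, the configuration above can be sanity-checked from an elevated prompt on each host; these are read-only queries, so they change nothing:

    # Global TCP settings - "Chimney Offload State" should read disabled
    netsh int tcp show global

    # Physical adapters with their link speed and state
    Get-NetAdapter | Sort-Object Name | Format-Table Name, InterfaceDescription, LinkSpeed, Status

    # Native Windows teams, their modes and member ports
    Get-NetLbfoTeam | Format-List Name, TeamingMode, LoadBalancingAlgorithm, Members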
Juansy (ASKER):

I also tested the app from a virtual desktop on the same node as the virtual server, and the app worked as intended.

So network to node = slow.
Node to node = even slower.
Yes, it looks like a network-related misconfiguration to me.

Did you run the test between the Hyper-V host and a physical desktop?

Are you using a dedicated adapter for the virtual network traffic, or are you sharing it with the management traffic?

Another point to verify: Jumbo frames must be enabled on all network devices in the path.

You could also run a network analyser and look for dropped or fragmented packets. The following link could help you find out whether the MTU size is an issue: http://www.tp-link.com/CA/article/?faqid=190
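A quick way to check the MTU along a given path is a don't-fragment ping from a Windows command prompt. The addresses below are only placeholders for hosts on each subnet; 1472 is the largest ICMP payload that fits a standard 1500-byte MTU, and 8972 is the equivalent for a 9000-byte jumbo MTU:

    # Standard frames - this should succeed on every path
    ping -f -l 1472 192.168.0.x

    # Jumbo frames - this should only succeed where every device in the path allows them
    ping -f -l 8972 10.0.200.x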
Juansy (ASKER):

Physical Desktop to Host is very good.  The large file copy was very fast.

There's no management traffic on the virtual switch. One teamed network adapter (2 ports) is dedicated to the virtual switch.

Jumbo frames are enabled on all adapters, but I'm only able to successfully ping a large packet between the hosts and the storage. Anything over an MTU of 1472 is fragmented otherwise. I have jumbo enabled on all adapters and set at 9014, and the physical switch supports jumbo frames.

I will look into Network analyzers.
Jumbo frames should not be enabled on the adapters participating in the virtual switch dedicated to the client-server connections; they should be enabled on the iSCSI-dedicated adapters only. It's OK that you cannot ping with a bigger frame from the physical client.

Another possible issue is the teaming. Did you try creating a virtual switch from a single adapter? Also, which teaming did you use, Windows-Server-2012-based or vendor-driver-based? I know that Windows Server 2012 now supports teaming natively, but I don't have experience with it. On Windows Server 2008, vendor-driver-based teaming works transparently for the OS. On Windows Server 2012 R2, teaming can be done inside the virtual machine as well. So I would test different teaming scenarios to see which one works.

More about teaming in Windows Server 2012 R2:
http://technet.microsoft.com/en-us/library/hh831648.aspx
http://www.microsoft.com/en-us/download/details.aspx?id=40319
http://www.petri.co.il/create-nic-team-virtual-switch-for-converged-networks.htm
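As a sketch, checking the jumbo-frame setting everywhere and reverting it on just the virtual-switch team members could look like this. The adapter names are placeholders, the property is usually displayed as "Jumbo Packet" on Broadcom/Intel drivers, and the exact non-jumbo value ("Disabled", "1514", etc.) depends on the driver:

    # Show the Jumbo Packet setting on every adapter
    Get-NetAdapterAdvancedProperty -DisplayName "Jumbo Packet" | Format-Table Name, DisplayName, DisplayValue

    # Revert it on the virtual-switch team members only; leave the iSCSI ports at 9014
    Set-NetAdapterAdvancedProperty -Name "vSwitch-Port1","vSwitch-Port2" -DisplayName "Jumbo Packet" -DisplayValue "Disabled"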
Juansy (ASKER):

Regarding the teaming: I created a NIC team for the network and a NIC team for the vSwitch, using the NIC teaming built into Windows 2012 R2. I thought that would add another level of redundancy, and since all the NICs I ordered came as Broadcom dual ports, I figured why not.
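(For reference, a native 2012 R2 team like that can also be created from PowerShell along these lines; the team and member names below are placeholders, not the actual port names used here.)

    # In-box NIC teaming, two Broadcom ports per team
    New-NetLbfoTeam -Name "Team-LAN"     -TeamMembers "NIC2-Port1","NIC2-Port2" -TeamingMode SwitchIndependent
    New-NetLbfoTeam -Name "Team-vSwitch" -TeamMembers "NIC3-Port1","NIC3-Port2" -TeamingMode SwitchIndependent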
Yes, teaming the network adapters is always recommended for performance and redundancy. But it can be created at different levels: with the physical adapters or in the virtual machine. However, they should not be mixed, like teaming virtual adapters (virtual switches in 2012) created from teamed physical adapters.

For testing purposes, I would check connectivity with a virtual switch created from a single physical adapter.
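A rough sketch of that test using the Hyper-V PowerShell module is below; the switch, adapter and VM names are placeholders, and removing the switch disconnects the VMs until they are reconnected:

    # Remove the existing teamed switch (VMs lose network until reconnected)
    Remove-VMSwitch -Name "VMSwitch" -Force

    # Rebuild the switch on a single physical port for the test
    New-VMSwitch -Name "VMSwitch-Test" -NetAdapterName "Ethernet 5" -AllowManagementOS $false

    # Reconnect the app server's virtual NIC to the test switch
    Connect-VMNetworkAdapter -VMName "AppServer" -SwitchName "VMSwitch-Test"

Then repeat the large-file-copy test across nodes and compare.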
Juansy (ASKER):

I disabled the NIC teaming. I now have two NICs for the storage, one NIC for production, and one NIC for heartbeat, all on different subnets. It still hasn't improved performance. Thanks for your help.
ASKER CERTIFIED SOLUTION
Juansy (ASKER):

Problem solved: rolled back the drivers for the Broadcom dual-port NICs.
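For anyone hitting the same issue: one quick, read-only way to confirm which driver version each Broadcom port is actually running after a rollback (so you know which version to avoid on the next update) is:

    # List each adapter with its driver provider, version and date
    Get-NetAdapter | Sort-Object Name | Format-Table Name, InterfaceDescription, DriverProvider, DriverVersionString, DriverDate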