
  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 982
App running slow in virtualized clustered server

I'm in the process of moving from SBS 2008 to a virtualized and clustered 2012 R2 network. The cluster is up with no errors. I have a virtualized 2012 R2 server whose sole purpose is to run one simple app (basis/Vpro database).

The problem is that the app runs dog slow. It runs fast from a virtualized desktop on the same virtual switch, but from anywhere outside that virtual switch it's very slow. Accessing the server from outside that virtual switch isn't slow in general; it's just the app that is. I was hoping for some suggestions to help troubleshoot this. Thanks.
Asked by: Juansy
1 Solution
 
sweetfa2 commented:
Three main things will cause performance issues:

CPU allocation
Memory allocation
I/O allocation

What your application does, and how your virtual clusters are allocated, can both affect things.

Can you get performance metrics from the virtual server running the app, and first check whether the machine itself appears to be labouring?

If it is labouring, address those issues. If it is not, then investigate how the resources are allocated in your virtual environment.

You should be able to get some diagnostics from your environment about what is using it. I'm not personally familiar with the 2012 R2 setup, so I can't speak with certainty.
 
Svet Paperov commented:
It looks like a network problem between the Hyper-V host and the physical switch.

There could be several reasons for that. Start by doing a simple large-file-copy test between the virtual server and a physical client, and between the Hyper-V host and the same client (you will have to allow management traffic on the virtual switch).

You could try disabling/enabling IP and TCP offload on the physical Hyper-V host adapter and/or the virtual one.
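To make that file-copy test repeatable and comparable, it helps to time the copy and compute throughput rather than eyeball it. A minimal sketch, assuming the UNC paths in the comments are placeholders for shares on the machines under test:

```python
import os
import shutil
import time

def copy_throughput(src, dst):
    """Copy src to dst and return throughput in MB/s."""
    size_mb = os.path.getsize(src) / (1024 * 1024)
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    elapsed = time.perf_counter() - start
    return size_mb / elapsed

# Hypothetical usage -- compare the two paths under test:
# vm_rate   = copy_throughput(r"C:\test.iso", r"\\virtual-server\share\test.iso")
# host_rate = copy_throughput(r"C:\test.iso", r"\\hyperv-host\share\test.iso")
```

Running the same file over both paths several times averages out caching effects and gives numbers you can compare directly.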
 
Steve commented:
What's your network design with relation to the Hyper-V cluster (number of NICs, teaming method, etc.), and how is the client connected with respect to the cluster (i.e. same switch, different switch, etc.)?
 
Juansy (author) commented:
I took Svet's suggestion and did a large file copy. I copied a 1.4 GB ISO image to the old server, then copied the same image to the new virtual server. It was exactly twice as fast copying to the old server. So my issue isn't with the app.

My best-practices checklist for creating the failover cluster said to disable TCP chimney offload, and I did.

My network design is three Broadcom dual-port NICs. The first NIC uses both ports for the iSCSI storage. The second NIC has both ports teamed and addressed for the network (192.168.0.0 – cluster and client traffic). The third NIC has both ports teamed for the virtual switch (10.0.200.0 – cluster only).

Jumbo frames are enabled on the iSCSI storage.

Things I've tried: increasing the RAM and CPU, with no change. I also created a virtualized desktop on both nodes; here are some results:

Regular desktop to old server = 100 mb/sec
Regular desktop to new virtual server = 44 mb/sec
Virtual desktop to new virtual server on same node = 100 mb/sec
Virtual desktop to new virtual server on different node = 10 mb/sec

I'm open to any suggestions and appreciate everyone's help and input. Thanks.
 
Juansy (author) commented:
I also tested the app on Virtual Desktop on same node as virtual server and the app worked as intended.

So network to node = slow.
Node to node = even slower.
 
Svet Paperov commented:
Yes, it looks like a network-related misconfiguration to me.

Did you run the test between the Hyper-V host and a physical desktop?

Are you using a dedicated adapter for the virtual network traffic, or are you sharing it with the management traffic?

Another point to verify: Jumbo frames must be enabled on all network devices in the path.

You could also run a network analyser and look for dropped or fragmented packets. The following link could help you determine whether the MTU size is an issue: http://www.tp-link.com/CA/article/?faqid=190
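The MTU test in that link boils down to finding the largest ping payload that gets through with the don't-fragment flag set (on a standard 1500-byte MTU that is 1472 bytes, since the IP and ICMP headers add 28). A sketch of that search as a binary search, with the actual probe (e.g. shelling out to `ping -f -l <size> <host>` on Windows) left as a pluggable function rather than implemented here:

```python
def max_unfragmented_payload(probe, low=0, high=9000):
    """Binary-search the largest payload size for which probe(size) succeeds.

    `probe` should send one don't-fragment ping with the given payload
    size and return True if it got through unfragmented. How the probe is
    implemented (ping, raw sockets, etc.) is up to the caller.
    """
    best = 0
    while low <= high:
        mid = (low + high) // 2
        if probe(mid):
            best = mid      # this size fits; try larger
            low = mid + 1
        else:
            high = mid - 1  # too big; try smaller
    return best

# Path MTU = largest successful payload + 28 bytes (20 IP + 8 ICMP headers)
```

This is much faster than stepping one byte at a time, and the result plus 28 gives the effective path MTU between the two endpoints.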
 
Juansy (author) commented:
Physical Desktop to Host is very good.  The large file copy was very fast.

There's no management traffic on the virtual switch. One teamed network adapter (2 ports) is dedicated to the virtual switch.

Jumbo frames are enabled on all adapters and set at 9014, but I'm only able to successfully ping a large packet between the hosts and the storage. Anything over a 1472-byte payload is fragmented otherwise. The physical switch supports jumbo frames.
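That 1472-byte limit is exactly what a standard 1500-byte MTU on the path would produce, which suggests jumbo frames aren't active end to end there. The arithmetic, spelled out (assuming the common convention that the adapter's 9014 "jumbo packet" setting includes 14 bytes of Ethernet framing):

```python
IP_ICMP_HEADERS = 28   # 20-byte IP header + 8-byte ICMP header
ETHERNET_FRAMING = 14  # Ethernet header included in the 9014 adapter setting

# The largest payload that pinged successfully implies the path MTU:
path_mtu = 1472 + IP_ICMP_HEADERS            # 1500: standard, not jumbo

# The adapters' 9014 setting corresponds to a 9000-byte IP MTU:
jumbo_mtu = 9014 - ETHERNET_FRAMING          # 9000

# On a fully working jumbo path, pings up to this payload should succeed:
max_jumbo_payload = jumbo_mtu - IP_ICMP_HEADERS   # 8972
```

So seeing 1472 between hosts and storage work, but nothing larger elsewhere, pinpoints which hops in the path are not passing jumbo frames.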

I will look into Network analyzers.
 
Svet Paperov commented:
Jumbo frames should not be enabled on the adapters participating in the virtual switch dedicated to the client-server connections; they should be enabled on the iSCSI-dedicated adapters only. It's OK that you cannot ping with a bigger frame from the physical client.

Another possible issue is the teaming. Did you try creating a virtual switch from a single adapter? Also, which teaming did you use: Windows-Server-2012-based or vendor-driver-based? I know Windows Server 2012 now supports teaming natively, but I don't have experience with it. On Windows Server 2008, vendor-driver-based teaming works transparently to the OS. On Windows Server 2012 R2, teaming can also be done inside the virtual machine. So I would test different teaming scenarios to see which one works.

More about teaming in Windows Server 2012 R2:
http://technet.microsoft.com/en-us/library/hh831648.aspx
http://www.microsoft.com/en-us/download/details.aspx?id=40319
http://www.petri.co.il/create-nic-team-virtual-switch-for-converged-networks.htm
 
Juansy (author) commented:
Regarding the teaming: I created a NIC team for the network and a NIC team for the vSwitch, using the NIC teaming in Windows 2012 R2. I thought that would add another level of redundancy, and since all the NICs I ordered came as Broadcom dual-ports, I figured why not.
 
Svet Paperov commented:
Yes, teaming the network adapters is always recommended for performance and redundancy. But it can be created at different levels: with the physical adapters, or inside the virtual machine. However, the two should not be mixed, e.g. teaming virtual adapters (virtual switches in 2012) created from teamed physical adapters.

For testing purposes, I would check connectivity with a virtual switch created from a single physical adapter.
 
Juansy (author) commented:
I disabled the NIC teaming. I now have two NICs for the storage, one NIC for production, and one NIC for heartbeat, all on different subnets. It still hasn't improved performance. Thanks for your help.
 
Juansy (author) commented:
After much frustration I decided to roll back the drivers for the Broadcom NICs. I downloaded the January 2013 driver and the problem is solved. So the current Broadcom NIC drivers cause a slow network on a 2012 R2 Hyper-V cluster, and nobody at Dell or Broadcom will admit it.
 
Juansy (author) commented:
Problem solved: rolled back the drivers for the Broadcom dual-port NICs.
