Solved

App running slow in virtualized clustered server

Posted on 2014-01-08
13
965 Views
Last Modified: 2014-01-19
I'm in the process of moving from SBS 2008 to a virtualized, clustered 2012 R2 network. The cluster is up with no errors. I have a virtualized 2012 R2 server whose sole purpose is to run one simple app (basis/Vpro database).

The problem is that the app runs dog slow. It runs fast from a virtualized desktop on the same virtual switch, but from anything outside that virtual switch it's really slow. It's not slow accessing the server from outside that virtual switch in general; it's just slow running the app. I was hoping for some suggestions to help troubleshoot this. Thanks.
Question by:Juansy
13 Comments
 
LVL 17

Expert Comment

by:sweetfa2
ID: 39767267
Three main things will cause performance issues:

CPU allocation
Memory Allocation
IO Allocation

What your application does and how your virtual cluster's resources are allocated can both affect things.

Can you get performance metrics from the virtual server running the app and see, first, whether the machine itself appears to be labouring?

If it is labouring, address those issues. If it is not, then investigate how the resources are allocated in your virtual environment.

You should be able to get some diagnostics from your environment about what is using it. I am not personally familiar with the 2012 R2 setup, so I cannot speak with certainty.
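If it helps, a quick way to sample those three areas from inside the guest is PowerShell's Get-Counter; this is only a rough sketch, and the counter paths assume an English-locale install:

# Sample CPU, memory and disk pressure every 5 seconds, 12 times
Get-Counter -Counter "\Processor(_Total)\% Processor Time","\Memory\Available MBytes","\PhysicalDisk(_Total)\Avg. Disk Queue Length" -SampleInterval 5 -MaxSamples 12

Sustained high CPU, very little available memory, or a disk queue that stays high would point at the VM's own resources rather than the network.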
 
LVL 20

Expert Comment

by:Svet Paperov
ID: 39769506
It sounds like a network problem between the Hyper-V host and the physical switch.

There could be several reasons for that. Start by doing a simple large-file-copy test between the virtual server and a physical client and between the Hyper-V host and the same client (you will have to allow the management traffic on the virtual switch).

You could try disabling/enabling IP and TCP offload on the physical Hyper-V host adapter and/or the virtual one.
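For example, something along these lines on the host (the adapter name is just a placeholder for whichever physical NIC backs the virtual switch; re-test the copy after each change):

# Disable TCP chimney offload globally on the host
netsh int tcp set global chimney=disabled

# Review the offload-related settings on the physical adapter behind the virtual switch
Get-NetAdapterAdvancedProperty -Name "vSwitch-NIC" | Where-Object { $_.DisplayName -like "*Offload*" }

# Turn off checksum and large-send offload on that adapter to test
Disable-NetAdapterChecksumOffload -Name "vSwitch-NIC"
Disable-NetAdapterLso -Name "vSwitch-NIC"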
 
LVL 27

Expert Comment

by:Steve
ID: 39770494
What's your network design in relation to the Hyper-V cluster (number of NICs, teaming method, etc.), and how is the client connected with respect to the Hyper-V cluster (i.e. on the same switch, a different switch, etc.)?

 

Author Comment

by:Juansy
ID: 39774761
I took Svet's suggestion and did a large file copy. I took a 1.4 GB ISO image, copied it to the old server, and then copied it to the new virtual server. It was exactly twice as fast copying to the old server, so my issue isn't with the app.

The best-practices checklist I followed when creating the failover cluster said to disable chimney offload, and I did.

My network design is 3 Broadcom dual-port NICs. The first NIC uses both ports for the iSCSI storage. The second NIC has both ports teamed and addressed for the network (192.168.0.0; cluster and client traffic). The third NIC has both ports teamed for the virtual switch (10.0.200.0; cluster only).

Jumbo frames are enabled on the iSCSI storage.

Things I've done: increased the RAM and CPU, with no change. I also created a virtualized desktop on both nodes, and here are some results:

Regular desktop to old server = 100 mb/sec
Regular desktop to new virtual server = 44 mb/sec
Virtual desktop to new virtual server on the same node = 100 mb/sec
Virtual desktop to new virtual server on a different node = 10 mb/sec

I'm open to any suggestions and appreciate everyone's help and input. Thanks.
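For anyone repeating these numbers, the copies can be timed consistently from PowerShell (the paths below are only placeholders, not my actual shares):

# Time a large file copy and work out the throughput
$file = "C:\Temp\test.iso"                   # placeholder: the 1.4 GB ISO used for the test
$dest = "\\NewServer\Share\test.iso"         # placeholder: UNC path to the target server
$t = Measure-Command { Copy-Item $file $dest -Force }
"{0:N1} MB/sec" -f ((Get-Item $file).Length / 1MB / $t.TotalSeconds)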
 

Author Comment

by:Juansy
ID: 39776605
I also tested the app from a virtual desktop on the same node as the virtual server, and the app worked as intended.

So network to node = slow.
Node to node = even slower.
 
LVL 20

Expert Comment

by:Svet Paperov
ID: 39776695
Yes, it looks like a network-related misconfiguration to me.

Did you run the test between the Hyper-V host and a physical desktop?

Are you using a dedicated adapter for the virtual network traffic, or are you sharing it with the management traffic?

Another point to verify: Jumbo frames must be enabled on all network devices in the path.

You could also run a network analyser and look for dropped or fragmented packets. The following link could help you find out whether the MTU size is an issue: http://www.tp-link.com/CA/article/?faqid=190
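A quick way to test the MTU along a given path is ping with the don't-fragment flag (-f) and an explicit payload size (-l); the addresses below are placeholders:

# 1472 is the largest ICMP payload that fits a standard 1500-byte MTU
ping -f -l 1472 192.168.0.10

# 8972 is the largest payload that fits a 9000-byte jumbo frame (iSCSI path only)
ping -f -l 8972 10.0.10.10

# Confirm what the adapters are actually configured for
Get-NetAdapterAdvancedProperty -RegistryKeyword "*JumboPacket"

If the first ping fragments or fails, something in that path is running with a smaller MTU than expected.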
 

Author Comment

by:Juansy
ID: 39777467
Physical Desktop to Host is very good.  The large file copy was very fast.

There's no management traffic on the virtual switch. There is one teamed network adapter (2 ports) for the virtual switch.

Jumbo frames are enabled on all adapters and set at 9014, and the physical switch supports jumbo frames, but I'm only able to successfully ping a large packet between the hosts and the storage. Anything over an MTU of 1472 is fragmented otherwise.

I will look into Network analyzers.
 
LVL 20

Expert Comment

by:Svet Paperov
ID: 39777589
Jumbo frames should not be enabled on the adapters participating in the virtual switch dedicated to the client-server connections. They should be enabled on the iSCSI-dedicated adapters only. It's OK that you cannot ping with a bigger frame from the physical client.

Another possible issue is the teaming. Did you try creating a virtual switch from a single adapter? Also, which teaming did you use, Windows-Server-2012-based or vendor-driver-based? I know that Windows Server 2012 now supports teaming natively, but I don't have experience with it. On Windows Server 2008, vendor-driver-based teaming works transparently to the OS. On Windows Server 2012 R2, teaming can also be done inside the virtual machine. So, I would test different teaming scenarios to see which one works.

More about teaming in Windows Server 2012 R2:
http://technet.microsoft.com/en-us/library/hh831648.aspx
http://www.microsoft.com/en-us/download/details.aspx?id=40319
http://www.petri.co.il/create-nic-team-virtual-switch-for-converged-networks.htm
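If you stay with the native teaming, the in-box cmdlets make it easy to see how the team is built and to recreate it for testing (team and adapter names below are placeholders):

# Show existing teams, their members and the teaming mode in use
Get-NetLbfoTeam
Get-NetLbfoTeamMember

# Example: rebuild the team behind the virtual switch as switch-independent with Hyper-V port load balancing
New-NetLbfoTeam -Name "vSwitchTeam" -TeamMembers "NIC5","NIC6" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort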
 

Author Comment

by:Juansy
ID: 39777661
Regarding the teaming: I created a NIC team for the network and a NIC team for the vSwitch, using the NIC teaming built into Windows 2012 R2. I thought that would add another level of redundancy, and since all the NICs I ordered came as Broadcom dual ports, I figured why not.
 
LVL 20

Expert Comment

by:Svet Paperov
ID: 39777716
Yes, teaming the network adapters is always recommended for performance and redundancy. But it can be created at different levels: with the physical adapters or in the virtual machine. However, they should not be mixed, like teaming virtual adapters (virtual switches in 2012) created from teamed physical adapters.

For testing purposes I would test the connectivity with a virtual switch created from a single physical adapter.
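A rough sketch of that test from PowerShell (the switch and adapter names are placeholders, and the VMs will need to be reconnected to the test switch afterwards):

# Remove the switch that sits on the team and rebuild it on a single physical port
Remove-VMSwitch -Name "TeamedSwitch" -Force
New-VMSwitch -Name "TestSwitch" -NetAdapterName "NIC5" -AllowManagementOS $false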
 

Author Comment

by:Juansy
ID: 39777851
I disabled the NIC teaming. I now have two NICs for the storage, 1 NIC for production, and 1 NIC for heartbeat, all on different subnets. It still hasn't improved performance. Thanks for your help.
 

Accepted Solution

by:
Juansy earned 0 total points
ID: 39780608
After much frustration I decided to roll back the drivers for the Broadcom NICs. I downloaded the January 2013 driver and the problem was solved. So the current Broadcom NIC drivers cause a slow network on a 2012 R2 Hyper-V cluster, and there's nobody at Dell or Broadcom who will admit it.
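For anyone who hits the same thing, the driver version and date in use are easy to confirm before and after the rollback:

# Check which driver each Broadcom adapter is actually running
Get-NetAdapter | Select-Object Name, InterfaceDescription, DriverVersion, DriverDate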
 

Author Closing Comment

by:Juansy
ID: 39791950
Problem solved: rolled back the drivers for the Broadcom dual-port NICs.