Solved

Virtualization performance issues

Posted on 2013-01-02
518 Views
Last Modified: 2016-11-23
I haven't done a whole lot of Hyper-V servers, but the ones I have set up seem to be slow most of the time. If you monitor CPU time, the host and VMs aren't doing anything, but when you connect to them, typing, opening windows, and browsing the internet all seem slow. The host seems really fast until you have a couple of VMs running.

For instance, I have a Dell PowerEdge R710: Xeon X5670 (2.93 GHz), 24 GB RAM, RAID 5. Seems like it should rock. It doesn't matter how many users are logged in. The load is mostly Word docs shared from a file server, one Exchange 2010 server, an accounting software server (not much use), and a Windows 7 VM for testing. Each VM has a dedicated NIC.
Sometimes it's annoyingly slow just clicking around; sometimes it seems OK.

I have read about disabling the offloading features of the network adapters and have done that. I know I should only have the Server Core installation on the host, but I like having the interface.

Can somebody enlighten me on this subject or point me to some good articles? I have done a lot of googling but still don't feel like I'm doing everything right. It should be better than this, I hope.
0
Comment
Question by:DougPenneman
15 Comments
 
LVL 42

Expert Comment

by:kevinhsieh
ID: 38739185
Tell me more about your disks. How many disks? What type? Disk performance is usually the slowest part of any server. If you put four or five VMs on a set of disks that barely support a single server, perceived performance is going to really suck.
0
 

Author Comment

by:DougPenneman
ID: 38739340
RAID 5 consisting of 4 x 1 TB SATA drives on a PERC H700 controller, split into 2 volumes.
The host OS and VMs (VHDs) are all on the same volume, in different partitions.
0
 
LVL 8

Expert Comment

by:gsmartin
ID: 38739967
So calculating your expected IOPS for this configuration:

Each 7.2K RPM SATA drive can yield a maximum of about 100 IOPS, but averages around 75 IOPS for random read/write and a minimum throughput of approximately 65 MB/s per drive. So 4 x 75 = 300 IOPS (4 x 65 MB/s = 260 MB/s), less about 75 IOPS for RAID 5 overhead, leaves a maximum of roughly 225 IOPS (195 MB/s) shared between your virtual machines. These numbers will vary up or down depending on whether the drives are 3 Gb/s or 6 Gb/s SATA (SATA II or III) and on the RAID controller's bus speed. RAID 10 will achieve somewhat better write performance. The more drives/spindles in the array, the better the performance, and faster 15K drives, while lower in capacity, will increase performance greatly, averaging about 180 IOPS per drive. Other factors include buffer size and read/write latency, and the numbers vary by drive manufacturer as well as consumer vs. enterprise grade drives.
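A quick sketch of that arithmetic, using the nominal per-drive figures above (assumed typical values, not measurements from this server):

```python
# Back-of-the-envelope estimate for the 4-drive RAID 5 array described above.
# Assumed nominal figures: ~75 random IOPS and ~65 MB/s per 7.2K RPM SATA drive.
DRIVES = 4
IOPS_PER_DRIVE = 75        # random read/write average
MBPS_PER_DRIVE = 65        # sustained throughput, MB/s

raw_iops = DRIVES * IOPS_PER_DRIVE            # 300
raw_throughput = DRIVES * MBPS_PER_DRIVE      # 260 MB/s

# Simple approximation used above: treat RAID 5 parity overhead as the
# loss of roughly one drive's worth of IOPS and throughput.
usable_iops = raw_iops - IOPS_PER_DRIVE             # ~225 IOPS for all VMs
usable_throughput = raw_throughput - MBPS_PER_DRIVE  # ~195 MB/s

print(f"Raw:    {raw_iops} IOPS, {raw_throughput} MB/s")
print(f"Usable: ~{usable_iops} IOPS, ~{usable_throughput} MB/s (RAID 5, rough)")
```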

When doing virtualization, drive performance is one of your biggest concerns. Enterprise (MLC or SLC) SSDs on fast enterprise-class controllers can provide a very significant performance increase, in the tens to hundreds of thousands of IOPS per drive or RAID array; in some extreme cases, over a million IOPS per PCIe SLC NAND flash drive.

My ESX servers have a minimum of dual six-core CPUs and 96 to 128 GB of DRAM, using 8 Gb Fibre Channel connected to an enterprise SAN. The SAN has a minimum of 9 x 15K drives in a RAID 5 set, or 8 in RAID 10; the overall SAN comprises 144 Tier 1 and Tier 3 drives (15K FC vs. 7.2K RPM SATA). My newer shared SAN is 6 Gb/s SAS and 8 Gb/s FC, with Tier 1: SLC SSDs at 50K IOPS per drive, Tier 2: 15K RPM, and Tier 3: 7.2K RPM, and over 96 drives virtualized across the shared SAN. Note this isn't much in comparison to most enterprise-class environments.

The bottom line is that a number of variables play into virtualization performance, so you need to research and get a good understanding of all of the key areas (memory, CPU, disk IO, network, controllers, etc.) and how to appropriately right-size and architect your virtualization environment.

I hope this helps.
0
 
LVL 8

Expert Comment

by:gsmartin
ID: 38739978
Correction: it's the PCIe NAND flash RAID array that delivers over a million IOPS, not a single NAND flash drive.
0
 

Author Comment

by:DougPenneman
ID: 38740033
It helps, but what I really need to know is what should be built for a small company of about 35 users that needs file sharing (100 GB), an Exchange server, and a Xactimate (SQL-based) server with remote access. It seems like my configuration should be fine; not that demanding on IOPS, I would expect.

Some of my questions are:
Should I use a mirrored pair of SSDs for the host?
Do I need to use physical disks in RAID 1 for each VM instead of VHDs on the host volume?
If each VM has a dedicated NIC, are there some configuration settings that should be changed such as disabling the offloading and chimney?
Should I even consider virtualization in the first place?
Am I in over my head?
0
 
LVL 8

Assisted Solution

by:gsmartin
gsmartin earned 100 total points
ID: 38740159
Personally, for 35 users I would not use virtualization. You lose performance when you virtualize vs. running on dedicated hardware. The benefit of virtualization is to reduce your physical server footprint, which can actually end up more expensive depending on how you size it. Your configuration is not optimal for virtualization, even for a small business.

You won't really understand your IOPS requirements without doing some benchmarks first. You can use IOmeter to see what drive performance you are getting.

Personally, I would go with faster drives (at least 6 to 8, non-SATA) in a RAID 10 configuration plus a hot spare. Your Hyper-V operating system should be on separate drives in a RAID 1 configuration (SSDs not required). If you use SSDs, I would recommend them only for the shared virtualization drive space, preferably in a RAID 10 configuration; that would provide a significant performance increase. Also, have at least two CPUs with dedicated memory for each socket.
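A rough sketch of why that helps, using the common write-penalty model (penalty of 4 for RAID 5 writes, 2 for RAID 10) and the same nominal per-drive figures from earlier in the thread; the 70/30 read/write mix is an assumed workload, not a measurement:

```python
# Effective host-visible IOPS for a RAID set and read/write mix, using the
# standard write-penalty model (an approximation, not a benchmark):
#   backend ops = reads + writes * penalty
def effective_iops(drives, iops_per_drive, write_penalty, read_fraction):
    backend = drives * iops_per_drive
    write_fraction = 1.0 - read_fraction
    return backend / (read_fraction + write_fraction * write_penalty)

MIX = 0.7  # assume 70% reads / 30% writes

current     = effective_iops(drives=4, iops_per_drive=75,  write_penalty=4, read_fraction=MIX)
raid10_sata = effective_iops(drives=8, iops_per_drive=75,  write_penalty=2, read_fraction=MIX)
raid10_15k  = effective_iops(drives=8, iops_per_drive=180, write_penalty=2, read_fraction=MIX)

print(f"4 x 7.2K SATA, RAID 5 : ~{current:.0f} IOPS")
print(f"8 x 7.2K SATA, RAID 10: ~{raid10_sata:.0f} IOPS")
print(f"8 x 15K,       RAID 10: ~{raid10_15k:.0f} IOPS")
```

Under those assumptions the current 4-drive RAID 5 set lands around 160 effective IOPS, while 8 drives in RAID 10 roughly triples that, and 8 x 15K drives in RAID 10 is several times better again.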

Note that with VMware ESX the difference in performance between RDMs and VMDKs is supposed to be minimal; with Hyper-V I am not sure. Personally, I prefer RDMs (Raw Device Mappings) for larger or database-type LUNs/partitions, and VMDKs for smaller OS drives, such as virtual machines' C: drives shared on a single 9-disk RAID 5 LUN. SAN configuration: no more than 20 virtual machines (VMDKs) per 500 GB LUN. This will obviously translate a little differently for your environment since you are using Hyper-V, which doesn't yield the same performance as ESXi.
0
 
LVL 42

Accepted Solution

by:
kevinhsieh earned 100 total points
ID: 38740317
I would virtualize most any workload for basically any size of organization. The real issue is: how is performance from the end-user perspective? Your users aren't interactively logged into the VMs, so a slight lag in the UI isn't a problem. What you have done is take a disk environment that is slower than what would have been standard 10 years ago (10K SCSI vs. 7.2K SATA) and then pile several more servers onto those same disks.

Your solution is to add more/faster disks, which is the same thing you would do if you ran multiple physical servers with multiple drives each. If you ran 8 drives in RAID 10, your performance would be a lot better. I would NOT generally use separate drives for the parent partition, because those drives would be mostly idle and their IOPS would be unavailable to the VMs.
0
 

Author Comment

by:DougPenneman
ID: 38740635
I will upgrade the hard drive array to the best the box will hold. Maybe if the budget allows I will add a second CPU and RAM.
0
 
LVL 42

Expert Comment

by:kevinhsieh
ID: 38740927
It is highly unlikely that you need more CPU. I run servers with up to 144 GB of RAM and older Xeon processors and my total CPU is less than 15% with 20+ VMs supporting over 600 users. You can certainly add RAM if you feel that your VMs would benefit from more RAM or if you plan on adding more VMs.
0
 

Author Comment

by:DougPenneman
ID: 38741106
Thank you.
As I first stated, it seems like the CPU stays at 0% most of the time, so how can it be slow?

So I'm still left to wonder why, if the servers aren't doing much, the disk IO can be so bad that it slows everything down. Seems like it should be better than that to me.

Does Server 2012 Hyper-V perform better?
0
 
LVL 42

Expert Comment

by:kevinhsieh
ID: 38742019
A system can have the disk at 100% and the CPU running at 1%, and the system will seem like a dog for anything that requires disk IO, such as clicking on the Start button. Your VMs will also generate a lot more write IO than you expect - I think it's partly Windows keeping everything up to date in terms of system logs, event logs, NTFS metadata updates, etc. You can avoid read IO by adding RAM and caching the data, but every write IO goes to disk. Remember, your server with 4 disks in RAID 5 has maybe about 2.5 times the IOPS of your desktop if you are using SATA, but you're running a lot more on the server.
0
 
LVL 78

Expert Comment

by:David Johnson, CD, MVP
ID: 38742038
If the bottleneck is the IOPS, then changing the CPU, adding memory, or changing the OS will result in minimal gains. Put a Volkswagen Beetle engine in a Ferrari and it will still only go as fast as it would in a Camaro.
0
 

Author Comment

by:DougPenneman
ID: 38742061
I've been playing with this today. I have 3 VMs. One is actually in production and is the one I am concerned with. It runs great if it is the only one started. The other two are fresh Server 2008 R2 installs with nothing else on them. If I start them up, the clients and I notice a big slowdown. I just expected more out of it, I guess. I appreciate all your help on this.
0
 

Author Comment

by:DougPenneman
ID: 38744318
What do you guys use to actually watch the disk IO as it's happening, to see when the bottlenecks occur and what causes them?
0
 
LVL 8

Expert Comment

by:gsmartin
ID: 38744466
ManageEngine.com has a good variety of free tools for small network environments, including OpManager. On your Windows server, use Windows Performance Monitor (aka PerfMon). You can also pull up Task Manager --> Performance tab --> click the Resource Monitor button at the bottom of the window, which opens a view with more in-depth details on memory, CPU, network, and disk IO usage.

Also, you can download and install IOmeter to benchmark your storage performance.
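If you'd rather script the sampling than watch PerfMon, here is a minimal sketch using Python's psutil package (my suggestion, not one of the tools named above); the 5-second interval and output format are arbitrary choices:

```python
# Minimal disk IO sampler using psutil (pip install psutil).
# Prints reads/sec, writes/sec, and MB/s per physical disk every few seconds,
# which is enough to spot when the spindles are saturated.
import time
import psutil

INTERVAL = 5  # seconds between samples (arbitrary)

prev = psutil.disk_io_counters(perdisk=True)
while True:
    time.sleep(INTERVAL)
    curr = psutil.disk_io_counters(perdisk=True)
    for disk, now in curr.items():
        before = prev[disk]
        reads_s = (now.read_count - before.read_count) / INTERVAL
        writes_s = (now.write_count - before.write_count) / INTERVAL
        mb_s = (now.read_bytes + now.write_bytes
                - before.read_bytes - before.write_bytes) / INTERVAL / 2**20
        print(f"{disk}: {reads_s:.0f} r/s, {writes_s:.0f} w/s, {mb_s:.1f} MB/s")
    prev = curr
```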
0
