Solved

Terminal Servers - physical versus virtual - follow up on: http://www.experts-exchange.com/Software/Virtualization/Q_27980333.html

Posted on 2012-12-30
Last Modified: 2013-01-07
Following up on http://www.experts-exchange.com/Software/Virtualization/Q_27980333.html, I would like to know what the differences are between physical and virtual TSs in a clustered farm in the following specific circumstances:

physical TS server freezes (freeze is distinguished from drastic slowdown in the following questions)
vs Core host of TS VM freezes
vs TS session freezes

physical server reboots
vs Core host of TS VM's reboots
vs TS VM itself reboots

What difference would the user see in these circumstances?  

Would a physical TS freeze mean users would be redirected to another node in the cluster? Or would it just be a stuck session? If it was redirected, how long would it take?

Would a freeze on a Core host running VMs still allow TS access even though the host itself is not accessible? It seems I've seen that situation.

In the case of a reboot of a physical TS, does it make a difference how long the reboot takes before the physical TS server is failed over?  

In the case of a virtualized TS, if the Core host reboots, does it make a difference how long the reboot takes before the VM is failed over?

It also seems that drastic performance degradation, as opposed to outright hardware failure, on a TS - physical or virtual - will not cause failover. Is that correct?

Also, in the case of a constantly rebooting host - physical or Core host - would there not be problems with users being connected to the rebooting server during its uptime window?

In the case of the host not rebooting but the TS VM rebooting constantly it seems one could quickly/simply bring the TS VM down and nobody would be able to log into it.  Agree/disagree?
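Purely to illustrate that "bring it down so nobody can log in" idea - a minimal sketch, assuming a Hyper-V host with the PowerShell Hyper-V module (Server 2012 era) and the built-in change logon tool; the VM name is hypothetical:

    # Option 1: inside the misbehaving TS VM itself, block any new logons
    # (existing sessions are left alone)
    change logon /disable

    # Option 2: from the Hyper-V host, simply power the problem VM off
    Import-Module Hyper-V
    Stop-VM -Name "TS-VM-02" -TurnOff    # hard power-off; "TS-VM-02" is a hypothetical VM name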

And it would then be quite possible to simply add a dupe of the VM to the server - in fact, it could fail over to another hosted VM on the same server, so there would be fewer performance issues caused by more people being redirected to and logging into one of the other non-rebooting physical TS servers in the cluster. Make sense? Any problems I haven't given enough consideration to with the failover-on-the-same-server VM scenario?

Also, from my experience, drastic slowness/freezes are more common than physical failure in TS environments. Is that the sense of the responding Experts here as well?

Finally, it seems to me that most of the time Dev and Test 'networks' that mimic the production network are pretty much de facto virtualized, as there is a lot of 'playing around' going on - there always seem to be new versions of programs that have to be deployed. It seems then that if one has a physical TS farm, there is a lot of converting virtual to physical to roll out a new 'version', and backing up the current production to become the new Test/Dev involves a physical-to-virtual conversion, which makes rollback to a physical network more complicated. Again, feel free to debate - I am exploring and want to make sure that I am not simply agreeing with myself in my design choices.
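On the "everything virtual" side of that argument, seeding a new Test/Dev copy from production becomes an export/import rather than a P2V conversion. A minimal sketch, assuming Hyper-V on Server 2012 with its PowerShell module; the VM name and paths are purely illustrative:

    Import-Module Hyper-V

    # Export a copy of the production TS VM to use as the new Test/Dev image
    Export-VM -Name "TS-PROD-01" -Path "D:\Exports"

    # On the test host, import it as an independent copy with a new VM ID;
    # the path points at the exported VM configuration (.xml) file
    $config = "D:\Exports\TS-PROD-01\Virtual Machines\<exported-config>.xml"   # placeholder path
    Import-VM -Path $config -Copy -GenerateNewId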

Thanks in advance for all time put into answering the above.
Question by:lineonecorp
Accepted Solution by: Andrew Hancock (VMware vExpert / EE MVE), earned 500 total points
If a physical TS server freezes, user sessions would become disconnected after a period; they would not be redirected to a new server, all unsaved data would be lost, and read/write locks would still be held on the Word documents.

When reconnecting to the TS servers (new servers), profiles would not be updated, and users would have issues opening documents.

The above is true for both virtual and physical TS servers.

Sometimes with a frozen TS server, connections are still attempted against the frozen server, which prevents any further connections from being made until the frozen TS server is fixed.

A host can freeze and the VMs will continue to operate normally.

If a physical host fails, there would be a small amount of time before cluster failover occurs; during this window, new clients would not be able to connect. This is the same for both physical and virtual (although virtual machines start up quicker than physical ones!)
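If you want to see (or tune) how long that failover window is, the relevant knobs live on the cluster and on each clustered role. A minimal PowerShell sketch, assuming the FailoverClusters module (Server 2008 R2 or later); the role name is hypothetical:

    Import-Module FailoverClusters

    # Heartbeat settings that decide how quickly a frozen/failed node is declared down
    # (delay in ms x threshold = roughly the detection window)
    Get-Cluster | Format-List SameSubnetDelay, SameSubnetThreshold

    # Per-role failover policy: how many failovers are allowed within FailoverPeriod (hours)
    Get-ClusterGroup "TS-VM-01" | Format-List FailoverThreshold, FailoverPeriod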


Correct, performance degradation will not cause a failover; you need good load balancing.
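Because clustering won't react to a merely slow server, the usual manual remedy is to drain it out of the farm yourself. A minimal sketch using the built-in change logon tool (the /drain switch assumes Server 2008 R2 or later):

    # On the degraded TS / RD Session Host: refuse new logons but let existing sessions continue
    change logon /drain          # or /drainuntilrestart; "change logon /query" shows the current state

    # After the server has been fixed (or rebooted), allow logons again
    change logon /enable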

If failover clustering is configured correctly, a misbehaving server, physical or Core, should not cause issues with the farm.

If you have servers configured ready to add to your farm, you can add them on demand.
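On a Server 2012 RDS deployment, that "add on demand" step can be scripted with the RemoteDesktop module (on 2008/2008 R2 the equivalent is joining the server to the TS Session Broker farm). A minimal sketch; the collection, host, and broker names are hypothetical:

    Import-Module RemoteDesktop

    # Add a pre-built, standby session host to an existing session collection
    Add-RDSessionHost -CollectionName "TSFarm" `
                      -SessionHost "TS05.example.local" `
                      -ConnectionBroker "BROKER01.example.local"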

Yes, slowness is much more common in virtual environments. You will observe fewer concurrent connections in a VM environment compared to a physical environment.

Most of our clients do not do virtual TS or Citrix servers for this very reason.

All our clients have spare physical TS/Citrix or virtual servers for testing new applications.

Some have two to six packaging (dev) servers and two to five test servers; these test servers are added to the farm for users to test new applications and fixes (for up to four weeks), and when the users sign off on the change it is deployed to the rest of the large farm.

Client farms vary in size from 50 to 250, and some have over 426 published applications.
Author Comment by: lineonecorp
Thanks for all the points.

I guess after all the back and forth, virtual wins out for me - having an OS tied in any way to the hardware it's running on seems a step backward; nothing about the superior 'performance' that might come with 'physical' comes anywhere near the operational flexibility of being hardware independent. Here is another way I would put the fundamental question.

If for some reason you could only have physical or virtualized systems, but never a mixed system, which would you choose?

Also, if there were no cost or performance difference between having a system physical or having it VM'd on itself - in other words, I could either install a 2008 TS physically, or I could use the same box and install a single 2008 TS VM on it with no performance loss compared to the physical host - what would be your choice then? So in a scenario of 200 servers, let's say you could choose to have the 200 servers physical versus having the 200 virtualized on themselves, so to speak, at no extra cost: which would you choose?

I understand that these are 'unreal, hypothetical' scenarios, but I find that when I ask myself these questions without consideration of cost/performance, I always choose the virtual option, since in the real world I actually always have the option to do that - we are not talking about a theoretical 'future' technology when it comes to virtualization. So what it comes down to for me is that virtual is 'superior', and putting a value on that superiority is what's in question.

For instance, let's say I need a lot of performance for a specific high-powered application that needs a standalone, very high-end computer to run it. My view is that if the app is that critical and the hardware needed to run it has to be that powerful, any cost to virtualize that application on the hardware - and if necessary even to boost the hardware to accommodate it - is immediately paid back in today's cheap-hardware world by the benefit of hardware independence. The idea of running any application tied to specific hardware at the mission-critical level seems to me a lawsuit waiting to happen - I would never live with it for my own mission-critical system, and having the redundancy option consist of multiple physically dependent systems is not nearly as good as the redundancy of multiple physically independent systems - it's just the single physically-dependent-system flaw taken to a different level.

I guess I could sum it up as such - you boot up a computer and you get a screen:

Would you like the software on this computer to be

A) tied to the hardware on this computer or

B) be reusable on any other computer



I guess I would have to have somebody do a lot of arguing with me why I would choose A.
 

Feel free to take an opposing view or something in-between.