

{Editor's Note: In our quest to provide great original content, we are adding research notes to our coverage of cloud computing. This is the beginning of a series of posts from Glenda Sue Canfield, aka Citrixgurl, who will be providing her analysis of technology trends. Glenda is also posting more in-depth technical pieces on the DABCC site founded by Doug Brown. Doug's DABCC is a great community of experts who lead the charge in technical analysis. Please welcome Glenda to the SiliconANGLE group.}
For the past five years there has been a huge amount of attention and debate around hypervisors in general. Originally, hypervisors were being leveraged for server hardware consolidation, high availability/fail-over and disaster recovery. The latest use case associated with hypervisors, both VMware's and Citrix XenServer's, is VDI, or Virtual Desktops as Citrix would prefer the solution to be called (after all, VDI makes you think of VMware).
VMware's ESX hypervisor is built around a service console based on a proprietary version of Red Hat Linux; ESXi replaces that console with a minimal environment based on BusyBox. Citrix's XenServer is based on Xen, the ongoing open source hypervisor project that came out of XenSource, the company Citrix acquired. Citrix added proprietary features on top, but the open source core still allows for absorption of community features that are developed at no cost to Citrix: "theoretically" they can go out to Xen's repository at http://xenbits.xensource.com on a regular basis, check whether there are any utilities, tools or features that could enhance the XenServer product line, then assimilate them (like the Borg ☺).
VDI, or Virtual Desktops, are essentially client operating systems, such as Windows XP, Vista, Windows 7, certain versions of Linux such as Ubuntu, and even Apple's Mac OS (though this is still pretty raw and needs some bugs shaken out of it), running in isolated environments that allow users to keep working without impacting other users if they encounter OS issues. It's basically a 1-to-1 scenario, the same as an end user working on a local PC that crashed; it would only impact that individual. The basic difference is that the user's Virtual Desktop(s) happen to live in a data center, sharing space and resources with any number of other individual users on a very powerful hardware and network platform. This improves performance significantly, because all the actual transactions are processed on the Virtual Desktop (and associated host hardware) that lives in the same data center as the storage and back-end database(s). If fiber is being used between the Virtual Desktop server(s) and the storage servers, performance just screams.
So that is the ten-thousand-foot view of what VDI\Virtual Desktops are. Now let's talk use cases, and later we will examine the various protocols on the market and how they interact with and enhance Virtual Desktop solutions. VDI or Virtual Desktops are not as far along in real-world usage as they should be or will be. I predict that will change somewhat once client hypervisors become more viable (not to mention accessible) and consumer market barriers can be overcome. That's not to say Virtual Desktops will ever be the ONLY solution, or even the BEST solution, for most enterprise-class environments (I am excluding any mention\reference of the consumer market potential for Virtual Desktops starting now).
Primary use cases for Virtual Desktops:
1. Centralization and control of company data and client operating systems.
2. Developers who write and compile code on various operating systems (Dev Environments).
3. QA Environments: a virtual environment that mirrors a production environment, allowing future production changes/upgrades of client OSes to be tested before being rolled into production.
4. Training Environments, typically built with specific software and burned down at the end of the class to be rebuilt with new or different training content.
5. Legacy applications that will not work or do not play well in multi-user environments like MS Terminal Servers or Citrix Environments. (Note: These are EXTREMELY rare, but you do find them in banking and medical environments once in a blue moon)
6. As an alternative to publishing out a Terminal Server or Citrix XenApp server desktop. This has never been a recommended practice, but I guarantee it is being done in 99% of production environments. It is not recommended because it is virtually impossible to lock down a multi-user environment, and because of this one user could potentially crash the entire server, impacting every other user logged into it.
7. Reducing the company's carbon footprint, in other words lowering the cost of power and cooling for the entire company by replacing PCs with thin clients. However, there is an important reality most people miss about this use case: the solution just relocates the power and cooling costs from the end user and gives them a home in the data center, thereby increasing the carbon footprint in the data center, kind of robbing Peter to pay Paul (see the back-of-the-envelope sketch below). To be fair, there are currently data centers being built specifically to use safe power generation (i.e. solar\wind, etc.) to mitigate the carbon footprint in those data centers.
Another thing to keep in mind is that hardware companies are making their products more power efficient with each new generation.
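To put some rough numbers on the "robbing Peter to pay Paul" point, here is a back-of-the-envelope sketch in Python. Every figure in it (PC and thin client wattage, host wattage, desktops per host) is an assumption I made up for illustration, not a measurement:

```python
# Back-of-the-envelope power math for use case 7.
# Every number below is an illustrative assumption, not a measurement.

USERS = 500

pc_watts = 150            # assumed draw of a traditional desktop PC

thin_client_watts = 15    # assumed draw of a thin client
host_watts = 600          # assumed draw of one VDI host in the data center
users_per_host = 40       # assumed virtual desktops carried per host

datacenter_share = host_watts / users_per_host   # 15 W per user

before = USERS * pc_watts
after_endpoint = USERS * thin_client_watts
after_datacenter = USERS * datacenter_share

print(f"All PCs:             {before / 1000:.1f} kW at the endpoints")
print(f"Thin clients:        {after_endpoint / 1000:.1f} kW at the endpoints")
print(f"Relocated to the DC: {after_datacenter / 1000:.1f} kW (plus cooling)")
```

Under these made-up numbers the total draw falls, but half of what remains (plus its cooling overhead, which the sketch ignores entirely) has simply moved into the data center.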
Now for a history lesson on the evolution of VDI:
Blade PCs (HP) have been around since 2003, which seems like forever. I believe this was the first incarnation of the VDI concept. However, it has always been viewed as a "hardware" solution, therefore not scalable, and in fact a "boutique" solution. HP has also long had its own broker solution called SAM and its own protocol called RGS. Used in conjunction they work great (in fact that is the only deployment scenario HP recommends), though HP never really understood what it had and did not spend any real "development money" on SAM, RGS or blade PCs, not even on iLO management integration. Adding iLO cards to blade PCs requires changing the form-factor design, so it was deemed an unjustified feature. They have rectified that with the Workstation Blade solution, and they also handed ownership of Workstation Blades over to the server side of the house because they finally understood its place in the universe. HP fundamentally sees itself as a hardware company, even though revenue on its software sales has in fact tripled in the last few years. They are really good at coming up with or acquiring promising software and letting it languish on the vine. Also, let's not forget their recent acquisition of EDS, essentially an application service provider or, as marketing would coin it, a "Private Cloud" solution organization.
It seems inevitable that a virtualization company such as VMware would look at this Blade PC concept and see potential, especially after conquering the server virtualization space in enterprise production environments (if you can put a server operating system on top of a hypervisor infrastructure with such success, what's to stop you from doing it with client operating systems?). Every technology must grow and create new verticals/markets in order to stay in business and continue to thrive. At this point the biggest hurdle (both for VDI and SBC) is multi-media and graphics. It is a challenge that many technology manufacturers, both hardware and software, have been trying to solve for years, though the importance and urgency have escalated over the last three years as virtualization infrastructure has been adopted at such a steady pace. This is not driven only by the onset of hypervisors and much more powerful (and very underutilized) hardware on both the server side and the client side; it's also about the telecommuting and BYOPC (Bring Your Own PC) trends that are starting to emerge.
Note: When thinking of BYOPC I no longer think of the access device (i.e. PCs, laptops, thin clients or even smart phones). I see BYOPC as having all of your personal, productivity and line-of-business applications existing in one or more secure data centers, with people able to access them from anywhere, at any time, using any access device (I guess I am seeing "clouds" everywhere now).
Applications are now starting to be developed against .NET 3.x architecture standards; these apps are typically graphically intense and require a graphics processing unit (GPU) to get a true real-time multi-media experience, on the data center side and the end point side. The true challenge is this: when people are accessing their data from anywhere using any device, how do you provide a graphics experience that feels local while it is being processed and then transmitted from two thousand miles away?
This is a highly complex problem, and it is being attacked from many different angles. Citrix has been working on it since 1988; in fact Citrix has always considered its protocol its crown jewel, which makes sense because for a VERY long time it was, given the lack of any real competition. Today it is "virtually" impossible to get this experience in any remote access scenario, whether VDI or SBC/TS/Citrix, because most servers do not have GPU cards in them. Then you add a hypervisor into the mix, and even if the host server does have a GPU card installed, it cannot be shared among the various virtual machines because of I/O (contention) issues. It would be remiss not to mention RemoteFX (being brought to us by Microsoft via their Calista acquisition), which Microsoft is working on releasing with Windows Server 2008 R2 SP1.
RemoteFX brings a major set of enhancements to the RDP protocol, all focused on multi-media. I have heard it described as "secret sauce" that will allow for GPU virtualization and will support not only Virtual Desktop solutions (i.e. Hyper-V, we are talking about Microsoft) but Microsoft Terminal Server solutions as well. It is supposed to enable users to connect to their VM or Terminal Server session via a "LAN" and get the Windows Aero desktop experience, full-motion video, Silverlight animations and 3D applications with a real-time feel. It is also going to support OpenGL. So it seems all but certain that the other hypervisor vendors (Citrix and VMware) will eventually introduce their own flavors of virtual GPUs.
Bottom Line Today Though:
To deliver rich media to an endpoint at all, there must be a GPU in the data center host machine capable of processing graphically intense and 3D applications. Another caveat is the necessity of a protocol capable of delivering those graphics. There is a great article about delivering Vista Aero Glass with VDI, written back in February of 2008, on Brian Madden's site (http://bit.ly/37KpyD); the sad thing is that it is still basically the truth. Two years later, the only real option for complete local multi-media performance with VDI or Virtual Desktops is a Blade PC, Workstation Blade or Teradici hardware on the back-end data center host. All of the above hardware Virtual Desktop solutions also have specific client dependencies (in the case of Teradici you have to have a proprietary hardware chip at the end point, while Citrix and HP use software codecs on end points). This also has to be a literal 1-to-1 connection, meaning a hardware device in the data center being monopolized by a single connection from a hardware device on the client side.
To be clear, I am only speaking to the full multi-media experience when referring to Teradici's hardware solution, because of its capability to deliver rich media by leveraging hardware. We will get to the software version of PCoIP that VMware teamed up with Teradici to develop when we start digging into the various protocols and how they work.
Microsoft: RDP – one of the grandfathers of remote access protocols. It really is not a viable solution in a VDI scenario unless you have a very small environment and it is being leveraged ONLY on a LAN (local area network). From what I have read and heard, RemoteFX will initially be geared to the LAN as well. The pre-Beta 3 release of Terminal Server on Windows Server 2008 had the capability of delivering Vista Aero Glass from a 2008 Terminal Server with a graphics card installed to a Vista client that also had a GPU installed. However, benchmark testing showed it added approximately 30% latency for every concurrent connection, and the feature was pulled before release to the public. Ironically, at the time there was a great deal of speculation that RDP 6.0 would be an ICA killer. Let's see if that becomes more of a truth with the release of RemoteFX.
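To see why that benchmark killed the feature, here is a toy model of the quoted result. The 50 ms baseline and the assumption that the ~30% penalty compounds per connection are both mine, purely for illustration:

```python
# Toy model of the quoted benchmark: ~30% added latency per concurrent
# connection. The 50 ms baseline and compounding growth are assumptions.

base_latency_ms = 50.0

for connections in range(1, 9):
    latency = base_latency_ms * 1.30 ** connections
    print(f"{connections} connection(s): ~{latency:.0f} ms")

# Even from a modest 50 ms starting point, the session blows past 200 ms
# (roughly where interactivity starts to feel sluggish) by the sixth user.
```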
HP: Remote Graphics Software (RGS) – The RGS protocol allows real-time remote access to graphics over a LAN (only), and it uses a proprietary HP codec that has to be installed on the end point. RGS has back-end GPU dependencies. This is a fantastic solution for 3D apps like CAD/CAM environments. There have been persistent rumors for over a year that HP is planning to end-of-life this protocol. Officially they deny it, but as stated before, HP fundamentally sees itself as a hardware company, and they must realize that Microsoft, Citrix, Quest's Provision Networks and VMware are expending a large amount of resources to develop the best enterprise-class solution. In their case it would make sense to EOL RGS: their hardware supports any of the protocols and associated solutions currently being deployed in production environments, including VMware's, Citrix's and Quest's Provision Networks software, and that will absolutely continue to be the case (no software manufacturer would purposely exclude HP from its integration testing).
There's a very common saying: "No one has ever been fired for buying HP." So really, from HP's perspective, why expend a huge amount of time and energy when you have partners who have been doing it longer and have much deeper visibility into its complexity? Why re-invent the wheel?
Citrix: High Definition Experience (HDX) – Citrix has been working on the ICA\PortICA\HDX protocol since 1988. It is a protocol that uses TCP as its transport, and over the years they have slowly added more virtual channels, one of the most important being the Thinwire virtual channel. What I find compelling about the latest release is that they finally gave up the ghost and decided to redirect any media or website (such as online training courses) dependent on Flash to the end point device, so it does not overload the Citrix server processor when all the company employees decide to log in at the last minute to go through their company ethics/compliance training. To be "completely accurate", HDX provides a foundational set of technologies for server-rendered multi-media delivery that covers all media formats and client devices, then opportunistically looks to offload the work to the client device when it can (if the client can handle it, etc.). This approach provides full support for all media players and formats while maximizing server scalability (which is especially important for Flash, which is a resource hog).
Historically, Citrix resisted doing this because they want to make the end point irrelevant. They have always maintained that they wanted to own the user experience from the client (front end) to the server (back end) and everything betwixt and between. Of course, having a Flash player on the access device\end point is required, and in some environments Flash is not installed in the standard user operating system image. However, 90% of Citrix's customers work on Windows, which comes with a native Flash player anyway.
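To make the "opportunistic offload" idea concrete, here is a hypothetical sketch of the decision logic in Python. None of these names or thresholds come from the actual HDX code; they are invented purely to illustrate the fallback behavior described above:

```python
# Hypothetical sketch of opportunistic Flash offload. All names and
# thresholds are invented for illustration, not taken from HDX itself.

from dataclasses import dataclass

@dataclass
class Endpoint:
    has_flash_player: bool
    cpu_headroom: float   # fraction of endpoint CPU currently free, 0..1

def choose_renderer(endpoint: Endpoint, media_type: str) -> str:
    if (media_type == "flash"
            and endpoint.has_flash_player
            and endpoint.cpu_headroom > 0.3):
        # Offload: fetch and render Flash on the endpoint, sparing the
        # server CPU when hundreds of users hit compliance training at once.
        return "client-side redirection"
    # Fallback: render on the server and send the drawn output down the
    # wire, which covers every media format and every client device.
    return "server-side rendering"

print(choose_renderer(Endpoint(has_flash_player=True, cpu_headroom=0.8), "flash"))
print(choose_renderer(Endpoint(has_flash_player=False, cpu_headroom=0.8), "flash"))
```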
According to Derek Thorslund, Sr. Product Manager\Strategist for HDX, "Citrix has gotten overwhelmingly positive feedback from customers about Flash redirection."
I can confirm that myself; I have gotten the same story from customers I have had the opportunity to visit. You can follow Derek on Twitter @derektcitrix or read his blog (http://bit.ly/bE1Hs0) on the Citrix community blog site. I have been following both him and Juan Rivera (Sr. Development Manager, HDX), whom you can follow on Twitter @juancitrix or read at http://bit.ly/cP5nZH, and their progress around multi-media delivery methods for years. Derek posted a particularly interesting blog about HDX Flash redirection for Linux thin clients recently.
VMware: PCoIP – This is basically the first incarnation of the protocol development partnership between VMware and Teradici. It is based on UDP, a real-time transport that works well on high-latency networks; UDP is used for things like VoIP. UDP is not as dependable at packet delivery as TCP-based protocols. The protocol is considered lossless, and it is, "technically": it builds up to lossless rather than being lossless immediately. It works very well when providing Virtual Desktops to developers in India, or in a LAN environment. It is surprisingly scalable, and the performance is better than you would expect from a first attempt at a proprietary protocol. I am very excited to see how it will evolve.
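The "builds up to lossless" idea is easier to see in miniature. The following toy sketch shows progressive refinement in general, where each pass sharpens the previous one until the last pass is bit-exact; it is a conceptual illustration, not PCoIP's actual codec:

```python
# Toy illustration of 'building to lossless': deliver the display as
# progressively finer passes, so motion looks smooth immediately and
# static regions sharpen to pixel-perfect over a few frames.
# Conceptual sketch only; PCoIP's real codec works differently.

def quantize(pixels, step):
    """Crude lossy pass: snap each pixel value down to a multiple of step."""
    return [p - (p % step) for p in pixels]

original = [13, 200, 97, 54, 255, 31]

for step in (64, 16, 4, 1):          # each pass refines the previous one
    approx = quantize(original, step)
    error = max(abs(a - b) for a, b in zip(original, approx))
    tag = "lossless" if error == 0 else f"max error {error}"
    print(f"pass (step={step:>2}): {approx}  -> {tag}")

# Only the final pass (step=1) is bit-exact; until then the image is
# 'good enough' to interact with, which is why UDP's tolerance for
# dropped packets fits this design so well.
```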
All of the protocols we have looked at here are in fact screen-scrape-based protocols. No one has really figured out how to offload the graphics primitives generated on a server to the client device, to be reconstructed using the local end point software and hardware. It is about having a software engine on the server and client side that can take those primitives, deliver them, and reconstruct them in real time. No one has this figured out.
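Here is the distinction in miniature, with everything invented for illustration: a screen-scrape protocol diffs the rendered framebuffer and ships the changed pixels, while primitive remoting would ship the drawing commands themselves for the endpoint to replay:

```python
# Contrasting the two approaches in miniature. All data here is invented
# for illustration.

# --- screen scraping: compare framebuffers, send the changed cells ---
prev_frame = ["....", "....", "...."]
next_frame = ["....", ".##.", "...."]

dirty = [(row, col)
         for row, (old, new) in enumerate(zip(prev_frame, next_frame))
         for col, (a, b) in enumerate(zip(old, new)) if a != b]
print("screen-scrape sends pixels at:", dirty)

# --- primitive remoting: send the draw command, not its pixels ---
draw_commands = [("fill_rect", {"x": 1, "y": 1, "w": 2, "h": 1})]
print("primitive remoting sends:", draw_commands)
# The endpoint's own GPU/software would replay the command list; that
# real-time replay engine is exactly what no one has figured out yet.
```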
Today, VDI does not scale like Terminal Server or Citrix when it comes to user density and management overhead. From what I am seeing, VDI is being treated as an ancillary solution, or add-on, for specific roles, most of which (not all) are non-production and don't have SLAs tied to them. I just don't see Virtual Desktops replacing server-based multi-user environments; I see them complementing them, at least for the near future. After VDI\Virtual Desktops and client hypervisors mature, who knows?
What I do know for sure is this is an exciting time to be in this space and I am excited to be along for the ride!
THANK YOU