VMware is Cool, or maybe “VMware is Hot”!

I have done some more studying of VMware and its product line, and they certainly seem to have it together in their product offering. It should be noted that virtualization technology predates VMware and that they have competition from Microsoft Hyper-V, XenSource, Red Hat and others, but I am going to focus on VMware for now. The basic premise of the technology, for the uninitiated, is virtualization of servers (and desktops, but more on that later) on physical machines. A server running an application on a physical machine is “virtualized”: the software, data, network interface card, RAM, CPU, storage, BIOS, etc. are all turned into code elements and run as a “virtual machine” on another server, which can then hold a number of these virtual machines. The initial driving force of this technology was server consolidation; a typical result is reducing 15 existing servers to 1 after virtualization. There are obvious hardware savings in doing this, as well as energy, maintenance and rack space savings.

Thanks to the wonderful world of competition, the basic software tool that allows the virtualization of a server is available for free from both (and not by coincidence) VMware and Microsoft. This tool is called a hypervisor, and the latest VMware hypervisor is ESXi, which is, again, freely available.

The VMware world has moved way beyond the hypervisor itself, although that technology remains at the core. The main thrust of VMware’s data center offerings is central management of servers for reliability, energy savings and efficiency of operations. This is where some of the way cool stuff happens, once you get jaded with 15 or 20 servers running on one box!

The main VMware product is vSphere, which provides centralized management of the virtual servers, running ESXi or ESX, under its control. Aside from really efficient central management and control, some of the impressive features available include vMotion, which lets you migrate a server from one host machine to another on the fly, while the server is running, with no loss of accessibility! Other modules can monitor the load on a pool of servers and shift operating load so that some servers can sit idle while others are fully utilized. Servers that are not needed can also be powered down and restarted when needed.
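The load-shifting idea is easy to picture in a few lines of code. The sketch below is only a toy illustration of the concept described above, not VMware’s actual DRS algorithm; the host names and load figures are invented for the example.

```python
# Toy sketch of load-based VM rebalancing (illustrative only; real
# products weigh CPU, RAM, affinity rules and migration cost).

def rebalance(hosts):
    """hosts: dict mapping host name -> list of per-VM load figures.
    Shift VMs off the busiest host until loads are roughly even."""
    moves = []
    while True:
        busiest = max(hosts, key=lambda h: sum(hosts[h]))
        idlest = min(hosts, key=lambda h: sum(hosts[h]))
        gap = sum(hosts[busiest]) - sum(hosts[idlest])
        if gap == 0:
            break
        vm = min(hosts[busiest])  # smallest VM on the busiest host
        if gap <= vm:
            break  # moving it would not narrow the gap
        hosts[busiest].remove(vm)
        hosts[idlest].append(vm)
        moves.append((vm, busiest, idlest))
    return moves

hosts = {"esx1": [40, 30, 20], "esx2": [10], "esx3": []}
rebalance(hosts)
print({h: sum(loads) for h, loads in hosts.items()})
```

A host that ends up with an empty VM list is a candidate for the power-down behavior mentioned above.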

I mentioned virtual desktops earlier, and this, I think, is really exciting technology. “Exciting?” you might say. While I am not excited by the average new technical gizmo, major shifts in how we provide computing capabilities to users, huge new markets and technical challenges are, at the least, very interesting. Running around maintaining desktop PCs all through a big office is a huge waste of time, and the whole PC interaction with its software and other devices is a mess that, as an engineer, I have always felt was designed for kids, by kids! The “VMware View” approach to enterprise desktops, reducing desktops to virtual machines (basically files on a central server that can be copied, saved, recreated and provisioned for new setups, all in minutes), is a very powerful paradigm shift. That the approach is already migrating to smaller environments as well is a given. Big changes are ahead, and the change has great promise!

Virtual Servers

OK, I resisted virtual servers a bit, even early last year. It seemed as though technicians wanted to virtualize every client’s server no matter how small the operation or what issue was to be addressed! I am suspicious of hype and of those who get carried away with technology just because it is new or dubbed “hot”. I have to admit now that virtual servers do have many deep applications in the small and mid-sized business space.

Simply put, a “virtual server” is a sort of sub-server that runs on a very real, plain old standard hardware server box. The magic is that the virtual server (meaning its very real standard operating system software and whatever applications and data that server has) is run by a special piece of software on the real hardware server, the host. That special software becomes the operating system for the real hardware server, the core or host server if you will. This special software is relatively simple and yet complex. It is simple in that it takes care of a few basic functions: just the operation of the virtual servers and ways to back up, troubleshoot, etc. those virtual machines. It does not have Active Directory or print drivers, or all the myriad functions of a regular operating system.

Now, one of the beauties of this system is that this “special software” can run more than one “virtual” machine. Most simply, you could have two virtual servers on the one hardware box. One could be an Exchange server and the other a domain/file server, for example. These “servers” would be completely separate, each with its own operating system, applications and data in regular old Windows file folders, etc.
What is the magic “special software” that runs the virtual servers on a host server box? VMware and Microsoft’s Hyper-V are the two best-known applications that do this job, although there are others.

How can you suddenly run more servers on one box when it seems the technical folks have been constantly nagging to get bigger and bigger boxes? How many times have we heard, “to do that you need more RAM, more processor capacity, etc.”? That is a good question, and one you need to watch for good answers to. Sometimes the technical folks in smaller deployments do not take into account the full demands on a box that is suddenly running two or more servers. However, there are real technical reasons for some of this newfound power. One reason is the processors we now have: dual-core, quad-core and beyond. The processors of today have some serious power and have moved well beyond the needs of average computing requirements. This excess power can be put to good use with virtual servers. Virtualization is well into the mainstream and provides serious benefits. No need to resist the force.
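The “will it fit on one box?” question above comes down to back-of-the-envelope arithmetic. Here is a minimal sketch of that sizing check; the server names, load figures and 30% headroom margin are all made up for illustration, not measured from any real deployment.

```python
# Rough consolidation sizing check (illustrative numbers only,
# not a real capacity-planning tool).

servers = [
    # (name, avg CPU GHz used, avg RAM GB used)
    ("file",     0.3, 2),
    ("print",    0.1, 1),
    ("intranet", 0.4, 2),
    ("exchange", 0.8, 6),
]

host_cpu_ghz = 2 * 4 * 2.5  # two quad-core CPUs at 2.5 GHz
host_ram_gb = 32

cpu_needed = sum(cpu for _, cpu, _ in servers)
ram_needed = sum(ram for _, _, ram in servers)

# Leave roughly 30% headroom for peak loads and hypervisor overhead.
fits = cpu_needed <= 0.7 * host_cpu_ghz and ram_needed <= 0.7 * host_ram_gb
print(f"CPU {cpu_needed:.1f}/{host_cpu_ghz} GHz, "
      f"RAM {ram_needed}/{host_ram_gb} GB, fits: {fits}")
```

The point of the exercise is the one made above: average utilization on dedicated boxes is often a small fraction of what a modern multi-core host can supply, which is exactly the excess capacity virtualization soaks up.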

How to reduce IT costs

An enterprise VAR survey quoted in the July 2009 INFOSTORE magazine issue asked about the “biggest opportunities” for customers to “reduce IT costs”. By far the biggest option was “Virtualization”, with 49% of respondents mentioning it. The second choice was a surprising one, “data deduplication”, with 18% of respondents listing it. The #3 choice, way down at 4%, was the not very innovative “delay purchases”!
Data deduplication, if you haven’t heard about it, is an innovative way to reduce storage requirements. At a simple level, if you store a 15MB email attachment on your network, there may be 10 or many more copies of that attachment in various mailboxes, all taking up storage space. Data deduplication means retaining just one copy, with the other copies replaced by pointers to it. This concept can be carried down to the data block or bit level: an algorithm assigns a hash number to each string of data and stores one data copy along with the indexed hash numbers. In this way, your data storage requirement can be greatly reduced. So far, the main application for data deduplication has been in backup software. Note that there are risks, as with any data compression method, so care should be taken in selecting tools to do this job. Big firms with huge data storage requirements are obviously the first targets for the technology.
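The hash-and-index scheme described above can be sketched in a few lines. This is a bare-bones illustration of the idea, assuming fixed-size blocks and SHA-256 hashes; production dedup engines add variable-size chunking, collision safeguards and on-disk index structures.

```python
import hashlib

# Minimal block-level deduplication sketch (illustrative only).

def dedup_store(data, block_size=8):
    """Split data into fixed-size blocks, keep one copy per unique
    block, and return (store, index) from which data can be rebuilt."""
    store = {}   # hash -> block bytes (one copy per unique block)
    index = []   # ordered block hashes acting as the "pointers"
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        h = hashlib.sha256(block).hexdigest()
        store.setdefault(h, block)  # only the first copy is retained
        index.append(h)
    return store, index

def rebuild(store, index):
    """Follow the pointers to reconstruct the original data."""
    return b"".join(store[h] for h in index)

# Ten mailboxes holding the same attachment: stored once, not ten times.
data = b"ATTACHMENT BYTES" * 10
store, index = dedup_store(data)
assert rebuild(store, index) == data
```

The risk noted above shows up here too: if two different blocks ever hashed to the same value, `rebuild` would silently return the wrong bytes, which is why tool selection matters.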
Virtualization, choice number 1 in this survey, is a money saver even for, and perhaps especially for, firms that are quite small. I say especially for small firms because you can get the entry-level version of VMware or Microsoft’s Hyper-V at no cost. If you have only one or two servers, virtualization is of no real utility, but when a special application, a separate Exchange server, etc. comes along beyond that, virtualization can save costs and add powerful disaster recovery options. Of course, the savings really grow as you get into more and more servers.