VMware Part 2

I previously wrote about using VMware in a desktop/workstation environment, a setup commonly known as a ‘Type 2’ or ‘Hosted’ hypervisor. This is a good fit for a relatively isolated machine that is already running its own operating system. For example, your laptop is running Windows 7; you install VMware Workstation (a Type 2 hypervisor) and gain the ability to share your laptop’s resources to run additional instances of other operating systems in separate VMs.

Moving past the workstation-oriented environment, how can this same concept be applied to servers, for example?

This is commonly known as a ‘Type 1’ or ‘Native’ hypervisor. You’ll probably also see the term ‘bare metal’ applied to this type of virtualization, primarily because the hypervisor is installed directly on the server’s hardware (the ‘bare metal’) rather than within an OS. These servers are usually purpose-built to run only the hypervisor and dedicate all of their resources to running virtual machines.

In data centers everywhere, the push towards virtualization became immensely popular in the 2000s and has only grown since, to the point where a physical server running a single instance of Windows Server is a rarity.

Why is this?

First and foremost, it allowed for the consolidation of physical servers. Take, for example, my own experience with the VMware native hypervisor, referred to as ‘ESXi’. Interesting fact: ‘ESX’ was a marketing term that stands for ‘Elastic Sky X’, which at the time of its creation probably sounded super-futuristic. The small ‘i’ was added later, when the ESX hypervisor migrated off its Red Hat Linux foundation to its own purpose-built software, but I digress.

I was working with a Microsoft Exchange mail system that was distributed across seven physical servers, each holding a single role, as was the custom at the time. When my organization decided in the mid-2000s that virtualization was stable enough to run in production, one of my objectives was to migrate these physical servers to virtual machines (VMs) in a process known as a ‘P2V’ (physical-to-virtual) migration. It was bafflingly simple, and the net result was seven servers consolidated into our ESXi cluster. That left behind seven decommissioned physical servers that could be repurposed and, at the very least, would no longer be drawing power or adding cooling load, reducing our electricity bills and eliminating their hardware maintenance contracts and support.
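To make the rest of this post a little more concrete, here is a minimal, hypothetical sketch of the kind of post-migration sanity check you could run with the vSphere Python SDK (pyVmomi): connect to vCenter and list every VM with its power state. The hostname, credentials, and environment below are placeholders, not the actual setup from the story above.

```python
# A minimal pyVmomi sketch: connect to vCenter and list VMs with their power state.
# Hostname and credentials are placeholders, not a real environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; verify certificates in production
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="CHANGE_ME", sslContext=ctx)
try:
    content = si.RetrieveContent()
    # A container view walks the inventory for every object of a given type.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        print(f"{vm.name}: {vm.runtime.powerState}")
    view.Destroy()
finally:
    Disconnect(si)
```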

The word ‘cluster’ was mentioned...

The word ‘cluster’ was mentioned... and this is really the answer to the question of “well, I’ve just consolidated everything onto one server, so what happens when some aspect of that server dies?” With VMware, two or more ESXi hosts can be consolidated into a single system using centralized management and shared storage, forming a cluster, or more specifically a ‘high availability failover cluster’. What this means is that ALL resources (compute, memory, and storage) are centralized and shared amongst all of the VMs, regardless of which host happens to be running a given VM.

(Abstract, right? It’s a tough thing to wrap your head around when you haven’t seen it in action, but it’s so abstract that the IT industry literally refers to it as ‘hardware/software abstraction’, so it’s not as if no one is aware of it.)
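If you’re curious how that cluster configuration looks from the API side, here is a hedged pyVmomi sketch (placeholder connection details again) that walks each cluster and reports whether HA is enabled, along with the state of its member hosts.

```python
# Hypothetical pyVmomi sketch: report the HA setting for each cluster and list its hosts.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="CHANGE_ME", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    for cluster in view.view:
        ha_enabled = cluster.configurationEx.dasConfig.enabled  # 'DAS' is the API's name for HA
        print(f"Cluster {cluster.name}: HA enabled = {ha_enabled}")
        for host in cluster.host:
            print(f"  host {host.name}: {host.runtime.connectionState}")
    view.Destroy()
finally:
    Disconnect(si)
```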

Anyway, this ‘clustering’ allows the system to perform a task called ‘vMotion’, which live-migrates a running VM to another host without any effect on its running processes. Believe me, seeing this for the first time more than 10 years ago was completely mind-blowing; it kind of still is. Paired with the cluster’s high availability features, this allows VMs to keep running (or be restarted automatically on a surviving host) even with the loss of an entire bare metal host. The beauty is that it only takes a minimum of 2 nodes to accomplish this clustering.
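For the curious, that migration boils down to a single task in the vSphere API. Below is a hedged pyVmomi sketch, assuming a VM named ‘exchange-mbx01’ and a destination host ‘esxi02.example.local’ (both hypothetical names), that triggers a vMotion and waits for it to complete.

```python
# Hypothetical pyVmomi sketch: vMotion a running VM to another host in the cluster.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="CHANGE_ME", sslContext=ctx)

def find_by_name(content, vimtype, name):
    """Return the first inventory object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

try:
    content = si.RetrieveContent()
    vm = find_by_name(content, vim.VirtualMachine, "exchange-mbx01")      # placeholder name
    host = find_by_name(content, vim.HostSystem, "esxi02.example.local")  # placeholder name
    # Live-migrate the VM; its guest OS and applications keep running throughout.
    task = vm.MigrateVM_Task(host=host,
                             priority=vim.VirtualMachine.MovePriority.defaultPriority)
    WaitForTask(task)
    print(f"{vm.name} is now running on {vm.runtime.host.name}")
finally:
    Disconnect(si)
```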

What other benefits can be derived from server virtualization?

Beyond reducing electrical, maintenance, and support requirements, consider a hardware refresh cycle in a typical industrial control environment. Realistically, these systems may not change for 10-20 years. The mentality has long been ‘the money is in the metal’, and therefore the metal is rarely changed. With VMware, you can replace hardware without changing the VM; it exists as a set of files and may be transferred and moved to any compatible hypervisor with no ill effects. Conversely, you can migrate all VMs to a different piece of host hardware (a new physical server, for example) with an equal lack of consequences. Truly incredible.
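That ‘it’s just files’ idea is easy to see through the API as well. Here’s a small hedged sketch (the VM name and connection details are placeholders) that prints where a VM’s configuration file lives on the datastore and lists the files backing it.

```python
# Hypothetical pyVmomi sketch: show that a VM is ultimately a set of files on shared storage.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="CHANGE_ME", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "plc-historian01")  # placeholder name
    view.Destroy()
    # The .vmx path on the datastore, e.g. "[datastore1] plc-historian01/plc-historian01.vmx"
    print("Config file:", vm.config.files.vmPathName)
    for f in vm.layoutEx.file:
        print(f"  {f.type:>12}  {f.size:>12} bytes  {f.name}")
    # Because all of these files live on shared storage, any compatible host can run this VM.
finally:
    Disconnect(si)
```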

I also mentioned centralized management. How do you control all of this? VMware’s solution is known as vCenter Server, which serves as the hub of your entire virtual infrastructure: you can create additional VMs, move them, delete them, create templates to rapidly deploy additional servers, patch, scan, remediate, create snapshots to act as failsafes before software deployments, perform data replication services for longer-term disaster recovery… the list is almost endless. With DRS (Distributed Resource Scheduler), you can even automate the movement of VMs to different hosts based on load, without rebooting or other gaps in uptime. The entire package of ESXi plus vCenter Server is known as ‘vSphere’.
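As one concrete example of those vCenter-driven failsafes, here’s a hedged pyVmomi sketch that snapshots a VM before a software deployment. The VM name and snapshot label are hypothetical.

```python
# Hypothetical pyVmomi sketch: snapshot a VM before a software deployment as a failsafe.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="CHANGE_ME", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "scada-app01")  # placeholder name
    view.Destroy()
    # memory=False skips a RAM dump; quiesce=True asks VMware Tools for a consistent disk state.
    task = vm.CreateSnapshot_Task(name="pre-deploy",
                                  description="Failsafe before patching",
                                  memory=False, quiesce=True)
    WaitForTask(task)
    print(f"Snapshot created for {vm.name}; roll back from vCenter if the deployment goes sideways.")
finally:
    Disconnect(si)
```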

One more great feature of using VMs is how FAST they are, not only in deployment but also in startup and reboots. Going from a possible 10-minute-plus reboot cycle on a physical server to seconds is truly invaluable. One side note: if you have a high-availability need that doesn’t tolerate a reboot, or applications with unpredictable start times (like those using real-time logging and database files), there are also hardware and software combinations that eliminate even that, like Stratus fault-tolerant high-availability systems paired with vCenter. We’ll save those details for a later post, but the combination is amazing and worth checking out!

Summary

From ease of deployment and the versatility of VMs to consolidated control and vastly improved uptime, not to mention reduced hardware maintenance, energy, and cooling costs, server virtualization makes a whole lot of sense. Need help with your virtualization? Want to kick around ideas or discuss running Rockwell Software on virtual servers? Shoot me a note: dkang@rumsey.com!

Published August 22, 2017
Dave Kang, Network Security and Info Solutions Specialist
Dave Kang has spent the last 13 years working in multiple aspects of Information Technology after graduating from Penn State University. He has held roles in Help Desk Support, Network Administration, and Systems Administration, as well as leadership roles such as Director of Information Systems. His experience lies primarily in infrastructure design and management, both physical and virtual, working closely with vendors such as Cisco, VMware, EMC, Microsoft Azure, and Check Point. Dave is a Cisco Certified Industrial Networking Specialist working at Rumsey Electric as the Networking, Security & Information Solutions Specialist, a role that helps bridge the gap between IT and OT networks and operates as a liaison between the disparate networks in pursuit of the ‘Connected Enterprise’.