One of the things I am most interested in is learning about computers. When I was in college, we only had those old-school bulky mainframes, which we used to integrate microprocessor units with computers to control electrical circuitry, automate heavy equipment, and build other systems controls for our school projects and thesis work.
In the new century, computer architecture has rapidly grown into a much more diversified field. One of these technologies is virtualization. Virtualization has become widely used in most firms for many practical reasons. Let's talk about what virtualization is all about.
What Is Virtualization?
Virtualization takes your everyday notion of computing and turns it on its head. Left becomes right. Red becomes blue. Physical computers become virtual computers. It might sound complicated -- properly configuring a real computer is hard enough -- but once you see it in action, it makes perfect sense. In the simplest terms, virtualization is the process of using special software -- a class of programs called hypervisors or virtual machine managers -- to create a complete environment in which a guest operating system can function as though it were installed on its own computer. That guest environment is called a virtual machine (VM). This image shows one such example: a system running Windows 7 using a program called VMware Workstation to host a virtual machine running Ubuntu Linux.
Ask 100 people what the term virtual means and you'll get a lot of different answers. Most people define virtual with words like "fake" or "pretend," but these terms only begin to describe it. Let's try to zero in on virtualization using a term that hopefully you've heard: virtual reality. For most of us, the idea of virtual reality starts with someone wearing headgear and gloves, as shown in this example.
The headgear and the gloves work together to create a simulation of a world or environment that appears to be real, even though the person wearing them is located in a room that doesn't resemble the simulated space. Inside this virtual reality you can see the world by turning your head, just as you do in the real world. Software works with the headset's inputs to emulate a physical world. At the same time, the gloves enable you to touch and move objects in the virtual world.
To make virtual reality effective, the hardware and software need to work together to create an environment convincing enough for a human to work within it. Virtual reality doesn't have to be perfect -- it has limitations -- but it's pretty cool for teaching someone how to fly a plane or do a spacewalk, for example, without having to start with the real thing (as seen here).
The Hypervisor
A normal operating system uses programming called a supervisor to handle very low-level interactions between hardware and software, such as task scheduling, allotment of time and resources, and so on.
Because virtualization enables one machine -- called the host -- to run multiple operating systems simultaneously, full virtualization requires an extra layer of sophisticated programming to manage the vastly more complex interactions. One common method calls this extra programming a hypervisor or virtual machine manager (VMM).
A hypervisor has to handle every input and output that the operating system would request of normal hardware. With a good hypervisor like VMware Workstation, you can easily add and remove virtual hard drives, virtual network cards, virtual RAM, and so on. This figure shows the Hardware Configuration screen from VMware Workstation. Just as virtual reality creates an environment that convinces humans they're in a real environment, virtualization convinces an operating system it's running on its own hardware.
Virtualization even goes so far as to provide a virtualized BIOS and System Setup for every virtual machine. The example shows VMware Workstation displaying the System Setup, just like you'd see it on a regular computer.
The host machine allocates real RAM and CPU time to every running virtual machine. If you want to run a number of virtual machines at the same time, make sure your host machine has plenty of CPU power and, more importantly, plenty of RAM to support all the running virtual machines!
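To get a rough feel for the numbers, here's a minimal back-of-the-envelope sketch in Python. The VM names and sizes are hypothetical, and the simple additive model ignores tricks like memory overcommitment that real hypervisors use:

```python
# Rough host-RAM estimate for running several VMs at once.
# Treat this as a ceiling: real hypervisors can overcommit
# and share memory in ways this simple model ignores.

vms = {
    "ubuntu-desktop": 2.0,   # GB of RAM assigned to each guest (hypothetical)
    "win7-test": 2.0,
    "web-server": 1.0,
}

host_os_overhead_gb = 2.0    # leave room for the host OS itself

total = sum(vms.values()) + host_os_overhead_gb
print(f"Plan for at least {total:.0f} GB of host RAM")
```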
Emulation vs. Virtualization
Virtualization takes the hardware of the host system and segments it into individual virtual machines. If you have an Intel system, a hypervisor creates a virtual machine that acts exactly like the host Intel system. It cannot act like any other type of computer. For example, you cannot make a virtual machine on an Intel system that acts like a Sony PlayStation 3. Hypervisors simply pass the code from the virtual machine to the actual CPU.
Emulation is very different from virtualization. An emulator is software or hardware that converts the commands to and from the host machine into an entirely different platform. This illustration shows a Super Nintendo Entertainment System emulator, Snes9X, running a game called Donkey Kong Country on a Windows system.
Sample Virtualization
You can perform virtualization in a number of ways; this course will show you several of them. Before I go any further, though, let's take the basic pieces you've learned about virtualization and put them together in one of its simpler forms. In this example, I'll use the popular VMware Workstation on a Windows 7 system and create a virtual machine running Ubuntu Linux.
Begin by obtaining a copy of VMware Workstation. This program isn't free, but VMware will give you a 30-day trial. Go to www.vmware.com to get a trial copy. A freshly installed copy of VMware Workstation looks like the example shown here.
Clicking New Virtual Machine prompts you for a typical or custom setup (shown here). These settings are only for backward-compatibility with earlier versions of VMware, so just click Next.
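Once you have a VM, you don't have to drive everything through the GUI. VMware Workstation ships with a command-line tool called vmrun, and a minimal Python sketch of scripting it might look like this (the .vmx path is hypothetical, and vmrun must be on your PATH):

```python
import subprocess

# Path to the VM's configuration file -- hypothetical; yours will differ.
VMX = r"C:\VMs\Ubuntu\Ubuntu.vmx"

# Start the VM without opening the Workstation GUI.
subprocess.run(["vmrun", "start", VMX, "nogui"], check=True)

# List the VMs currently running on this host.
subprocess.run(["vmrun", "list"], check=True)
```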
Why Do We Virtualize?
Virtualization has taken the networking world by storm, but for those who have never seen virtualization, the big question has got to be: Why? Let's talk about the benefits of virtualization. In this section, keep two important things in mind:
A single hypervisor on a single system will happily run as many virtual machines as its RAM, CPU, and drive space allow. (RAM is almost always the main limiting factor.)
A virtual machine that's shut down is little more than a file (or two) sitting on a hard drive.
Power Saving
Before virtualization, each server OS needed its own physical system. With virtualization, you can place multiple virtual servers on a single physical system, substantially reducing electrical power use. Rather than one machine running Windows Server 2008 and acting as a file server and DNS server, and a second machine running Linux as a DHCP server, for example, the same computer can handle both operating systems simultaneously. Extend this electricity saving across an enterprise network or a data server farm and the savings -- both in dollars spent and in electricity used -- are tremendous.
Hardware Consolidation
Similar to power saving, why buy a high-end server, complete with multiple processors, RAID arrays, redundant power supplies, and so on, and only run a single server? With virtualization, you can easily beef up the RAM and run a number of servers on a single box.
System Recovery
Possibly the most popular reason for virtualizing is to keep uptime percentage as high as possible. Let's say you have a Web server installed on a single system. If that system goes down -- due to hacking, malware, or the like -- you need to restore the system from a backup, which may or may not be easily at hand. With virtualization, you merely shut down the virtual machine and reload an alternative copy of it.
Think of virtual machines like you would a word processing document. Virtual machines don't have a "File | Save" equivalent, but they do have something called a snapshot that enables you to save an extra copy of the virtual machine as it is exactly at the moment the snapshot is taken. This image shows VMware Workstation saving a snapshot.
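Snapshots are scriptable, too. Here's a minimal sketch using vmrun, assuming a hypothetical .vmx path and snapshot name:

```python
import subprocess

VMX = r"C:\VMs\Ubuntu\Ubuntu.vmx"   # hypothetical path

# Take a named snapshot of the VM's current state.
subprocess.run(["vmrun", "snapshot", VMX, "before-patching"], check=True)

# Later, roll the VM back to exactly that moment.
subprocess.run(["vmrun", "revertToSnapshot", VMX, "before-patching"], check=True)
```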
System Duplication
Closely tied to system recovery, system duplication takes advantage of the fact that VMs are simply files, and like any file, they can be copied. Let's say you want to teach 20 students about Ubuntu Linux. Depending on the hypervisor you choose (VMware does this extremely well), you can simply install a hypervisor on 20 machines and copy a single virtual machine to all the computers. Equally, if you have a virtualized Web server and need to add another Web server (assuming your physical box has the hardware to support it), why not just make a copy of the server and fire it up as well?
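Because a powered-off VM is just a handful of files, duplication can be as simple as a directory copy. A minimal sketch with hypothetical paths (shut the VM down first; on first boot most hypervisors will ask whether the VM was moved or copied so they can regenerate identifiers such as MAC addresses):

```python
import shutil

# Copy an entire powered-off VM -- configuration, virtual disks, and all.
# Paths are hypothetical; adjust to your own layout.
shutil.copytree(r"C:\VMs\Ubuntu", r"C:\VMs\Ubuntu-student02")
```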
Research
Here's a great example from my own company. As with any distributed program, we tend to get a few support calls. Reproducing a problem on the same OS as the caller's, even down to the service pack, helps me solve it.
In the pre-virtualization days, you commonly had seven to ten PCs, each dual-booting copies of a particular Windows version. Today, a single hypervisor enables you to support a huge number of Windows versions on a single machine (shown in this figure).
Virtualization in Modern Networks
You've already seen virtualization in action with the example shown using VMware Workstation earlier in this course. Many networks use a few virtual machines to augment and refine a traditional network closet. VMware Workstation is how I first performed virtualization on PCs, but the technology and power have grown dramatically over the last few years.
VMware Workstation requires an underlying operating system, so it functions essentially like a very powerful desktop application. What if you could remove the OS altogether and create a bare-metal implementation of virtualization?
VMware introduced ESX in 2001 to accomplish this goal. ESX is a hypervisor that's powerful enough to replace the host operating system on a physical box, turning the physical machine into a machine that does nothing but support virtual machines. ESX, by itself, isn't much to look at; it's a tiny operating system/hypervisor that's usually installed on something other than a hard drive. This figure shows how I loaded my copy of ESX: via a small USB thumb drive. Power up the server; the server loads ESX off the thumb drive; and in short order, a very rudimentary interface appears where I can input essential information, such as a master password and a static IP address.
Don't let ESX's small size fool you. It's small because it only has one job: to host virtual machines. ESX is an extremely powerful operating system/hypervisor.
Some writers will use the term virtual machine manager to describe virtual machine software that runs on top of a host operating system. They'll use the term hypervisor to describe only software that does not need a host operating system. Using this terminology, VMware Workstation is a virtual machine manager and ESX is a hypervisor.
Other writers call both the hosted and bare-metal -- or native -- virtualization software products hypervisors, but make a distinction in other descriptive words (such as hosted or native).
Powerful hypervisors like ESXi are rarely administered directly at the box. Instead you use tools such as VMware's vSphere Client, so you can create, configure, and maintain virtual machines on the hypervisor server from the comfort of a client computer running this program. Once the VM is up and running, you can close the vSphere client, but the VM will continue to run happily on the server. For example, let's say you create a VM and install a Web server on that VM. As long as everything is running well on the Web server, you will find yourself using the vSphere client only to check on the Web server for occasional maintenance and administration.
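The vSphere Client is a GUI, but the same management interface is exposed as an API. As a minimal sketch, here's how listing the VMs on a host might look with the pyVmomi Python bindings (the hostname and credentials are hypothetical, and certificate verification is disabled only because this is a lab):

```python
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Connect to the ESXi host -- hostname and credentials are hypothetical.
ctx = ssl._create_unverified_context()  # lab only; verify certificates in production
si = SmartConnect(host="esx01.example.com", user="root",
                  pwd="secret", sslContext=ctx)

# Walk the inventory and print each VM's name and power state.
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    print(vm.name, vm.runtime.powerState)

Disconnect(si)
```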
So you now really have two different ways to virtualize: using virtual machine managers like VMware's Workstation to manage virtual desktops and using powerful hypervisors like ESX to manage virtual servers. Granted, you could run a server, such as a Web server, in VMware Workstation, and you also could run a copy of Windows 7 Ultimate from an ESX system. Nothing is wrong with doing either of these.
Thus far, this course sounds like an advertisement for VMware. VMware really brought virtualization to the PC world and still holds a strong presence, but there are a number of alternatives to VMware products. Let's see what else is available.
VMware Workstation
The granddaddy and front-runner in virtualization, VMware Workstation, comes in both Windows and Linux versions. VMware Workstation runs virtually (pun intended!) any operating system you'll ever need and is incredibly stable and proven. Too bad it's not free.
One of the more interesting features of VMware Workstation is VMware Tools, which adds useful features such as copy/cut and paste between the virtual desktop and the real desktop.
Virtual PC
Microsoft has offered a few different virtual machine managers over the years, with the current mainstream product being Windows Virtual PC (shown here). Windows Virtual PC is free, but has some serious limitations. First, it only works on Windows 7 Professional, Ultimate, and Enterprise. Second, it only officially supports Windows VMs, although a few intrepid souls have managed to get Linux working.
Parallels
Parallels is the most popular virtualization manager for Mac OS X (as shown in this image), although VMware Fusion is a close second. Parallels supports all popular operating systems and even has a fair degree of 3-D graphics support -- more than even the mighty VMware. Parallels also offers Windows and Linux versions.
KVM
Of course, the open source world has its players too. While picking a single product to represent the Linux/UNIX world is hard, no one who knows virtualization would disagree that KVM, championed by Red Hat, is a dominant player. Unlike the other virtual machine managers discussed, KVM also supports a few non-x86 processors. A common KVM deployment serves thin clients that connect to the host where the VM servers reside.
Hypervisors
While you have lots of choices when it comes to virtual machine managers, your choices for true bare-metal hypervisors are limited to the two biggies: VMware's ESX and Microsoft's Hyper-V. There are others, such as Oracle's VM Server, but nothing has the market share of ESX or Hyper-V.
Hyper-V
Hyper-V, formerly known as Windows Server Virtualization, is a native hypervisor that enables platform virtualization on x86-64 systems. Supported Windows versions include Windows 8.1, Windows Server 2012 R2, Windows 8, Windows Server 2012, Windows Server 2008 R2, and Windows Server 2008. A stable release arrived on March 15, 2011, with Windows Server 2008 R2 Service Pack 1 (KB976932).
Hyper-V exists in two variants:
As a stand-alone product called Hyper-V Server: Four major versions have so far been released: Hyper-V Server 2012 R2 (containing the current release of Hyper-V), Hyper-V Server 2012, Hyper-V Server 2008 R2 and Hyper-V Server 2008.
As an installable role in Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, Windows Server 2008 and the x64 edition of Windows 8 Pro.
The stand-alone versions of Hyper-V are free. Hyper-V Server 2008 was released on October 1, 2008. It is a variant of the core installation of Windows Server 2008 that includes full Hyper-V functionality; other Windows Server 2008 roles are disabled, and only a limited set of Windows services is available. The free Hyper-V Server 2008 variant is limited to a command-line interface (CLI), where configuration of the "Host" or "Parent" (Hyper-V Server 2008) OS, physical hardware, and software is done using shell commands. A menu-driven CLI does simplify initial configuration considerably, and some freely downloadable script files extend this concept. Administration and configuration of the "Host" OS and the "guest" virtual OSes is generally done with extended Microsoft Management Consoles installed onto a Windows 7 PC or Windows Server 2008 (32- or 64-bit), or with System Center Virtual Machine Manager.
Alternatively, another Windows Server 2012 (or 2008) computer with the Hyper-V role installed can be used to manage Hyper-V Server 2012 (or 2008) by redirecting the management console. Other administration and configuration of Hyper-V Server 2008 can be done over a Remote Desktop (RDP) session (though still CLI) or with redirected standard management consoles (MMCs), such as Computer Management and Local Group Policy, from a Windows Vista PC or a full installation of Windows Server 2008. This allows much easier point-and-click configuration and monitoring of Hyper-V Server 2008. Hyper-V Server 2008 Release 2 (R2) was made available in September 2009; its main features were the inclusion of Windows PowerShell v2 for greater CLI control and the updated Windows Server 2008 R2 code base.
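On newer hosts -- Windows Server 2012 and later include a Hyper-V PowerShell module -- much of this administration can be scripted. A minimal sketch that drives PowerShell from Python (the VM name, memory size, and VHD path are all hypothetical):

```python
import subprocess

def ps(command: str) -> None:
    """Run a PowerShell command and raise if it fails."""
    subprocess.run(["powershell", "-NoProfile", "-Command", command], check=True)

# Create a new VM with 2 GB of startup memory -- names and paths are hypothetical.
ps('New-VM -Name "web01" -MemoryStartupBytes 2GB '
   '-NewVHDPath "C:\\VMs\\web01.vhdx" -NewVHDSizeBytes 40GB')

# Start it, then list all VMs and their state.
ps('Start-VM -Name "web01"')
ps("Get-VM | Format-Table Name, State, MemoryAssigned")
```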
Host operating system:
- To install the Hyper-V role, Windows Server 2008, Windows Server 2008 R2 Standard, Enterprise or Datacenter edition, Windows Server 2012 Standard or Datacenter edition, or Windows 8 (or 8.1) Pro or Enterprise edition is required. Hyper-V is only supported on x86-64 variants of Windows.
- It can be installed regardless of whether the installation is a full or core installation.
Processor:
- An x86-64 processor
- Hardware-assisted virtualization support: This is available in processors that include a virtualization option; specifically, Intel VT or AMD Virtualization (AMD-V, formerly code-named "Pacifica"); see the sketch after this list.
- An NX-bit-compatible CPU must be available, and hardware Data Execution Prevention (DEP) must be enabled.
- Although this is not an official requirement, Windows Server 2008 R2 and a CPU with second-level address translation support are recommended for workstations.
- Second-level address translation is a mandatory requirement for Hyper-V in Windows 8.
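On a Linux host you can check for hardware-assisted virtualization directly. A minimal sketch that scans /proc/cpuinfo for the Intel VT (vmx) or AMD-V (svm) CPU flags:

```python
# Check /proc/cpuinfo for hardware-assisted virtualization support (Linux only).
def virtualization_flags(path="/proc/cpuinfo"):
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                return {"Intel VT (vmx)": "vmx" in flags,
                        "AMD-V (svm)": "svm" in flags}
    return {}

for feature, present in virtualization_flags().items():
    print(f"{feature}: {'yes' if present else 'no'}")
```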
Memory
- Minimum 2 GB. (Each virtual machine requires its own memory, and so realistically much more.)
- Windows Server 2008 Standard (x64) Hyper-V full GUI or Core supports up to 31 GB of memory for running VMs, plus 1 GB for the Hyper-V parent OS.
- Maximum total memory per system for Windows Server 2008 R2 hosts: 32 GB (Standard) or 2 TB (Enterprise, Datacenter)
- Maximum total memory per system for Windows Server 2012 hosts: 4 TB
Guest operating systems
- Hyper-V in Windows Server 2008 and 2008 R2 supports virtual machines with up to 4 processors each (1, 2, or 4 processors, depending on the guest OS)
- Hyper-V in Windows Server 2012 supports virtual machines with up to 64 processors each.
- Hyper-V in Windows Server 2008 and 2008 R2 supports up to 384 VMs per system
- Hyper-V in Windows Server 2012 supports up to 1024 active virtual machines per system.
- Hyper-V supports both 32-bit (x86) and 64-bit (x64) guest VMs.
Microsoft Hyper-V Server
The stand-alone Hyper-V Server variant does not require an existing installation of Windows Server 2008 or Windows Server 2008 R2. The stand-alone installation is called Microsoft Hyper-V Server for the non-R2 version and Microsoft Hyper-V Server 2008 R2 for the R2 version. Microsoft Hyper-V Server is built with components of Windows and has a Windows Server Core user experience. None of the other roles of Windows Server are available in Microsoft Hyper-V Server. This version supports up to 64 VMs per system. Microsoft Hyper-V Server has the same requirements as above for supported guest operating systems and processors, but differs in the following:
- RAM: Minimum: 1 GB; Recommended: 2 GB or greater; Maximum: 1 TB.
- Available disk space: Minimum: 8 GB; Recommended: 20 GB or greater.
Hyper-V Server 2012 R2 has the same capabilities as the standard Hyper-V role in Windows Server 2012 R2 and supports 1024 active VMs.
Hyper-V
Although Hyper-V can't stand toe-to-toe with ESX, it has a few aces up its sleeve that give it some intrigue. First, it's free. This is important in that ESX, with only a few extra add-ons, can cost thousands of dollars. Second, it comes as a stand-alone product or as part of Windows Server 2008 and later, and even some versions of Windows 8, making it easy for those who like to play to access it. Third, its simplicity makes it easier to learn for those new to using hypervisors. Watch Hyper-V. If Microsoft does one thing well, it's taking market share away from arguably better, more powerful competitors while slowly making its own product better.
Virtual Switches
Imagine for a moment that you have three virtual machines running as virtual desktops. You want all of these machines to have access to the Internet. Therefore, you need to give them all legitimate IP addresses. The physical server, however, only has a single NIC. There are two ways in which virtualization gives individual VMs valid IP addresses. The oldest and simplest way is to bridge the NIC. Each virtual NIC is given a bridged connection to the real NIC (shown in this figure). This bridge works at Layer 2 of the OSI model, so each virtual NIC gets a legitimate, unique MAC address.
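Every bridged virtual NIC therefore needs its own unique MAC address, which the hypervisor generates (VMware, for example, builds them from its own registered vendor prefixes). As a minimal sketch of the general idea, here's one way to generate a random locally administered MAC address in Python:

```python
import random

def random_mac():
    """Generate a locally administered, unicast MAC address."""
    first = random.randint(0, 255)
    first = (first | 0b00000010) & 0b11111110  # set local bit, clear multicast bit
    rest = [random.randint(0, 255) for _ in range(5)]
    return ":".join(f"{b:02x}" for b in [first] + rest)

print(random_mac())   # e.g. 0a:1b:2c:3d:4e:5f
```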
Benefits of Virtualization
There are many benefits to an IT organization or business in choosing to implement a server virtualization strategy. With the technology we have today, there's no reason to remain on the sidelines and simply watch the parade go by. If you are still waiting to get into the game, here are 10 great reasons why you should be jumping into server virtualization with both feet. These are tried-and-true benefits that have withstood the test of time (in this case, the last 10 years).
1. Help move things to the cloud
Ah yes, the cloud! You knew it was coming at some point in this list, didn't you? As much as you think you've been talked to death about virtualizing your environment, that probably doesn't compare to the number of times in the last year alone that someone has talked to you about joining "the cloud." The good news here is that by virtualizing your servers and abstracting away the underlying hardware, you are preparing yourself for a move into the cloud. The first step may be to move from a simple virtualized data center to a private cloud. But as the public cloud matures, the technology around it advances, and you become more comfortable with moving data out of your data center and into a cloud hosting facility, you will have had a head start in getting there, and the journey along the way will have better prepared you and the organization.
2. Extend the life of older applications
Let's be honest -- you probably have old legacy applications still running in your environment. These applications probably fit into one or more of these categories: they don't run on a modern operating system, they may not run on newer hardware, your IT team is afraid to touch them, and chances are good that the person or company who created them is no longer around to update them. By virtualizing and encapsulating such an application and its environment, you can extend its life, maintain uptime, and finally get rid of that old Pentium machine hidden in the corner of the data center. You know the one: it's covered in dust, with fingerprints from administrators long gone and names forgotten.
3. Isolate applications
In the physical world, data centers typically moved to a "one app/one server" model in order to isolate applications. But this caused physical server sprawl, increased costs, and underutilized servers. Server virtualization provides application isolation and removes application compatibility issues by consolidating many of these virtual machines across far fewer physical servers. This also cuts down on server waste by more fully utilizing the physical server resources and by provisioning each virtual machine with the exact amount of CPU, memory, and storage resources it needs.
4. Improve disaster recovery
Virtualization offers an organization three important components when it comes to building out a disaster recovery solution. The first is its hardware abstraction capability. By removing the dependency on a particular hardware vendor or server model, a disaster recovery site no longer needs to keep identical hardware on hand to match the production environment, and IT can save money by buying cheaper hardware in the DR site since it rarely gets used. Second, by consolidating servers down to fewer physical machines in production, an organization can more easily create an affordable replication site. And third, most enterprise server virtualization platforms have software that can help automate the failover when a disaster does strike. The same software usually provides a way to test a disaster recovery failover as well. Imagine being able to actually test and see your failover plan work in reality, rather than hoping and praying that it will work if and when the time comes.
5. Increase uptime
Most server virtualization platforms now offer a number of advanced features that just aren't found on physical servers, which helps with business continuity and increased uptime. Though the vendor feature names may be different, they usually offer capabilities such as live migration, storage migration, fault tolerance, high availability, and distributed resource scheduling. These technologies keep virtual machines chugging along or give them the ability to quickly recover from unplanned outages. The ability to quickly and easily move a virtual machine from one server to another is perhaps one of the greatest single benefits of virtualization with far-reaching uses. As the technology continues to mature to the point where it can do long-distance migrations, such as being able to move a virtual machine from one data center to another no matter the network latency involved, the virtual world will become that much more in demand.
6. Reduce hardware vendor lock-in
While not always a bad thing, sometimes being tied down to one particular server vendor or even one particular server model can prove quite frustrating. But because server virtualization abstracts away the underlying hardware and replaces it with virtual hardware, data center managers and owners gain a lot more flexibility when it comes to the server equipment they can choose from. This can also be a handy negotiating tool with the hardware vendors when the time comes to renew or purchase more equipment.
7. Faster server provisioning
As a data center administrator, imagine being able to provide your business units with near instant-on capacity when a request comes down the chain. Server virtualization enables elastic capacity to provide system provisioning and deployment at a moment's notice. You can quickly clone a gold image, master template, or existing virtual machine to get a server up and running within minutes. Remember that the next time you have to fill out purchase orders, wait for shipping and receiving, and then rack, stack, and cable a physical machine only to spend additional hours waiting for the operating system and applications to complete their installations. I've almost completely forgotten what it's like to click Next > Next > Next.
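Cloning is scriptable even on a desktop hypervisor. A minimal sketch using VMware's vmrun to stamp a linked clone out of a golden template (the paths, snapshot name, and clone name are hypothetical; a linked clone shares the template's virtual disk, which is why it comes up in seconds):

```python
import subprocess

TEMPLATE = r"C:\VMs\golden-ubuntu\golden.vmx"   # hypothetical golden image
CLONE = r"C:\VMs\web02\web02.vmx"

# Create a linked clone from a snapshot of the template, then power it on.
subprocess.run(["vmrun", "clone", TEMPLATE, CLONE,
                "linked", "-snapshot=base", "-cloneName=web02"], check=True)
subprocess.run(["vmrun", "start", CLONE, "nogui"], check=True)
```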
8. QA/lab environments
After completing a server consolidation exercise in the data center, why not donate that hardware to a QA group or build out a lab environment? Virtualization allows you to easily build out a self-contained lab or test environment, operating on its own isolated network. If you don't think this is useful or powerful, just look to VMware's own trade show, VMworld. This event creates one of the largest public virtual labs I've ever experienced, and it truly shows off what you can do with a virtual lab environment. While this is probably way more lab than you'd ever actually need in your own environment, you can see how building something like this would be cost prohibitive with purely physical servers, and in many cases, technologically improbable.
9. Reduce the data center footprint
This one goes hand in hand with the previous benefit. In addition to saving more of your company's green with a smaller energy footprint, server consolidation with virtualization will also reduce the overall footprint of your entire data center. That means far fewer servers, less networking gear, a smaller number of racks needed -- all of which translates into less data center floor space required. That can further save you money if you don't happen to own your own data center and instead make use of a co-location facility.
10. Save energy, go green
Maybe you aren't a "save the whales" or "tree hugging" type of person. That's cool. I don't wear the T-shirts either. But seriously, who isn't interested in saving energy in 2011? Migrating physical servers over to virtual machines and consolidating them onto far fewer physical servers means lowering monthly power and cooling costs in the data center. This was an early victory chant for server virtualization vendors back in the early part of 2000, and it still holds true today.
One good example of virtualization implementation for process control and automation