Because your virtualization environment may grow or change, its virtual machines (VMs) may need additional storage disks. Perhaps an Exchange server needs more room for a new private store, or a SQL Server instance requires a new database. Even file servers outgrow their space over time. This tip covers the three major approaches to installing Hyper-V storage disks.
In the physical world, a server's storage needs can be met in only a few ways. You plug in a new direct-attached disk, or you expose a new logical unit number (LUN) from your Fibre Channel or iSCSI storage area network (SAN). Initialize and format the new disk in Disk Management, and you're off to the races.
When a Hyper-V VM's storage needs change, however, several architectures are available -- but not all of them make sense in every environment. Some methods of connecting Hyper-V storage disks may seem practical but cause backup and restore issues down the road. Others are available only with a specific SAN architecture. Let's take a look at the different ways to add Hyper-V storage disks in a data center.
Creating and attaching a new VHD
The most obvious solution for installing Hyper-V storage disks is simply to create and attach new virtual hard disks (VHDs). While SCSI disks can be added to or removed from running VMs in Windows Server 2008 R2, Integrated Drive Electronics (IDE) disks and new storage controllers can be added or removed only while a VM is powered off.
New VHDs are created within the VM properties and are generally stored in the same location as a VM's initial VHD file. Although this isn't a requirement, storing VHDs together makes them easier to find. If you're running Hyper-V in a clustered configuration, it also prevents potential hiccups with Live Migration.
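If you'd rather script this step than click through Hyper-V Manager, the VHD file itself can also be created programmatically. Below is a minimal sketch, assuming the third-party Python wmi package and the root\virtualization WMI namespace that Hyper-V exposes on a Windows Server 2008 R2 host; the path and size are hypothetical placeholders, and the resulting file must still be attached to the VM afterward.

    # Minimal sketch: create a fixed-size VHD on a 2008 R2 Hyper-V host
    # through the root\virtualization WMI namespace. Assumes the Python
    # "wmi" package and an elevated session on the host itself.
    import wmi

    VHD_PATH = r"D:\VMs\SQL01\data-disk.vhd"  # hypothetical location
    SIZE_BYTES = 100 * 1024 ** 3              # 100 GB, fixed size
    # (Some wmi versions expect large uint64 values as strings.)

    conn = wmi.WMI(namespace=r"root\virtualization")
    svc = conn.Msvm_ImageManagementService()[0]

    # CreateFixedVirtualHardDisk returns a job reference and a result
    # code: 0 means completed, 4096 means the job runs asynchronously.
    job, ret = svc.CreateFixedVirtualHardDisk(Path=VHD_PATH,
                                              MaxInternalSize=SIZE_BYTES)
    print("Result code:", ret)

The same service class also offers CreateDynamicVirtualHardDisk if a dynamically expanding disk is acceptable for the workload.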
Attached VHDs are beneficial because they consolidate the entire contents of a disk into a single file. This means that host-based backups can easily capture that one file for a whole-server restoration or disaster recovery.
The downside of VHD encapsulation appears when a backup technology requires extra steps to recover individual files from inside a VHD. Therefore, carefully consider your backup solution's capabilities before choosing attached VHDs.
Pass-through disks
Pass-through disks rely on SAN connections. These storage disks are exposed via a Fibre Channel or iSCSI connection to the Hyper-V host. Once exposed to the host and initialized, the disk is then passed through to a VM on that host.
As with attached VHDs, this process requires that the VM be powered off when the disk connects to an IDE controller. SCSI disks, on the other hand, can be passed through while the VM runs.
Once the disk is exposed and visible within the host's Disk Management console, it must be initialized but left offline. Then, in the VM's properties, create a new hard drive, set the Media selection to Physical Hard Disk and select the correct disk from the list of those available.
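To make the host-side preparation concrete, here is a minimal sketch that drives the built-in diskpart utility from Python. It assumes the new LUN has appeared as disk 2 on the host (a placeholder number -- verify it in Disk Management or with diskpart's list disk first) and that the script runs from an elevated prompt.

    # Minimal sketch: initialize the new LUN, then leave it offline,
    # which is the state Hyper-V requires before it will offer the
    # disk as a pass-through candidate.
    import subprocess
    import tempfile

    DISK_NUMBER = 2  # hypothetical: confirm with "list disk" first

    script = "\n".join([
        f"select disk {DISK_NUMBER}",
        "attributes disk clear readonly",
        "online disk",   # bring the disk online long enough to...
        "convert mbr",   # ...write a disk signature (initialize it)
        "offline disk",  # then return it to the offline state
    ])

    # diskpart cannot read an already-open file, so close it first.
    with tempfile.NamedTemporaryFile("w", suffix=".txt",
                                     delete=False) as f:
        f.write(script)
        script_path = f.name

    # diskpart /s runs the commands from a script file.
    subprocess.run(["diskpart", "/s", script_path], check=True)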
Pass-through disks are useful for applications that do not support VHD encapsulation. Because a pass-through disk remains in a raw format on the SAN, it can be backed up and restored on a file-by-file basis via SAN-based backups. If your backup method works best when restored files aren't wrapped inside a VHD, this approach is worth considering.
Remember, though, that Hyper-V can neither snapshot pass-through disks nor protect them with host-based backups: a backup agent on the host can capture a VM's VHD files, but it cannot follow the VM's connection to a pass-through disk to complete the backup. In clustered environments, pass-through disks must also be exposed to every Hyper-V server that may host the VM, which can add substantial complexity to large clusters.
iSCSI direct attachment
With Fibre Channel SANs, pass-through disks are your primary non-VHD option. If an iSCSI SAN is available, however, another compelling option is to attach a SAN storage disk directly to the VM via iSCSI. This configuration brings some notable management improvements compared with the other two options.
First and foremost is the complete isolation of iSCSI disk processing from host processing. A VM connects to an iSCSI disk through its own network connection, using the iSCSI initiator inside the guest operating system. Direct-attached disks therefore do not require the host's participation to function, so fewer elements must coordinate to maintain the disk's connection.
That isolation from the host also makes direct-attached iSCSI disks more portable than pass-through disks. A Hyper-V VM with a connected iSCSI disk can be moved to a new cluster or even converted to a different hypervisor while retaining its disk connection. Connected iSCSI disks need to be exposed only to the VM, not to each Hyper-V host, which substantially reduces complexity in clustered environments.
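As a concrete illustration, the following minimal sketch performs the connection from inside the guest by driving the built-in iscsicli utility. The portal address and target IQN are hypothetical placeholders for your SAN's values, and the Microsoft iSCSI Initiator service must already be running in the VM.

    # Minimal sketch: connect a guest directly to an iSCSI target.
    # Run inside the VM, not on the Hyper-V host.
    import subprocess

    PORTAL = "192.168.1.50"                         # placeholder portal
    TARGET_IQN = "iqn.2010-01.com.example:vm-data"  # placeholder target

    # Register the portal, then log in with default settings.
    subprocess.run(["iscsicli", "QAddTargetPortal", PORTAL], check=True)
    subprocess.run(["iscsicli", "QLoginTarget", TARGET_IQN], check=True)

Once logged in, the new disk appears in the guest's own Disk Management console and can be brought online and formatted there; nothing changes on the Hyper-V host.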
Like pass-through disks, iSCSI disks do not encapsulate files and folders, so both methods share comparable backup and restore characteristics. One difference, however, is that disconnecting an iSCSI disk from one server and reconnecting it to another requires fewer steps. As a result, iSCSI disks can be moved with relative ease between VMs and even their physical counterparts. Furthermore, VM-to-VM Windows Failover Clusters can be created only with iSCSI direct attachment.
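To illustrate how few steps such a move involves, this sketch releases the disk on the machine giving it up; the machine receiving it then simply logs in as shown above. The session ID below is a hypothetical placeholder that you would read from the SessionList output (a production script would parse it instead).

    # Minimal sketch: release an iSCSI disk so another machine can
    # log in to it. Run on the machine giving up the disk.
    import subprocess

    # Show active sessions; note the Session Id of the disk to release.
    subprocess.run(["iscsicli", "SessionList"], check=True)

    SESSION_ID = "fffffa800d7f4020-4000013700000003"  # placeholder
    subprocess.run(["iscsicli", "LogoutTarget", SESSION_ID], check=True)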
With Hyper-V R2, encapsulated VHDs perform at nearly the same speed as non-encapsulated LUNs, particularly when VHDs are created with fixed sizes. Performance, then, is no longer a major factor in deciding which path to take.
Today, the decision on how to add Hyper-V storage disks rests mainly on the capabilities your backup solution requires and the type of SAN architecture in place.