There has long been some overlap between the definitions of “Guest iSCSI connections” and VMware’s RDMs (or what Microsoft might call passthrough disks). They are similar in nature but differ in important ways, and one of those ways is how, or whether, Optical Prime can see these volumes.
RDMs or Passthrough Disks
Historically these were needed mainly for Fibre Channel arrays, since mounting an FC connection directly inside a guest VM was not possible. That limitation has since been addressed, but RDMs are still used as a matter of design preference.
With both iSCSI and FC RDMs, a volume or LUN is exported from the array and presented to the hypervisor. To keep things simple we will talk only in VMware terms. These LUNs are typically converted into what is called a datastore, formatted with VMFS, and presented as usable space in which to store VMs.
An RDM skips those last steps. The LUN is still presented to and mounted by the hypervisor, but it is never turned into a datastore and never formatted with VMFS. Instead it is “passed off” to a VM in its raw form, and the VM’s operating system (OS) sees it as a disk it can format directly.
There are several advantages to this. The disk can be moved or attached to another VM and managed separately from the OS disk, which allows data protection and OS protection policies that are independent of each other. RDMs also bypass the layering of NTFS virtual disks on top of a VMFS-formatted datastore, which made performance tricks like sector alignment achievable inside VMs.
The key here is that the hypervisor itself orchestrates this action, so it has knowledge of the RDM. The hypervisor owns the disk. An easy way to tell: right-click the VM and open its properties; if the disk shows up in the properties screen, the hypervisor is aware of it.
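For anyone who prefers to script that check, here is a minimal sketch using the open-source pyVmomi SDK (an assumption for illustration; it is not part of Optical Prime or Live Optics). It walks a vCenter inventory and flags any virtual disk whose backing is a raw device mapping; the hostname, credentials, and certificate handling are placeholders.

```python
# Minimal sketch using the open-source pyVmomi SDK (not part of Optical Prime).
# Hostname and credentials are placeholders for illustration only.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; use proper certificates in production
si = SmartConnect(host="vcenter.example.local", user="readonly@vsphere.local",
                  pwd="********", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        for dev in vm.config.hardware.device:
            if not isinstance(dev, vim.vm.device.VirtualDisk):
                continue
            backing = dev.backing
            if isinstance(backing, vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo):
                # RDM: the hypervisor knows about the LUN, so it shows up here
                print(f"{vm.name}: {dev.deviceInfo.label} is an RDM "
                      f"({dev.capacityInKB // (1024 * 1024)} GB, LUN {backing.lunUuid})")
            else:
                print(f"{vm.name}: {dev.deviceInfo.label} is a regular VMDK")
    view.DestroyView()
finally:
    Disconnect(si)
```

Note what is missing from that output: the hypervisor can report the disk’s size, but not how much of it the guest has actually used.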
Guest iSCSI connections
In the beginning, RDMs weren’t all that beneficial when you chose iSCSI as your fabric. VMware only had basic failover pathing, and iSCSI was only available over 1Gb connections, so the most you could get out of an iSCSI RDM was 1Gb of throughput. Compared to an FC RDM that might have been, say, 4Gb, this was pretty slow.
However, iSCSI vendors figured out that if you add multiple NICs to the iSCSI vSwitch and force a 1:1 relationship to the virtual NICs, you could (and still can) effectively add multiple paths to a VM, boosting performance and giving the VM full-speed connectivity straight to the storage array. This method used virtual NICs, but it bypassed the controls of the vSwitch, most notably capacity and multipathing policies.
This approach was slightly better than FC RDMs because these disks were very portable: they could be moved seamlessly between physical and virtual machines, and they could be replicated or even backed up independently of the hypervisor. The other HUGE advantage was that disk size was subject to the rules of the guest’s OS and NOT the hypervisor’s. VMware’s datastore limit at the time was 2TB, yet Windows could format volumes of 200+ TB.
The key here is that the hypervisor has NO knowledge of the disk, as it is a direct relationship between the VM’s OS and the storage array.
In almost all cases this was used as a workaround for hypervisor limitations or to gain flexibility for a specific use case, e.g. virtualizing SQL Server or a file server.
The impact on Optical Prime
An Optical Prime project in Live Optics measures IO and capacity by speaking to the target operating system, whether physical or virtual. So with a physical machine, it will see those iSCSI disks, because the OS reports them as internal disks.
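To illustrate what “the OS reports them as internal disks” means, here is a minimal guest-side sketch using the psutil library (an assumption for illustration; this is not Live Optics’ actual collector). A volume mounted through the guest’s own iSCSI initiator shows up as just another block device, with the same usage and IO counters as a local disk.

```python
# Minimal guest-side sketch using psutil (illustration only; not the Live Optics collector).
# A LUN attached via the guest's own iSCSI initiator appears as an ordinary disk here.
import psutil

for part in psutil.disk_partitions(all=False):
    usage = psutil.disk_usage(part.mountpoint)
    print(f"{part.device} mounted at {part.mountpoint}: "
          f"{usage.used / 2**30:.1f} GB used of {usage.total / 2**30:.1f} GB")

# Per-disk IO counters look the same for every disk the OS owns, whether the
# blocks live on a local SSD or on an array reached over guest iSCSI.
for name, io in psutil.disk_io_counters(perdisk=True).items():
    print(f"{name}: {io.read_count} reads, {io.write_count} writes")
```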
Optical Prime also communicates with VMware directly through the hypervisor’s API and records performance there. Since the hypervisor has knowledge of the RDM, it will also give you performance data on RDMs. One small caveat for RDMs is that the guest VM’s OS controls the formatting and capacity usage, so Optical Prime must report RDMs as fully used, since the hypervisor only tells us the size of the disk.
When Optical Prime is recording a hypervisor cluster, it can NOT see guest iSCSI connections, because the hypervisor has no knowledge of their existence. In these cases you would need to record the VM itself in addition to the hypervisor cluster. At reporting time you would simply drop the OS drive and keep only the guest iSCSI disk information, to avoid double counting the “C drive.”
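As a rough illustration of that reconciliation step, here is a sketch with hypothetical record shapes (the real Live Optics export will look different): from the in-guest capture you keep only the guest iSCSI disks, and drop the OS drive because the hypervisor capture already counted it as a VMDK on a datastore.

```python
# Sketch of the de-duplication step with hypothetical record shapes;
# Live Optics' real export format will differ.
from dataclasses import dataclass

@dataclass
class GuestDisk:
    vm_name: str
    device: str          # e.g. "C:" or "E:"
    is_guest_iscsi: bool
    used_gb: float

guest_capture = [
    GuestDisk("sql01", "C:", False, 60.0),   # OS drive, already counted via the hypervisor
    GuestDisk("sql01", "E:", True, 950.0),   # guest iSCSI data volume, invisible to the hypervisor
]

# Keep only what the hypervisor could not see.
extra_capacity = [d for d in guest_capture if d.is_guest_iscsi]
print(f"Adding {sum(d.used_gb for d in extra_capacity):.0f} GB of guest iSCSI capacity")
```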
It is something to watch out for, but the good news is that since iSCSI has moved to 10Gb, and since VMware now offers better multipathing as well as larger datastore limits, the need for guest iSCSI connections as a workaround has diminished.
Today the use of either option is really a design choice. However, since you will often use Optical Prime to examine a legacy system for the purpose of an upgrade, you will need to be on the lookout for ways you might miss data needed for your design.
Twitter: @SJKirchoff - #OpticalPrime