Channel: Storage at Microsoft

iSCSI Target Server in Windows Server 2012 R2 for VMM Rapid Provisioning

Context

iSCSI Target Server shipped its SMI-S provider as part of the OS distribution for the first time in Windows Server 2012 R2. For more details on its design and features, see my previous blog post iSCSI Target Server in Windows Server 2012 R2.

System Center 2012 SP1 Virtual Machine Manager (VMM) and later versions manage the storage presented from an iSCSI Target Server for provisioning block storage to Hyper-V hosts. VMM configuration guidance for managing an iSCSI Target Server running on Windows Server 2012 is available on this TechNet page. That guidance is still accurate for Windows Server 2012 R2, with the following two exceptions.

  1. The iSCSI Target Server SMI-S provider is now included as part of the OS distribution, so you no longer need to install it from the VMM media. In fact, the SMI-S provider included in the Windows Server 2012 R2 distribution is the only compatible, supported provider. Further, when you install the iSCSI Target Server feature, the right SMI-S provider is installed transparently.
  2. The SAN-based Rapid Provisioning scenario of VMM requires one additional step to work with iSCSI Target Server.
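
As a quick illustration of the first point (feature name as exposed by the Windows Server 2012 R2 Server Manager cmdlets), installing the iSCSI Target Server feature, which transparently brings in the compatible SMI-S provider, is a single cmdlet:

```powershell
# Installs the iSCSI Target Server feature on Windows Server 2012 R2; the
# compatible SMI-S provider is installed along with it automatically.
Install-WindowsFeature -Name FS-iSCSITarget-Server
```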

The rest of this blog post is all about #2.

VMM SAN-based Rapid Provisioning

VMM SAN-based rapid provisioning, as the name suggests, helps an administrator rapidly provision new Hyper-V virtual machines. The key to this fast provisioning is copying the VHD files for the new virtual machine in the most efficient manner possible. VMM relies on the iSCSI Target Server snapshot functionality to accomplish this. Specifically, the iSCSI Target Server SMI-S provider exposes this snapshot functionality for use by the SM-API storage management framework, which VMM then uses to create iSCSI Virtual Disk snapshots. As a brief aside, check out my previous blog post for examples of how the same iSCSI SMI-S snapshot functionality can be used by a storage administrator directly via the SM-API Storage cmdlets, outside of VMM.
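
A hedged sketch of that aside (cmdlet names from the Windows Server 2012 R2 Storage module; the iSCSI Virtual Disk friendly name is hypothetical) — taking such a snapshot directly with the SM-API Storage cmdlets looks roughly like this:

```powershell
# Assumes the iSCSI Target Server's SMI-S provider is already registered and
# discovered by the SM-API storage management framework on this machine.
# "Disk-G" is a hypothetical iSCSI Virtual Disk friendly name.
$vdisk = Get-VirtualDisk -FriendlyName "Disk-G"

# Create a point-in-time snapshot of the LU. On the iSCSI Target Server this
# materializes as a Diff VHDX file whose parent is Disk-G's VHDX file.
$vdisk | New-VirtualDiskSnapshot
```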

Let’s focus back on VMM though, especially on the snapshot-related VMM rapid provisioning work flow and what each of these steps mean to the iSCSI Target Server:

  1. Administrator creates, customizes, and generalizes (syspreps) the desired VM OS image on a storage volume, hosted on iSCSI Target Server storage
    • iSCSI Target Server perspective: It simply exposes a VHDX-based Virtual Disk as a SCSI disk to the connecting initiator. All the creation, customization and sysprep actions are simply I/Os on that SCSI Logical Unit (LU).
  2. Administrator mounts the storage volume from that SCSI LU on the VMM Library Server; let’s call the disk Disk-G, for Golden. The administrator also makes sure to mask the LU from any other initiators.
    • iSCSI Target Server perspective: Disk-G is persisted as a VHDX-format file on the hosting volume, but the initiator (the Library Server) does not know or care about this server-side implementation detail.
  3. Administrator creates a VM template and associates the generalized SAN copy-capable OS image VHD files to this template. This process thus makes the template a SAN copy-capable VM template.
    • iSCSI Target Server perspective: This action is transparent to the iSCSI Target Server; it does not participate unless there are specific related I/Os to Disk-G.
  4. From this point on, VMM can rapidly provision each new VM by creating a snapshot of Disk-G (say Disk-S1, Disk-S2 etc.) and assigning it to the appropriate Hyper-V host that will host the new VM guest being instantiated.
    • iSCSI Target Server perspective: For each disk snapshot taken via SMI-S, iSCSI Target Server creates a Diff VHDX file to store its content, so effectively:
      • Disk-G image: parent VHDX file
      • Disk-S1 image: Diff VHDX (Disk-G is parent)
      • Disk-S2 image: Diff VHDX (Disk-G is parent)

For a more detailed discussion of SAN-based VMM rapid provisioning concepts, see this TechNet Library article.

The entire scenario of course works flawlessly in both Windows Server 2012 and Windows Server 2012 R2. However, in Windows Server 2012 R2, the storage administrator needs to take one additional step between Steps #3 and #4 in the preceding list (let’s call it “Step 3.5”). Let’s discuss what exactly changed in Windows Server 2012 R2 and what the additional step is.

iSCSI Target Server SMI-S Snapshots in Windows Server 2012 R2

On each successful SMI-S snapshot request, iSCSI Target Server creates a Diff VHDX-based iSCSI Virtual Disk. In Windows Server 2012 R2, iSCSI Target Server realizes this through native Hyper-V APIs. In contrast, iSCSI Target Server had its own implementation for creating Diff VHD files back in Windows Server 2012 – see my discussion of the redesigned persistence layer in Windows Server 2012 R2 in one of my earlier blog posts for more detail. The new Hyper-V APIs enforce that the parent VHDX file must not be open in read/write mode while the new Diff VHDX is being created. This ensures that the parent VHDX can no longer be written to once the Diff VHDX is created. Thus, while creating the Disk-S1/Disk-S2 iSCSI Target Server SMI-S snapshots in the example discussion, Disk-G cannot stay mounted for read/write by the Library Server. Disk-G must be unmounted and re-mounted as a read-only disk first – otherwise, creation of the snapshots Disk-S1 and Disk-S2 will fail.
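
The same constraint is easy to see with the Hyper-V cmdlets directly (a sketch only; the paths are hypothetical): creating a differencing VHDX succeeds only while its parent is not open for read/write anywhere.

```powershell
# Hypothetical paths; requires the Hyper-V PowerShell module.
# This is the kind of differencing-disk creation that iSCSI Target Server now
# performs through native Hyper-V APIs for each SMI-S snapshot. It fails if
# Disk-G.vhdx is still open for read/write (for example, through a read/write
# iSCSI session).
New-VHD -Path "D:\iSCSIVirtualDisks\Disk-S1.vhdx" `
        -ParentPath "D:\iSCSIVirtualDisks\Disk-G.vhdx" -Differencing
```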

Now you might be wondering why this wasn’t an issue with the Windows Server 2012-based iSCSI Target Server. iSCSI Target Server’s private implementation of Diff VHD creation in Windows Server 2012 did not enforce the read-only requirement on the parent VHD file, but the VMM Library Server (the initiator) always ensures that no more writes are performed on Disk-G once the sysprep process is complete. So the overall solution worked just fine in Windows Server 2012. With Windows Server 2012 R2, in addition to that same initiator behavior, iSCSI Target Server effectively adds a layer of safety on the target (server) side to ensure that writes are simply not possible at all on Disk-G. This is additional goodness.

I have briefly alluded to the nature of the additional process step required for rapid provisioning, but here’s the complete list of actions within that additional step:

“Step 3.5”:

  • Save the volume mount point (and the drive letter if applicable) and offline the iSCSI LU on the Library Server.
  • Unmap the LU from its current masking set (Remove-VirtualDiskFromMaskingSet). This ensures that the LU under the previous read/write access permissions can no longer be accessed by any initiator.
  • Re-add the same SCSI LU (Add-VirtualDiskToMaskingSet) back to the same masking set, albeit this time as read-only through the “-DeviceAccesses ReadOnly” PS parameter. This sets the disk access to Read-Only.
    • Note: Only one Library Server should have the volume from that SCSI LU mounted. Even if the Library Server is configured as a highly available failover cluster, only one of the cluster nodes should have the disk mounted at any one time.
  • Online the SCSI LU on the Library Server and restore its previous mount point (and drive letter, if applicable)
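
Condensed into cmdlet form (a sketch only: the friendly name is hypothetical, and the complete script at the bottom of this post performs the real lookup of the disk and masking set), the masking-set portion of “Step 3.5” amounts to:

```powershell
# Run where the SM-API provider for the iSCSI Target Server is registered.
# "Disk-G" is a hypothetical iSCSI Virtual Disk friendly name.
$vdisk = Get-VirtualDisk -FriendlyName "Disk-G"
$ms    = $vdisk | Get-MaskingSet

# Drop the read/write mapping, then re-add the same LU as read-only.
Remove-VirtualDiskFromMaskingSet -MaskingSetUniqueId $ms.UniqueId -VirtualDiskNames $vdisk.Name
Add-VirtualDiskToMaskingSet -MaskingSetUniqueId $ms.UniqueId -VirtualDiskNames $vdisk.Name -DeviceAccesses ReadOnly
```

The offline and online of the disk on the Library Server bracket these two calls, which preserves the volume’s mount points across the change.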

Here is the good news. My colleague Juan Tian has written a sample Windows PowerShell script that takes a single VMM template name parameter and performs all of the above “Step 3.5” actions in one sweep. The script should work without any changes if you run it with VMM administration credentials. Feel free to check it out at the bottom of this blog post, customize it if necessary for your deployment, and be sure to run it as “Step 3.5” in the VMM rapid provisioning work flow that I summarized above.

Finally, let me wrap up this blog post with a quick version compatibility reference:

| VMM Version  | VMM Runs on | Manages            | Compatible? |
|--------------|-------------|--------------------|-------------|
| VMM 2012 SP1 | WS2012      | iSCSI on WS2012    | Yes         |
| VMM 2012 SP1 | WS2012      | iSCSI on WS2012 R2 | Yes         |
| VMM 2012 R2  | WS2012 R2   | iSCSI on WS2012 R2 | Yes         |
| VMM 2012 R2  | WS2012      | iSCSI on WS2012 R2 | Yes         |
| VMM 2012 SP1 | WS2012 R2   | <Any>              | No          |

Hope this blog post provided all the details you need to move to production with your new Windows Server 2012 R2-based iSCSI Target Server and VMM. Give it a try and let me know how it’s working for you!

 

>>>>>>>>> SetLibraryServerLUToReadOnly.ps1 Windows PowerShell script >>>>>>>>>

# Description:
# Sets a VMM Library Server's access to a SCSI Logical Unit (LU) to read-only.
# The Library Server and the disk (SCSI LU) are identified from the VM template name parameter.
# The script offlines the disk, removes it from its masking set, re-adds it as read-only,
# and finally brings the disk back online on the Library Server.
# Must be run with VMM administration credentials.
#
param([string] $VMTemplate = "")

if (!$VMTemplate)
{
    $VMTemplate = Read-Host "Enter the name of your Template"
}
Write-Host "Get Template $VMTemplate"

# Locate the library share that hosts the template's VHD
$libShare = Get-SCLibraryShare -ID ((Get-SCVMTemplate -Name $VMTemplate | Get-SCVirtualHardDisk).LibraryShareId)
$share = $libShare.Path
if ($share.Count -lt 1)
{
    Write-Host "Cannot find library share!"
    exit 1
}
Write-Host "Get library share $share"

$path = (Get-SCVMTemplate -Name $VMTemplate | Get-SCVirtualHardDisk).Directory
if ($path.Count -lt 1)
{
    Write-Host "Cannot find SCVirtualHardDisk!"
    exit 1
}
Write-Host "Get virtual disk directory $path"

# Build a wildcard key from the share-relative path to match the partition's access path
$path2 = $path.Replace($share, "")
$key = "*" + $path2 + "*"
Write-Host "Get key $key"

$lib = ($libShare.LibraryServer).FQDN
if ($lib.Count -lt 1)
{
    Write-Host "Cannot find library server!"
    exit 1
}
Write-Host "Get library server $lib"

# Find the partition on the Library Server whose access path matches the template directory
$partition = Invoke-Command -ComputerName $lib -ScriptBlock { Get-Partition } |
    Where-Object { $_.AccessPaths -like $key }
if (!$partition)
{
    Write-Host "Cannot find disk partition!"
    exit 1
}

$diskNumber = $partition.DiskNumber
$disk = Invoke-Command -ComputerName $lib -ScriptBlock { Get-Disk -Number $using:diskNumber }
if (!$disk)
{
    Write-Host "Cannot find disk!"
    exit 1
}

# Offline the disk on the Library Server; its mount points are restored when it comes back online
Write-Host "Offline disk ..."
Invoke-Command -ComputerName $lib -ScriptBlock { Set-Disk -Number $using:diskNumber -IsOffline $true }
Write-Host "Offline disk completed!"

Write-Host "Looking for disk UniqueId - $($disk.UniqueId)"
$vdisk = Get-VirtualDisk | Where-Object { $_.UniqueId -match $disk.UniqueId }
if (!$vdisk)
{
    Write-Host "Cannot find virtual disk!"
    exit 1
}

$ms = $vdisk | Get-MaskingSet
if (!$ms)
{
    Write-Host "Cannot find masking set!"
    exit 1
}

# Remove the virtual disk from the masking set ...
Write-Host "Call Remove-VirtualDiskFromMaskingSet ..."
Remove-VirtualDiskFromMaskingSet -MaskingSetUniqueId $ms.UniqueId -VirtualDiskNames $vdisk.Name
Write-Host "Call Remove-VirtualDiskFromMaskingSet completed!"

# ... and add it back with read-only device access
Write-Host "Call Add-VirtualDiskToMaskingSet ..."
Add-VirtualDiskToMaskingSet -MaskingSetUniqueId $ms.UniqueId -VirtualDiskNames $vdisk.Name -DeviceAccesses ReadOnly
Write-Host "Call Add-VirtualDiskToMaskingSet completed!"

# Online the disk again on the Library Server
Write-Host "Online disk ..."
Invoke-Command -ComputerName $lib -ScriptBlock { Set-Disk -Number $using:diskNumber -IsOffline $false }
Write-Host "Online disk completed!"

