Context
Windows Server 2012 R2 ships with a rich set of standards-compliant storage management functionality. This functionality was originally introduced in Windows Server 2012; Jeff’s excellent blog post that introduced the related concepts is a good reference.
iSCSI Target Server, for its part, ships its SMI-S provider as part of the OS distribution for the first time in Windows Server 2012 R2. For more details on its design and features, see my previous blog post iSCSI Target Server in Windows Server 2012 R2.
I will briefly summarize here how the iSCSI Target Server SMI-S provider fits into the storage management ecosystem; please refer to Jeff’s blog post for the related deep-dive discussion. Storage cmdlets are logically part of the Storage Management API (SM-API) layer, and the ‘Windows Standards-Based Storage Management Service’ plumbs the Storage cmdlet management interactions through to the new iSCSI Target Server SMI-S provider. This blog post is all about how you can use the new SMI-S provider hands-on, using the SM-API Storage cmdlets. Note that iSCSI Target Server can alternatively be managed through its native iSCSI-specific cmdlets and related WMI provider APIs. While the native approach allows you to comprehensively manage all iSCSI-specific configuration aspects, the Storage cmdlet approach helps you normalize on a standard set of cmdlets across different technologies (e.g. Storage Spaces, 3rd party storage arrays).
Before we jump into the discussion of Storage cmdlet-based management, you should keep two critical caveats in mind:
- Jeff’s caution about multiple management clients potentially tripping each other up; to quote:
“You should think carefully about where you want to install this service – in a datacenter you would centralize the management of your resources as much as practicable, and you don’t want to have too many points of device management all competing to control the storage devices. This can result in conflicting changes and possibly even data loss or corruption if too many users can manage the same arrays.”
- You must decide how you want to manage a Windows-based iSCSI Target Server starting from the time the feature is installed on Windows Server. To begin with, you are free to choose from either WMI/PowerShell, or SM-API/SMI-S. But once you start using one management approach, you must stick with that approach until the iSCSI Target Server is decommissioned. Switching between the management approaches could leave the iSCSI Target Server in an inconsistent and unmanageable state, and potentially could even cause data loss.
For most users, the compelling reason for managing an iSCSI Target Server through SMI-S is usually one of the following:
- They have existing scripts already written against the Windows Storage cmdlets and the Windows SMI-S cmdlets, and they want to use the same scripts to manage a Windows Server-based iSCSI Target Server, or,
- They use 3rd party storage management products that consume the Storage Management API (SM-API) and they plan to manage Windows Server-based iSCSI Target Server using that same 3rd party software, or,
- (Perhaps most likely) They use SCVMM to manage iSCSI Target Server-based storage through SMI-S. This can be accomplished using SCVMM-specific cmdlets and UI, and is covered in detail elsewhere, so it is not the focus here; this blog post covers only the non-SCVMM management approach.
Relating Terminology
Let us do a quick review of SM-API/SMI-S concepts so you can intuitively relate them to native Windows terminology as we move into the hands-on discussion:
SM-API or SMI-S concept | iSCSI Target Server implementation term | More Detail |
--- | --- | --- |
SMI-S Provider | WMI provider | Manageability end point for the iSCSI Target Server; an iSCSI Target Server SMI-S provider is actually also built off WMI architecture under the covers. |
Storage Pools | Hosting Volume where the VHD files are stored | Storage pools in SMI-S subdivide the total available capacity in the system into groups as desired by the administrator. In the iSCSI Target Server design, each virtual disk is persisted on a file system hosted on a Windows volume. |
Storage Volume | iSCSI Virtual Disk (SCSI Logical Unit) | An SMI-S Storage Volume is the allocation of storage capacity exposed by the storage system - a storage volume is provisioned out of a storage pool. Windows Server Storage Service implementation calls this a ‘Virtual Disk’. iSCSI Target Server design calls this an iSCSI virtual disk, see New-IscsiVirtualDisk |
Masking operation | Removal of a mapping | Masking of a storage volume removes access to that SCSI LU from an initiator. In iSCSI Target Server design, it is not possible to selectively mask a single LU from an individual initiator, although a single LU can be removed from the SCSI Target (Remove-iSCSIVirtualDiskTargetMapping). The access privilege can then be removed from an initiator at the target scope. |
Unmasking operation | Adding a mapping | Unmasking of a storage volume grants access to that SCSI LU to an initiator. In iSCSI Target Server design, it is not possible to selectively unmask a single LU from an individual initiator, although a single LU can be added to SCSI Target (Add-iSCSIVirtualDiskTargetMapping). The access privilege can then be granted to an initiator at the target scope. |
SCSI Protocol Controller (SPC) | SCSI Target | SCSI Protocol controller refers to the initiator view of the target. In Windows Server Storage Service implementation, this is logically equivalent to a masking set, which then iSCSI Target Server realizes as a SCSI Target, see New-IscsiServerTarget |
Snapshots | Snapshots | The terminology is the same on this one, but there are a couple of critical differences to keep in mind between the volsnap-based iSCSI virtual disk snapshots that you can create with a Checkpoint-IscsiVirtualDisk, versus the Diff VHD-based snapshot on the original VHD that you can create with a New-VirtualDiskSnapshot. The former is a read-only snapshot, whereas the latter is a writable snapshot. And be aware that you cannot manage a snapshot taken in one management approach (say, WMI) via the tools in the other approach (say, SMI-S). |
Storage Subsystem | iSCSI Target Server | This is a straightforward mapping for standalone iSCSI Target Servers where the iSCSI SMI-S provider implementation is simply an embedded SMI-S provider just for that target server. In the case of a clustered iSCSI Target Server however, the SMI-S provider at the client access point reports not only the storage subsystems (iSCSI Target Server resource groups) owned by that cluster node, but also any additional iSCSI Target Server resource groups owned by rest of the failover cluster nodes – reporting each as a storage subsystem. Put differently, the SMI-S provider then acts like an embedded provider for that cluster node, and as a proxy SMI-S provider for the rest of that cluster. |
Register SMI-S Provider on a Management Client
To register an SMI-S provider, you need to know the provider’s URI (the machine name for a standalone target, or the cluster resource group name in the case of a clustered iSCSI Target Server) and credentials for a user account that is in that target server’s local Administrators security group. In the following example, the SMI-S provider can be accessed on the machine “fsf-7809-09” and the user account with administrative privileges is “contoso\user1”. The Get-Credential cmdlet prompts for the password at run time (see the Get-Credential documentation for other, more scripting-friendly, albeit less secure, ways to accomplish the same).
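A minimal sketch of that registration follows; the exact ConnectionUri form that Register-SmisProvider accepts can vary by environment, so double-check with Get-Help Register-SmisProvider:

```powershell
# Credentials for an account in the target server's local Administrators group
$cred = Get-Credential contoso\user1

# Register the iSCSI Target Server SMI-S provider with the local
# Windows Standards-Based Storage Management Service; for a clustered target,
# use the cluster resource group name instead of the machine name
Register-SmisProvider -ConnectionUri fsf-7809-09 -Credential $cred
```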
Discover the Storage Objects
After registering the provider, you can update the storage provider cache to get an inventory of all the storage objects manageable through this SMI-S provider:
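For example, a full discovery (which can take a while on a large configuration) looks like this:

```powershell
# Build a complete inventory of the objects behind all registered providers
Update-StorageProviderCache -DiscoveryLevel Full
```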
You can then list the storage subsystems and related details. Note that although the following screen shots also show items related to Storage Spaces, those are unrelated to the iSCSI Target Server SMI-S provider; the iSCSI Target Server SMI-S provider items are highlighted in green.
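A sketch of that listing; the “iSCSITarget*” friendly-name pattern below is an assumption based on the pool names shown later in this post:

```powershell
# List every storage subsystem known to SM-API on this management client
Get-StorageSubSystem

# Narrow the list down to the iSCSI Target Server subsystem(s)
Get-StorageSubSystem | Where-Object FriendlyName -like "iSCSITarget*"
```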
You can inspect the available storage pools and filter by friendly name, as shown in the example. Notice that the hosting volumes, which you already know the iSCSI Target Server reports as storage pools, carry friendly names that include the respective drive letters on the target server. Also notice the Primordial storage pool, which effectively represents the entire capacity available on the iSCSI Target Server. Keep in mind, however, that you can create SCSI Logical Units only out of what SMI-S calls “concrete storage pools”, i.e. only from pools that have the ‘IsPrimordial’ attribute set to false in the following screen shot.
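A sketch of the pool inspection, again assuming the “iSCSITarget*” friendly-name pattern:

```powershell
# Storage pools reported by the iSCSI Target Server (one per hosting volume)
Get-StoragePool | Where-Object FriendlyName -like "iSCSITarget*"

# Only concrete pools (IsPrimordial is false) can be used to provision new LUs
Get-StoragePool | Where-Object { $_.FriendlyName -like "iSCSITarget*" -and -not $_.IsPrimordial }
```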
Create Storage Objects
The first operation we show carves a new logical unit out of an existing concrete storage pool. You can use one of two cmdlets to create a new virtual disk: New-VirtualDisk and New-StorageSubsystemVirtualDisk. Technically, the iSCSI Target Server SMI-S provider works fine with either cmdlet and we will show examples of both, although you probably want to use New-VirtualDisk so you can explicitly select the storage pool to provision the storage volume from. New-StorageSubsystemVirtualDisk, in contrast, auto-selects the storage pool to provision the capacity from.
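Here is a sketch with New-VirtualDisk; the pool name matches the one used throughout this post, and “LUN1” and the 10 GB size are placeholders (depending on the provider you may need additional parameters):

```powershell
# Provision a 10 GB virtual disk (SCSI LU) from a specific concrete storage pool
New-VirtualDisk -StoragePoolFriendlyName "iSCSITarget: FSF-7809-09: C:" `
                -FriendlyName "LUN1" -Size 10GB
```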
Or you can provision the largest possible virtual disk by using the “-UseMaximumSize” parameter as shown:
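For instance (same placeholder names as above):

```powershell
# Let the provider size the virtual disk to the largest capacity the pool can offer
New-VirtualDisk -StoragePoolFriendlyName "iSCSITarget: FSF-7809-09: C:" `
                -FriendlyName "LUN2" -UseMaximumSize
```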
If you prefer to use the New-StorageSubsystemVirtualDisk cmdlet, you need to specify the storage subsystem parameter; in the example below, you can see it auto-selected a storage pool in the selected subsystem, the “iSCSITarget: FSF-7809-09: C:” pool.
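A sketch of that call; the subsystem friendly name “iSCSITarget: FSF-7809-09” is an assumption, so verify it against your own Get-StorageSubSystem output:

```powershell
# Create a virtual disk at the subsystem scope; the provider auto-selects the concrete pool
New-StorageSubsystemVirtualDisk -StorageSubSystemFriendlyName "iSCSITarget: FSF-7809-09" `
                                -FriendlyName "LUN3" -Size 10GB
```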
With the New-VirtualDiskSnapshot cmdlet, you can take a snapshot of a virtual disk.
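A minimal sketch; I am assuming here that -FriendlyName identifies the source virtual disk, so confirm the parameter set with Get-Help New-VirtualDiskSnapshot:

```powershell
# Take a (writable, Diff VHD-based) snapshot of the virtual disk named "LUN1"
New-VirtualDiskSnapshot -FriendlyName "LUN1"
```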
To create a masking set for a storage subsystem, you can use New-MaskingSet. You must include at least one initiator in the new masking set, and you can also map one or more virtual disks to it. An iSCSI initiator is identified by its IQN, and the virtual disks by their names. The script below creates a new masking set and adds one initiator and two virtual disks; we then query the new masking set to confirm the details.
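The sketch below uses placeholder names throughout (masking set, virtual disks, and the initiator IQN), and assumes -VirtualDiskNames expects the disk names as reported by the provider:

```powershell
# Create a masking set with two virtual disks and one initiator
New-MaskingSet -StorageSubSystemFriendlyName "iSCSITarget: FSF-7809-09" `
               -FriendlyName "MaskingSet1" `
               -VirtualDiskNames "LUN1","LUN2" `
               -InitiatorAddresses "iqn.1991-05.com.microsoft:appserver1.contoso.com"

# Confirm the details of the new masking set
Get-MaskingSet -FriendlyName "MaskingSet1"
```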
Modify and Remove the Storage Objects
With the Resize-VirtualDisk cmdlet, you can expand an existing virtual disk as shown in the following example. Note, however, that you will still need to extend the partition and the volume to make the additional capacity usable.
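For example (placeholder name and size):

```powershell
# Expand "LUN1" to 20 GB; the partition and volume inside the LU
# still need to be extended from the initiator side afterwards
Resize-VirtualDisk -FriendlyName "LUN1" -Size 20GB
```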
You can also modify the masking set that you’ve just created by adding more virtual disks or more initiators to it, as in the following examples. Note, however, that you do not want to share a virtual disk with a file system across multiple initiators unless the initiators (hosts) are clustered; otherwise the setup will inevitably cause data corruption sooner or later!
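A sketch of both modifications, using the placeholder names from the earlier examples; verify the exact parameter names with Get-Help:

```powershell
# Map one more virtual disk into the existing masking set
Add-VirtualDiskToMaskingSet -MaskingSetFriendlyName "MaskingSet1" -VirtualDiskNames "LUN3"

# Grant one more initiator access through the same masking set
Add-InitiatorIdToMaskingSet -MaskingSetFriendlyName "MaskingSet1" `
    -InitiatorIds "iqn.1991-05.com.microsoft:appserver2.contoso.com"
```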
You can of course also remove the masking set and virtual disks that you just created, as we illustrate in the following examples. Note the order of operations: you must first remove a virtual disk from any masking sets (or remove the masking sets themselves), and only then delete the virtual disk.
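A sketch of the teardown, with the same placeholder names:

```powershell
# Remove the masking set first (this unmaps the virtual disks it contained)...
Remove-MaskingSet -FriendlyName "MaskingSet1"

# ...and only then delete the virtual disks themselves
Remove-VirtualDisk -FriendlyName "LUN1"
Remove-VirtualDisk -FriendlyName "LUN2"
```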
Finally, when you no longer need to manage the iSCSI Target Server from this management client, you can unregister the SMI-S provider as shown in the following example.
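For example, assuming Unregister-SmisProvider accepts the same ConnectionUri that was used at registration time:

```powershell
# Stop managing this iSCSI Target Server from this management client
Unregister-SmisProvider -ConnectionUri fsf-7809-09
```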
I want to acknowledge my colleague Juan Tian, who helped me with all the preceding Windows PowerShell examples.
Finally, I sincerely hope that you now have the tools you need to work with an iSCSI Target Server SMI-S provider. Give it a try and let me know how it’s working for you!