Hello, Claus here again. Enjoying the great weather in Washington and the view from my office, I thought I would share some notes about standing up Storage Spaces Direct using virtual machines in Azure and creating a shared-nothing Scale-Out File Server. This scenario is not supported for production workloads, but it might be useful for dev/test.
Using the Azure portal, I:
- Created four virtual machines
  - 1x DC named cj-dc
  - 3x storage nodes named cj-vm1, cj-vm2 and cj-vm3
- Created and attached a 128GB premium data disk to each of the storage nodes
I used DS1 virtual machines and the Windows Server 2016 TP5 template.
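I did all of this in the portal, but the data disk step could just as well be scripted. Here is a rough sketch using the current Az PowerShell module for one of the nodes; the resource group name cj-rg, the location and the disk name are placeholders for illustration, not what I actually used:
# Create an empty 128 GB premium managed disk and attach it to cj-vm1 (all names are illustrative)
$diskConfig = New-AzDiskConfig -Location "westus2" -CreateOption Empty -DiskSizeGB 128 -SkuName Premium_LRS
$disk = New-AzDisk -ResourceGroupName "cj-rg" -DiskName "cj-vm1-data" -Disk $diskConfig
$vm = Get-AzVM -ResourceGroupName "cj-rg" -Name "cj-vm1"
$vm = Add-AzVMDataDisk -VM $vm -Name "cj-vm1-data" -CreateOption Attach -ManagedDiskId $disk.Id -Lun 0
Update-AzVM -ResourceGroupName "cj-rg" -VM $vm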
Domain Controller
I promoted the domain controller with domain name contoso.com. Once the domain controller setup finished, I changed the Azure virtual network configuration to use ‘Custom DNS’, with the IP address of the domain controller (see picture below).
I restarted the virtual machines to pick up this change. With the DNS server configured I joined all 3 virtual machines to the domain.
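The promotion and the domain joins can also be scripted. A minimal sketch: run the first two lines on cj-dc (Install-ADDSForest prompts for the DSRM password and restarts the machine), and the last line on each storage node with domain credentials (the account shown is just a placeholder):
# On cj-dc: install AD DS and create the contoso.com forest
Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
Install-ADDSForest -DomainName contoso.com
# On each storage node: join the domain and restart
Add-Computer -DomainName contoso.com -Credential contoso\Administrator -Restart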
Failover Clustering
Next I needed to form a failover cluster. I ran the following to install the Failover Clustering feature on all the nodes:
$nodes = ("CJ-VM1", "CJ-VM2", "CJ-VM3")
icm $nodes {Install-WindowsFeature Failover-Clustering -IncludeAllSubFeature -IncludeManagementTools}
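Before forming the cluster it is worth running cluster validation across the nodes; it is not required for dev/test, but it catches obvious configuration problems early:
Test-Cluster -Node $nodes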
With the feature installed, I formed a cluster:
New-Cluster -Name CJ-CLU -Node $nodes -NoStorage -StaticAddress 10.0.0.10
Storage Spaces Direct
With a functioning cluster, I looked at the attached disks:
Get-PhysicalDisk | ? CanPool -EQ 1 | FT FriendlyName, BusType, MediaType, Size

FriendlyName      BusType MediaType   Size
------------      ------- ---------   ----
Msft Virtual Disk SAS     UnSpecified 137438953472
Msft Virtual Disk SAS     UnSpecified 137438953472
Msft Virtual Disk SAS     UnSpecified 137438953472
Storage Spaces Direct uses BusType and MediaType to automatically configure caching, storage pool and storage tiering. In Azure virtual machines (as in Hyper-V virtual machines), the media type is reported as unspecified. To work around this, I enabled Storage Spaces Direct with some additional parameters:
Enable-ClusterS2D -CacheMode Disabled -AutoConfig:0 -SkipEligibilityChecks
I disabled caching since I don’t have any caching devices, and I turned off automatic configuration and skipped the eligibility checks to work around the media type issue. This means the storage pool has to be configured manually. With Storage Spaces Direct enabled, I created the storage pool:
New-StoragePool -StorageSubSystemFriendlyName *Cluster* -FriendlyName S2D -ProvisioningTypeDefault Fixed -PhysicalDisk (Get-PhysicalDisk | ? CanPool -eq $true)
Once the storage pool creation completed, I overrode the media type:
Get-StorageSubsystem *cluster* | Get-PhysicalDisk | Where MediaType -eq "UnSpecified" | Set-PhysicalDisk -MediaType HDD
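Re-running the earlier physical disk query is a quick way to confirm the override took effect; the disks should now report HDD:
Get-StorageSubsystem *cluster* | Get-PhysicalDisk | FT FriendlyName, BusType, MediaType, Size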
With the media type set, I created a volume:
New-Volume -StoragePoolFriendlyName S2D* -FriendlyName VDisk01 -FileSystem CSVFS_REFS -Size 20GB
New-Volume automates the volume creation process, including formatting the volume, adding it to the cluster and making it a Cluster Shared Volume (CSV):
Get-ClusterSharedVolume

Name                           State  Node
----                           -----  ----
Cluster Virtual Disk (VDisk01) Online cj-vm2
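If you are curious how the volume was laid out on the pool, Get-VirtualDisk shows the resiliency setting, number of data copies and footprint:
Get-VirtualDisk -FriendlyName VDisk01 | FT FriendlyName, ResiliencySettingName, NumberOfDataCopies, Size, FootprintOnPool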
Scale-Out File Server
With the volume in place, I installed the file server role and created a Scale-Out File Server:
icm $nodes {Install-WindowsFeature FS-FileServer}
Add-ClusterScaleOutFileServerRole -Name cj-sofs
Once the Scale-Out File Server was created, I created a folder and a share:
New-Item -Path C:\ClusterStorage\Volume1\Data -ItemType Directory
New-SmbShare -Name Share1 -Path C:\ClusterStorage\Volume1\Data -FullAccess contoso\clausjor
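A quick check on one of the cluster nodes confirms the share is scoped to the Scale-Out File Server and has the expected access:
Get-SmbShare -Name Share1 | FT Name, ScopeName, Path
Get-SmbShareAccess -Name Share1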
Verifying
On the domain controller I verified access by browsing to \\cj-sofs\share1 and storing a few files.
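The same check can be scripted, for example by copying a file to the share and listing the contents (notepad.exe is just a convenient file to copy):
Copy-Item C:\Windows\System32\notepad.exe \\cj-sofs\Share1\
Get-ChildItem \\cj-sofs\Share1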
Conclusion
I hope I provided a good overview of how to stand up a Scale-Out File Server using shared-nothing storage with Storage Spaces Direct in a set of Azure virtual machines. We are working to make the experience simpler, so the media type workaround will no longer be needed. Let me know what you think.
Until next time
Claus