
What’s new in Storage Spaces Direct Technical Preview 5


Hello, Claus here again. We’ve got some more goodies for you in Storage Spaces Direct with the Windows Server Technical Preview 5 release. The key updates are:

  • Automatic Configuration
  • Managing Storage Spaces Direct using Virtual Machine Manager
  • Chassis and Rack Fault Tolerance
  • Deployment with 3 servers
  • Deployments with NVMe, SSD and HDD

Automatic Configuration

To make deployment simpler, we have integrated storage pool and storage tier creation into Enable-ClusterS2D (Enable-ClusterStorageSpacesDirect). Storage Spaces Direct will automatically create a single storage pool with all eligible disk devices. Storage Spaces Direct will also automatically create storage tiers reflecting the storage configuration of the system.

If Storage Spaces Direct finds disk devices that are not eligible, it will list these devices along with the description of why they were not eligible. Examples of ineligible disk devices could be devices without proper media type (must be SSD or HDD), without proper bus type (must be SATA, SAS or NVMe), or disk devices with existing partitions.
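To check ahead of time which disks Storage Spaces Direct can claim, the standard Storage module cmdlets can help; a minimal sketch (run on one of the servers, where the CannotPoolReason column explains why a device is ineligible):

Get-PhysicalDisk | Sort-Object CanPool | FT FriendlyName, BusType, MediaType, CanPool, CannotPoolReason -AutoSize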

Managing Storage Spaces Direct using Virtual Machine Manager

In Technical Preview 5, we are introducing new simple workflows within System Center Virtual Machine Manager (VMM) to deploy new clusters or bring existing clusters with Storage Spaces Direct enabled under its management.

Let’s take the example of creating a new cluster with the Storage Spaces Direct capability enabled. This can be achieved by simply checking the “Enable Storage Spaces Direct” option as part of the Create Hyper-V Cluster wizard!

[Screenshot: VMM Create Hyper-V Cluster wizard with the "Enable Storage Spaces Direct" option]

Under the covers, VMM automates installing the relevant Windows Server roles, running cluster validation, installing and configuring Failover Clustering, and of course enabling the storage features. Once this is complete, the storage resources are available, and you can create a storage pool, carve out volumes, and deploy VMs on the cluster.

If you want to use the cluster as a Scale-Out File Server, for a disaggregated configuration, you can use the already existing workflows.

Note: In Technical Preview 5, VMM does not support automatic creation of storage pool as part of the Enable Storage Spaces Direct workflow.

Chassis and Rack Fault Tolerance

By default, Storage Spaces Direct is fault tolerant to server failures. Windows Server Technical Preview 5 makes it possible to also make Storage Spaces Direct fault tolerant to chassis or rack failures. Once you have configured the fault domains and their server membership, Storage Spaces Direct uses this information during initial data placement, data rebalancing and data repair to ensure that the data is resilient to chassis or rack failures.

The chassis fault domain is really useful when you have multiple servers inside a chassis with shared infrastructure like power supplies or networking. When you have multiple chassis, you can make Storage Spaces Direct resilient to an entire chassis failing by configuring chassis fault domains. Similarly, the rack fault domain is useful to protect against power loss of a rack or failure of a top-of-rack switch.
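One way to describe the topology before enabling Storage Spaces Direct is with the cluster fault domain cmdlets; a minimal sketch, with hypothetical rack and node names (the cmdlet set may evolve between previews):

New-ClusterFaultDomain -Name "Rack01" -Type Rack
New-ClusterFaultDomain -Name "Rack02" -Type Rack
Set-ClusterFaultDomain -Name "Node01" -Parent "Rack01"
Set-ClusterFaultDomain -Name "Node02" -Parent "Rack02"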

Deployments with 3 Servers

Starting with Windows Server 2016 Technical Preview 5, Storage Spaces Direct can be used in smaller deployments with only 3 servers.

Deployments with fewer than four servers support only mirror resiliency. Parity resiliency and multi-resiliency are not possible, since these resiliency types require a minimum of four servers. With a 2-copy mirror the deployment is resilient to 1 node or 1 disk failure, and with a 3-copy mirror it is resilient to 1 node or 2 disk failures.
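As a rough sketch, creating a 3-copy mirror volume on such a deployment might look like the following (pool wildcard and names are illustrative, and parameters may differ slightly by build):

New-Volume -StoragePoolFriendlyName S2D* -FriendlyName Mirror3 -FileSystem CSVFS_REFS -ResiliencySettingName Mirror -PhysicalDiskRedundancy 2 -Size 1TB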

 

[Diagram: 3-server Storage Spaces Direct deployment]

Deployments with NVMe, SSD and HDD

Previously, Storage Spaces Direct supported SSD + HDD and NVMe + SSD storage configurations. With Technical Preview 5 it is now possible to use three tiers of physical storage: NVMe, SSD and HDD. In this configuration the NVMe devices are used for caching, and both the SSD and HDD devices are used for capacity. Storage Spaces Direct will automatically create a performance storage tier with mirror resiliency and SSD media type, as well as a capacity storage tier with parity resiliency and HDD media type. You can create volumes purely from the performance tier for best performance, purely from the capacity tier for best efficiency, or from both tiers for balanced performance and capacity.
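For example, carving volumes out of those automatically created tiers might look like this sketch (names and sizes are illustrative; a later post in this series walks through the same pattern in detail):

New-Volume -StoragePoolFriendlyName S2D* -FriendlyName Hot -FileSystem CSVFS_REFS -StorageTierFriendlyName Performance -StorageTierSizes 500GB
New-Volume -StoragePoolFriendlyName S2D* -FriendlyName Mixed -FileSystem CSVFS_REFS -StorageTierFriendlyName Performance, Capacity -StorageTierSizes 100GB, 900GB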

 

[Diagram: three tiers of storage – NVMe cache with SSD and HDD capacity]

Storage Spaces Direct also enables all-SSD storage configurations, using SSDs with good write endurance as caching devices and read-optimized SSDs as capacity devices.

With these enhancements, Storage Spaces Direct can be used in the following storage configurations:

  • SSD + HDD
  • NVMe + HDD
  • NVMe + SSD
  • SSD + SSD
  • NVMe + SSD + HDD

 

I am really excited about these new capabilities for Storage Spaces Direct in Windows Server 2016 Technical Preview 5. I hope you are too. Let me know what you think.

/Claus Joergensen


NVMe, SSD and HDD storage configurations in Storage Spaces Direct TP5


Hello, Claus here again. This time we’ll take a look at the work we’ve done to support systems with three tiers of storage (NVMe + SSD + HDD) in Storage Spaces Direct (S2D).

If you’re familiar with Storage Spaces in Windows Server 2012/R2, it’s important to understand that both caching and storage tiering work very differently in Storage Spaces Direct. The cache is independent of the storage pool and virtual disks (volumes), and the system manages it automatically. You don’t specify any cache as part of creating a volume. Storage tiering is now real-time and governs both the way data is written (all storage configurations) and the media the data is written to (NVMe + SSD + HDD systems only).

SSD + HDD

To help explain how Storage Spaces Direct works in an NVMe + SSD + HDD storage configuration, let’s first take a look at the common SSD + HDD storage configuration. In this storage configuration, Storage Spaces Direct will automatically use the SSD devices for cache and the HDD devices for capacity (see diagram below).


Figure 1 – Storage Spaces Direct with SSD and HDD

In this storage configuration, Storage Spaces Direct automatically creates two storage tiers, respectively named performance and capacity. The difference is the way S2D writes data, with the performance tier optimized for IO performance (hot data) and the capacity tier optimized for storage efficiency (cold data).

NVMe + SSD + HDD

In an NVMe + SSD + HDD storage configuration, Storage Spaces Direct automatically uses the NVMe devices for cache and both the SSD and HDD devices for capacity (see diagram below).


Figure 2 – Storage Spaces Direct with NVMe, SSD and HDD

In this storage configuration, Storage Spaces Direct will also automatically create two storage tiers, respectively named performance and capacity. However, in this configuration the difference is twofold: a) the way data is written, and b) the media type (SSD or HDD). The performance tier is optimized for hot data and stored on SSD devices. The capacity tier is optimized for cold data and stored on HDD devices.

Exploring an NVMe + SSD + HDD system

Here I have four servers running Windows Server 2016 Technical Preview 5, with Storage Spaces Direct already enabled (see this blog post for more detail).

Let’s find out what disk devices, by bus type, I have in this cluster:

Get-StorageSubSystem Clu* | Get-PhysicalDisk | Group-Object BusType | FT Count, Name
Count Name
----- ----
   64 SAS
    8 NVMe

The above shows that I have 8 NVMe devices (2 per node), so Storage Spaces Direct automatically uses these devices for cache. What about the remaining disks?

Get-StorageSubSystem Clu* | Get-PhysicalDisk | ? BusType -eq "SAS" | Group-Object MediaType | FT Count, Name
Count Name
----- ----
   48 HDD
   16 SSD

The above shows that I have 16 SSD devices and 48 HDD devices, and Storage Spaces Direct automatically uses these devices for capacity.

All in all, I have a system with NVMe devices used for cache, and a combination of SSD and HDD devices used for data. So what storage tiers did Storage Spaces Direct automatically create on this configuration?

Get-StorageTier | FT FriendlyName, ResiliencySettingName, MediaType, PhysicalDiskRedundancy -autosize
FriendlyName ResiliencySettingName MediaType PhysicalDiskRedundancy
------------ --------------------- --------- ----------------------
Capacity     Parity                HDD                            2
Performance  Mirror                SSD                            2

I have two tiers: a performance tier with mirror resiliency (hot data) on SSD devices, and a capacity tier with parity resiliency (cold data) on HDDs. Both tiers tolerate two failures (disk or node).

This provides much more flexibility. You can create volumes consisting only of the performance tier, which provide the highest IO performance and are backed by fast flash storage (cheapest IOPS). You can create volumes consisting only of the capacity tier, which provide the best storage efficiency and are backed by hard drives (cheapest capacity). And you can create volumes consisting of both performance and capacity, which automatically keep the hottest data on flash storage and the coldest data on hard drive storage.

Volume types

Let’s go ahead and create a volume using just the performance tier, a volume using just the capacity tier, and a volume using both performance and capacity tier. I will create each volume with 1,000GB of usable capacity:

New-Volume -StoragePoolFriendlyName S2D* -FriendlyName SQLServer -FileSystem CSVFS_REFS -StorageTierFriendlyName Performance -StorageTierSizes 1000GB
New-Volume -StoragePoolFriendlyName S2D* -FriendlyName Archive -FileSystem CSVFS_REFS -StorageTierFriendlyName Capacity -StorageTierSizes 1000GB
New-Volume -StoragePoolFriendlyName S2D* -FriendlyName VM -FileSystem CSVFS_REFS -StorageTierFriendlyName Performance, Capacity -StorageTierSizes 100GB, 900GB

Let’s take a look at how much raw storage each of these volumes consumed:

Get-VirtualDisk | FT FriendlyName, Size, FootprintOnPool -autosize
FriendlyName          Size FootprintOnPool
------------          ---- ---------------
SQLServer    1073741824000   3221225472000
Archive      1073741824000   2148288954368
VM           1073741824000   2255663136768

Even though they’re all the same size, they consume different amounts of raw storage. Let’s calculate the storage efficiency for each of the volumes:

Get-VirtualDisk | ForEach {$_.FriendlyName + " " + [Math]::Round(($_.Size/$_.FootprintOnPool),2)}
SQLServer 0.33
Archive 0.5
VM 0.48

The storage efficiency for the “SQLServer” volume is 33%, which makes sense since it is created from the performance tier, which is 3-copy mirror on SSD. The storage efficiency for the “Archive” volume is 50%, which makes sense since it is created from the capacity tier, which is LRC erasure coding on HDD, tolerant to two failures. I will dig further into LRC erasure coding in a future blog post, including an explanation of storage efficiency with various layouts. Finally, the storage efficiency for the “VM” volume is 48%, which is the resulting efficiency of 100GB of performance tier (33% efficiency) and 900GB of capacity tier (50% efficiency).
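A quick back-of-the-envelope check of that blended number (sizes in GB, assuming 3x raw consumption for the mirror portion and 2x for the parity portion):

$footprint = (100 * 3) + (900 * 2)   # 300 GB + 1800 GB = 2100 GB of raw capacity
[Math]::Round(1000 / $footprint, 2)  # returns 0.48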

Finally, let’s take a look at the storage tiers that make up each of the volumes:

Get-VirtualDisk | Get-StorageTier | FT FriendlyName, ResiliencySettingName, MediaType, Size -autosize
FriendlyName          ResiliencySettingName MediaType          Size
------------          --------------------- ---------          ----
Archive_Capacity      Parity                HDD       1073741824000
SQLServer_Performance Mirror                SSD       1073741824000
VM_Performance        Mirror                SSD        107374182400
VM_Capacity           Parity                HDD        966367641600

You can see that the FriendlyName is a combination of the volume’s friendly name and the friendly name of the storage tier that contributed storage.

Choosing volume types

You may wonder why there are different volume types and when to use what type. This table should help:

                Mirror            Parity            Multi-Resilient
Optimized for   Performance       Efficiency        Balanced performance and efficiency
Use case        All data is hot   All data is cold  Mix of hot and cold data
Efficiency      Least (33%)       Most (50+%)       Medium (~50%)
File System     ReFS or NTFS      ReFS or NTFS      ReFS only
Minimum nodes   3+                4+                4+

 

You should use mirror volumes for the absolute best storage performance and when all the data on the volume is hot, e.g. SQL Server OLTP. You should use parity volumes for the absolute best storage efficiency and when all the data on the volume is cold, e.g. backup. You should use multi-resilient volumes for balanced performance and efficiency and when you have a mix of hot and cold data on the same volume, e.g. virtual machines. ReFS Real-Time Tiering will automatically tier the data inline between the mirror and parity portions of the multi-resilient volume for the best read/write performance of the hot data and the best storage efficiency of the cold data.

I am excited about the work we’ve done to support NVMe + SSD + HDD storage configurations. Let me know what you think.

– Claus Joergensen

Automatic Configuration in Storage Spaces Direct TP5


Hello, Claus here again. This time we’ll take a look at the work done around simplifying the deployment of Storage Spaces Direct (S2D).

If you have been following along, you know that once you form a cluster you would 1) enable S2D, 2) create a storage pool, 3) define storage tiers, and 4) create volumes for your virtual machines or file shares. In Windows Server 2016 Technical Preview 5, enabling S2D will automatically create the storage pool and define the storage tiers for you.

In addition, we have enhanced our ability to detect the storage available, and now automatically configure cache in more storage configurations, including:

  • SATA SSD + SATA HDD
  • NVMe SSD + SATA HDD
  • NVMe SSD + SATA SSD (all-flash)
  • NVMe SSD + SATA SSD + SATA HDD (3 tiers of storage)

Let’s take a look.

I have four servers running Windows Server 2016 Technical Preview 5 with the Failover Clustering feature installed. The first step is to form the cluster:

New-Cluster -Name S2D-CLU -Node "43B20-38", "43B20-36", "43B20-34", "43B20-32" -NoStorage

The next step is to enable Storage Spaces Direct:

Enable-ClusterS2D

Once the command completes, you can examine the results. First, let’s look at the automatically created storage pool:

Get-StoragePool S2D* | FT FriendlyName, FaultDomainAwarenessDefault, OperationalStatus, HealthStatus -autosize
FriendlyName   FaultDomainAwarenessDefault OperationalStatus HealthStatus
------------   --------------------------- ----------------- ------------
S2D on S2D-CLU StorageScaleUnit            OK                Healthy

The pool naming logic is [S2D on <ClusterName>]. In this particular deployment, the pool name becomes “S2D on S2D-CLU”. Let’s also take a look at the automatically-created storage tiers:

Get-StorageTier | FT FriendlyName, ResiliencySettingName, MediaType, PhysicalDiskRedundancy -autosize
FriendlyName ResiliencySettingName MediaType PhysicalDiskRedundancy
------------ --------------------- --------- ----------------------
Capacity     Parity                HDD                            2
Performance  Mirror                SSD                            2

Storage Spaces Direct automatically created two storage tiers: a performance tier (mirror) and a capacity tier (parity). The storage tier details depend on the storage devices in the system, and thus vary from system to system. All you have to do now is create the volumes you need for your virtual machines or file shares. With the two storage tiers automatically created by Storage Spaces Direct, you can create a) mirror volumes, b) parity volumes and c) multi-resilient volumes. Let’s create a multi-resilient volume:

New-Volume -StoragePoolFriendlyName S2D* -FriendlyName MRV -FileSystem CSVFS_REFS -StorageTierFriendlyName Performance, Capacity -StorageTierSizes 100GB, 900GB

I am excited about the work we have done to simplify deploying Storage Spaces Direct. It is now merely 1) enable Storage Spaces Direct, and 2) start creating volumes for your virtual machines or file shares. Let me know what you think.

– Claus Joergensen

Building Work Folders Sync Reports


Hi, I’m Manish Duggal, software engineer at Microsoft. Work Folders enables users to securely access their files on a Windows Server from Windows, iOS and Android devices. Enterprise IT admins often want to understand Work Folders usage, categorized by active users and type of devices syncing to Work Folders servers.

Overview

In this example, I created three sync shares, for the Finance, HR and Engineering departments. Each sync share is assigned a unique security group: FinanceSG, HRSG and EngineeringSG.

The script below enumerates the users associated with each sync share, and outputs the user sync status to a CSV file, so that you can build charts like:

  1. The device types used for Work Folders.
  2. Work Folders devices owned per user.

The script uses cmdlets from the following PowerShell modules:

  1. Work Folders module (SyncShare)
    1. Get-SyncShare cmdlet
    2. Get-SyncUserStatus cmdlet
  2. Active Directory module (ActiveDirectory)
    1. Get-ADGroupMember cmdlet

The general flow is:

  1. Enumerate all the sync shares on the Work Folders server
  2. Enumerate the users in the security groups associated with each sync share
  3. Output user sync status to a CSV file

Helper Functions

The PowerShell script uses three helper functions, plus an array declared to collect user objects:

# get domain name from domain\group (or domain\user) format
function GetDomainName([string] $GroupUserName)
{
    $pos = $GroupUserName.IndexOf("\");
    if($pos -ge 0)
    {
        $DomainName = $GroupUserName.Substring(0,$pos);
    }
    return $DomainName;
}
# get group (or user) only detail from domain\group format
function GetGroupUserName([string] $GroupUserName)
{
    $pos = $GroupUserName.IndexOf("\");
    if($pos -ge 0)
    {
        $GroupUserName = $GroupUserName.Substring($pos+1);
    }
    return $GroupUserName;
}
# Object with User name, Sync share name and Device details
function SetUserDetail([string] $userName, [string] $syncShareName, [string] $deviceName, [string] $deviceOs)
{
    #set the object for collection purpose
    $userObj = New-Object psobject
    Add-Member -InputObject $userObj -MemberType NoteProperty -Name UserName -Value ""
    Add-Member -InputObject $userObj -MemberType NoteProperty -Name SyncShareName -Value ""
    Add-Member -InputObject $userObj -MemberType NoteProperty -Name DeviceName -Value ""
    Add-Member -InputObject $userObj -MemberType NoteProperty -Name DeviceOs -Value ""
    $userObj.UserName = $userName
    $userObj.SyncShareName = $syncShareName
    $userObj.DeviceName = $deviceName
    $userObj.DeviceOs = $deviceOs
    return $userObj
}
#collection
$userCollection=@()

Enumerate the Sync shares on a Work Folders server

To enumerate the available sync shares, run the Get-SyncShare cmdlet:

$syncShares = Get-SyncShare

The $syncShares variable is a collection of sync share objects, such as Finance, HR and Engineering.

Getting the users for each sync share

To find the users associated with each sync share, first retrieve the security groups associated with the sync share, and then get all users in each of those security groups:

foreach ($syncShare in $syncShares)
{
    $syncShareName = $syncShare.Name
    $syncShareSGs = $syncShare.User
    foreach ($syncShareSG in $syncShareSGs)
    {
        $domainName = GetDomainName $syncShareSG
        $sgName = GetGroupUserName $syncShareSG
        $sgUsers = Get-ADGroupMember -Identity $sgName | Select SamAccountName
        # find the Work Folders devices syncing for each user, as per the logic in the next section
    }
}

With every iteration, $syncShareSGs holds the security group(s) associated with the department’s sync share; for the Finance department, it is FinanceSG. $sgUsers is the list of SamAccountName values for the members of that security group.

Enumerate the devices synced with the Work Folders server

To find the devices each user syncs with the Work Folders server, run the Get-SyncUserStatus cmdlet for each user and add the needed details to the user object array:

foreach ($sgUser in $sgUsers)
{
    # get the user detail in domain\user format
    $domainUser = [System.String]::Join("\", $domainName, $sgUser.SamAccountName)
    # invoke Get-SyncUserStatus
    $syncUserStatusList = Get-SyncUserStatus -User $domainUser -SyncShare $syncShareName
    # a user may have more than one device
    foreach ($syncUserStatus in $syncUserStatusList)
    {
        # set the user detail and add it to the collection
        $resultObj = SetUserDetail $domainUser $syncShareName $syncUserStatus.DeviceName $syncUserStatus.DeviceOs
        $userCollection += $resultObj
    }
}

With every iteration, $syncUserStatusList is the collection of devices owned by a user, and $syncUserStatus is the detail for one device from that collection. Each device’s details are added to the array of user objects.

Export the user and device details to CSV

$userCollection | Export-Csv -Path Report.csv -Force

$userCollection contains all the users associated with the Finance, HR and Engineering sync shares and the devices syncing with the Work Folders server. That collection makes it easy to generate a simple CSV report. Here is a sample report generated for the above sync shares:

UserName              SyncShareName  DeviceName      DeviceOs
Contoso\EnggUser1     Engineering    User1-Desktop   Windows 6.3
Contoso\EnggUser1     Engineering    User1Phone      iOS 8.3
Contoso\EnggUser1     Engineering    User1-Laptop    Windows 10.0
Contoso\FinanceUser1  Finance        Finance-Main1   Windows 10.0
Contoso\FinanceUser1  Finance        Finance-Branch  Windows 6.3
Contoso\HRUser2       HR             HR-US           Windows 10.0
Contoso\HRUser2       HR             iPad-Mini1      iOS 8.0
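The same collection can also be summarized directly in PowerShell before it ever reaches Excel; for example:

# devices by operating system (device types used for Work Folders)
$userCollection | Group-Object DeviceOs | Sort-Object Count -Descending | FT Count, Name
# devices per user
$userCollection | Group-Object UserName | FT Count, Name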

Summary

You just learned how easy it is to build a simple tabular report for understanding Work Folders usage trends in an enterprise. This data can be taken further to build interesting graphs, such as active usage across the enterprise or per department sync share, as well as weekly and monthly trends for Work Folders adoption, thanks to the graphing support in Microsoft Excel.

Storage Spaces Direct in Azure (TP5)


Hello, Claus here again. Enjoying the great weather in Washington and the view from my office, I thought I would share some notes about standing up Storage Spaces Direct using virtual machines in Azure and creating a shared-nothing Scale-Out File Server. This scenario is not supported for production workloads, but it might be useful for dev/test.


Using the Azure portal, I:

  • Created four virtual machines
    • 1x DC named cj-dc
    • 3x storage nodes named cj-vm1, cj-vm2 and cj-vm3
  • Created and attached a 128GB premium data disk to each of the storage nodes

I used DS1 virtual machines and the Windows Server 2016 TP5 template.

Domain Controller

I promoted the domain controller with domain name contoso.com. Once the domain controller setup finished, I changed the Azure virtual network configuration to use ‘Custom DNS’, with the IP address of the domain controller (see picture below).

[Screenshot: Azure virtual network configured with Custom DNS pointing to the domain controller’s IP address]

 

 

I restarted the virtual machines to pick up this change. With the DNS server configured, I joined all 3 virtual machines to the domain.
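For reference, a rough PowerShell equivalent of those manual steps looks like this sketch (the first two commands run on cj-dc and prompt for the DSRM password; the last runs on each storage node and prompts for domain credentials):

Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
Install-ADDSForest -DomainName contoso.com
Add-Computer -DomainName contoso.com -Credential (Get-Credential) -Restart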

Failover Clustering

Next I needed to form a failover cluster. I ran the following to install the Failover Clustering feature on all the nodes:

 

$nodes = ("CJ-VM1", "CJ-VM2", "CJ-VM3")
icm $nodes {Install-WindowsFeature Failover-Clustering -IncludeAllSubFeature -IncludeManagementTools}

With the feature installed, I formed a cluster:

New-Cluster -Name CJ-CLU -Node $nodes –NoStorage –StaticAddress 10.0.0.10

Storage Spaces Direct

With a functioning cluster, I looked at the attached disks:

Get-PhysicalDisk | ? CanPool -EQ 1 | FT FriendlyName, BusType, MediaType, Size
FriendlyName      BusType MediaType            Size
------------      ------- ---------            ----
Msft Virtual Disk SAS     UnSpecified  137438953472
Msft Virtual Disk SAS     UnSpecified  137438953472
Msft Virtual Disk SAS     UnSpecified  137438953472

Storage Spaces Direct uses BusType and MediaType to automatically configure caching, storage pool and storage tiering. In Azure virtual machines (as in Hyper-V virtual machines), the media type is reported as unspecified. To work around this, I enabled Storage Spaces Direct with some additional parameters:

Enable-ClusterS2D -CacheMode Disabled -AutoConfig:0 -SkipEligibilityChecks

I disabled caching since I don’t have any caching devices. I turned off automatic configuration and I skipped eligibility checks, both to work around the media type issue. This meant manually configuring the storage pool. With Storage Spaces Direct enabled, I manually created the storage pool:

New-StoragePool -StorageSubSystemFriendlyName *Cluster* -FriendlyName S2D -ProvisioningTypeDefault Fixed -PhysicalDisk (Get-PhysicalDisk | ? CanPool -eq $true)

Once the storage pool creation completed, I overrode the media type:

Get-StorageSubsystem *cluster* | Get-PhysicalDisk | Where MediaType -eq "UnSpecified" | Set-PhysicalDisk -MediaType HDD

With the media type set, I created a volume:

 New-Volume -StoragePoolFriendlyName S2D* -FriendlyName VDisk01 -FileSystem CSVFS_REFS -Size 20GB

New-Volume automates the volume creation process, including formatting the volume, adding it to the cluster, and making it a Cluster Shared Volume (CSV):

Get-ClusterSharedVolume
Name                           State Node
----                           ----- ----
Cluster Virtual Disk (VDisk01) Online cj-vm2

Scale-Out File Server

With the volume in place, I installed the file server role and created a Scale-Out File Server:

icm $nodes {Install-WindowsFeature FS-FileServer}
Add-ClusterScaleOutFileServerRole -Name cj-sofs

Once the Scale-Out File Server was created, I created a folder and a share:

New-Item -Path C:\ClusterStorage\Volume1\Data -ItemType Directory
New-SmbShare -Name Share1 -Path C:\ClusterStorage\Volume1\Data -FullAccess contoso\clausjor

Verifying

On the domain controller I verified access by browsing to \\cj-sofs\share1 and storing a few files:

[Screenshot: browsing \\cj-sofs\share1 from the domain controller]

 

Conclusion

I hope I provided a good overview of how to stand up a Scale-Out File Server using shared-nothing storage with Storage Spaces Direct in a set of Azure virtual machines. We are working to make the experience simpler, so you will no longer need the media type workaround. Let me know what you think.

Until next time

Claus

 

 

 

 

What’s new in Storage Replica for Windows Server 2016 Technical Preview 5


Hiya folks, Ned here again. Windows Server 2016 Technical Preview 5’s release brings you a number of new Storage Replica features, some of which we added directly from your feedback during the preview loop:

  • Asynchronous stretch clusters now supported
  • RoCE V2 RDMA networks now supported
  • Network Constraint
  • Integrated with the Cluster Health service
  • Delegation
  • Thin provisioned storage now supported
  • Fixes aplenty

As you recall, Storage Replica offers new disaster recovery and preparedness capabilities in Windows Server 2016 Technical Preview. For the first time, Windows Server offers the peace of mind of zero data loss, with the ability to synchronously protect data on different racks, floors, buildings, campuses, counties, and cities. After a disaster strikes, all data will exist elsewhere without any possibility of loss. The same applies before a disaster strikes; Storage Replica offers you the ability to switch workloads to safe locations prior to catastrophes when granted a few moments’ warning – again, with no data loss. It supports three scenarios in TP5: stretch cluster, cluster-to-cluster, and server-to-server.

[Diagrams: stretch cluster, cluster-to-cluster, and server-to-server]

Sharp-eyed readers might notice a certain similarity to Claus’ “What’s new in Storage Spaces Direct Technical Preview 5” blog post. Does my content directly rip off its style and flow, making me a thief and a cheater?

Yes. Yes, it does.

Asynchronous stretch clusters now supported

You can now configure stretch clusters over very high latency and lower bandwidth networks. This means all three SR scenarios now support both synchronous and asynchronous replication. By default, all use synchronous unless you say otherwise. And we even included a nice wizard option for those using Failover Cluster Manager:

[Screenshot: Failover Cluster Manager replication wizard with the synchronous/asynchronous option]
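If you prefer PowerShell to the wizard, the same choice is exposed as a parameter on New-SRPartnership; a minimal sketch, with hypothetical computer, replication group, and volume names:

New-SRPartnership -SourceComputerName sr-srv01 -SourceRGName rg01 -SourceVolumeName d: -SourceLogVolumeName e: -DestinationComputerName sr-srv02 -DestinationRGName rg02 -DestinationVolumeName d: -DestinationLogVolumeName e: -ReplicationMode Asynchronous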

RoCE V2 RDMA networks now supported

You are now free to use RDMA over Converged Ethernet (RoCE) V2 in your deployments, joining iWARP and InfiniBand + Metro-X as supported network platforms. Naturally, plain old TCP/IP is still fine. You will probably not be replicating great distances with RoCE due to its inherent nature, but datacenter and campus-wide is very attainable.

You won’t have to do anything in SR to make this work at RTM*; it just happens by getting RoCEv2 to work, just like the other protocols. This is the beauty of SMB 3, the transport protocol for SR – if it finds working RDMA paths, it uses them along with multichannel, for blazing low latency, high throughput perf. You don’t have storage fast enough to fill SMB Direct. Yet…

* In TP5 multichannel needs a little nudge

Network Constraint

You asked for it, you got it: you can now control which networks SR runs upon, based on network interface. We even support doing this per replication group, if you are a mad scientist who wants to replicate certain volumes on certain networks.

Usage is simple – get the replication group and network info on each server or cluster:

Get-SRPartnership

 

Get-NetIPConfiguration

Then run:

Set-SRNetworkConstraint -SourceComputerName <hi> -SourceRGName <there> -SourceNWInterfaceIndex <7> -DestinationComputerName <you> -DestinationRGName <guys> -DestinationNWInterfaceIndex <4>

Integrated with the Cluster Health service

Windows Server 2016 TP5 contains a brand new mechanism for monitoring clustered storage health, with the imaginative name of “Health Service”. What can I say; I was not involved. Anyhoo, the Health Service improves the day-to-day monitoring, operations, and maintenance experience of Storage Replica, Storage Spaces Direct, and the cluster. You get metrics, faults, actions, automation, and quarantine. This is not turned on by default in TP5 for mainline SR scenarios yet, I just want you knowing about it for the future.

Delegation

How do you feel about adding users to the built-in administrators group on your servers? Hopefully queasy. Storage Replica implements a new built-in group for users to administer replication, with all the necessary ACLs in our service, driver, and registry. By adding a user to this group and to Remote Management Users, they now have the power to replicate and remotely manage servers – but nothing else.


And just to make it easy, we gave you the Grant-SRDelegation cmdlet. I hate typing.
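A minimal sketch of the delegation call, with hypothetical server, replication group, and user names (parameter names may differ slightly in the preview):

Grant-SRDelegation -SourceComputerName sr-srv01 -SourceRGName rg01 -DestinationComputerName sr-srv02 -DestinationRGName rg02 -UserName contoso\sr-admin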


Thin provisioned storage now supported

By popular demand, we also added replication support for thin-provisioned storage. You can now use SAN, non-clustered Storage Spaces, and dynamic Hyper-V disks for thin-provisioned storage with Storage Replica. This means the initial sync of new volumes is nearly instantaneous.

Don’t believe me? Ok:

[Video: initial sync of a thin-provisioned volume]

Did you blink and miss it? Initial sync completed in less than a second.

Fixes aplenty

Finally, there have been plenty of fixes and tweaks added, and more to come. For instance, PowerShell has been improved – look for the -IgnoreFailures parameter when you are trying to perform a direction switch or replication teardown and a node is truly never coming back.

This is a continual improvement process and we very much want to hear your feedback and your bugs. Email srfeed@microsoft.com and you will come right into my team’s inbox. You can also file feedback at our UserVoice forum. As always, the details, guides, known issues, and FAQ are all at https://aka.ms/storagereplica.

Until next time,

– Ned “getting close” Pyle

Storage throughput with Storage Spaces Direct (TP5)


Hello, Claus here again. Dan and I recently received some Samsung PM1725 NVMe devices. These have 8 PCIe lanes, so we thought we would put them in a system with 100Gbps network adapters and see how fast we could make this thing go.

We used a 4-node Dell R730XD configuration attached to a 32 port Arista DCS-7060CX-32S 100Gb switch, running EOS version 4.15.3FX-7060X.1. Each node was equipped with the following hardware:

  • 2x Xeon E5-2660v3 2.6Ghz (10c20t)
  • 256GB DRAM (16x 16GB DDR4 2133 MHz DIMM)
  • 4x Samsung PM1725 3.2TB NVME SSD (PCIe 3.0 x8 AIC)
  • Dell HBA330
    • 4x Intel S3710 800GB SATA SSD
    • 12x Seagate 4TB Enterprise Capacity 3.5” SATA HDD
  • 2x Mellanox ConnectX-4 100Gb (Dual Port 100Gb PCIe 3.0 x16)
    • Mellanox FW v. 12.14.2036
    • Mellanox ConnectX-4 Driver v. 1.35.14894
    • Device PSID MT_2150110033
    • Single port connected / adapter

Using VMFleet we stood up 20 virtual machines per node, for a total of 80 virtual machines. Each virtual machine was configured with 1 vCPU. We then used VMFleet to run DISKSPD in each of the 80 virtual machines with 1 thread, doing 512KiB sequential reads with 4 outstanding IOs.
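For reference, the per-VM DISKSPD invocation would look roughly like this sketch (flags are approximate and the target path is hypothetical; -b512K is the block size, -t1 one thread, -o4 outstanding IOs, -w0 pure read, -Sh disables caching):

diskspd.exe -b512K -t1 -o4 -w0 -d60 -Sh C:\run\testfile.dat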

[Screenshot: aggregate throughput across the 80 virtual machines]

As you can see from the above screenshot, we were able to hit over 60GB/s in aggregate throughput into the virtual machines. Compare this to the total size of English Wikipedia article text (11.5GiB compressed): that is like reading it about 5 times per second.

The throughput was on average ~750MB/s for each virtual machine, which is about a CD per second.

You can find a recorded video of the performance run here:

We often speak about performance in IOPS, which is important to many workloads. For other workloads like big data, data warehouse and similar, throughput can be an important metric. We hope that we demonstrated the technical capabilities of Storage Spaces Direct when using high throughput hardware like the Mellanox 100Gbps NICs, the Samsung NVMe devices with 8 PCIe lanes and the Arista 100Gbps network switch.

Let us know what you think.

Dan & Claus

Storage Replica quick poll (UPDATE: POLL IS CLOSED)


Hi folks, Ned here again. I am running a Storage Replica topology poll on our UserVoice site to help decide some work priority. The two topics you are voting to prioritize are Storage Replica one-to-many replication & Storage Replica transitive replication. Please let us know which is more important to you.

Poll is closed, thanks for voting, everyone (the UserVoice SmartVote polls aggressively shut themselves down when they feel there’s a runaway :) ). Plenty more polls to come.

Until next time,

– Ned “also wanted to see if this new poll widget works” Pyle


Quick Survey: Your plans for WS 2016 block replication and Azure

Heya folks, Ned here again with a quick (only 4 questions) survey on how you plan to use block replication, Storage Replica, and Azure in the coming months after RTM of Windows Server 2016. Any feedback is highly appreciated.

https://microsoft.qualtrics.com/SE/?SID=SV_0U6b2tbhVmaImnX

Thanks!

Quick Survey: Windows File Server Usage and Pain Points


Hi all,

We need your input to help us prioritize our future investments for File Server scenarios. We’ve created a short 5-question survey to better understand File Server usage and pain points. Any feedback is appreciated.

https://www.surveymonkey.com/r/C3MFT6Q

Thanks,

Jeff

Azure Active Directory Enterprise State Roaming for Windows 10 is now generally available

$
0
0

Hi folks, Ned here again with a quick public service announcement from colleagues:

“Azure Active Directory Enterprise State Roaming for Windows 10 is now generally available. With Enterprise State Roaming, customers benefit from enhanced security and control of the OS and app state data that are roamed between enterprise-owned devices.

All synced data is encrypted before leaving the devices and enterprise users can use Azure AD identities to roam their settings and modern app data using Azure cloud for storage.  This enables enterprises to maintain control and have better visibility over their data.  With Enterprise State Roaming, users no longer have to use consumer Microsoft accounts to roam settings between work-owned devices, and their data is no longer stored in the personal OneDrive cloud.”

To learn more, visit the Enterprise Mobility & Security Blog.

You should check it out, especially since this is a feature you asked us to make. Well, maybe not you personally.

Until next time,

– Ned “cross post” Pyle

Storage IOPS update with Storage Spaces Direct


Hello, Claus here again. I played one of my best rounds of golf in a while at the beautiful TPC Snoqualmie Ridge yesterday. While golf is about how low can you go, I want to give an update on how high can you go with Storage Spaces Direct.

Once again, Dan and I used a 16-node rig attached to a 32 port Cisco 3132 switch. Each node was equipped with the following hardware:

  • 2x Xeon E5-2699v4 2.3Ghz (22c44t)
  • 128GB DRAM
  • 4x 800GB Intel P3700 NVMe (PCIe 3.0 x4)
  • 1x LSI 9300 8i
  • 20x 1.2TB Intel S3610 SATA SSD
  • 1x Chelsio 40GbE iWARP T580-CR (Dual Port 40Gb PCIe 3.0 x8)

Using VMFleet we stood up 44 virtual machines per node, for a total of 704 virtual machines. Each virtual machine was configured with 1 vCPU. We then used VMFleet to run DISKSPD in each of the virtual machines with 1 thread, doing 4KiB random reads with 32 outstanding IOs.
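The per-VM DISKSPD invocation would look roughly like this sketch (flags are approximate and the target path is hypothetical; -r makes the 4KiB reads random, with 32 outstanding IOs on one thread):

diskspd.exe -b4K -r -t1 -o32 -w0 -d60 -Sh C:\run\testfile.dat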

[Screenshot: aggregate IOPS across the 704 virtual machines]

As you can see from the above screenshot, we were able to hit ~5M IOPS in aggregate into the virtual machines. This is ~7,000 IOPS per virtual machine!

We are not done yet…. If you are attending Microsoft Ignite, please stop by my session “BRK3088 Discover Storage Spaces Direct, the ultimate software-defined storage for Hyper-V” and say hello.

Let us know what you think.

Dan & Claus

Offline Files (CSC) to Work Folders Migration Guide

$
0
0

Hi all,

I’m Jeff Patterson, Program Manager for Work Folders and Offline Files.

Jane wrote a blog last year which covers how to use Folder Redirection with Work Folders. The blog is great for new environments. If Folder Redirection and Offline Files are currently used, there are some additional steps that need to be performed which are covered in this migration guide.

Overview

This blog covers migrating from Offline Files (a.k.a. Client Side Caching) to Work Folders. This guidance is specific to environments that are using Folder Redirection and Offline Files with the user data stored on a Windows Server 2012 R2 file server.

When using Folder Redirection and Offline Files, the user data in special folders (e.g., Documents, Favorites, etc.) is stored on a file server. The user data is cached locally on the client machine via Offline Files so it’s accessible when the user is working offline.

[Diagram: Folder Redirection policy with Offline Files]

After migrating to Work Folders, the user data in special folders (e.g., Documents, Favorites, etc.) is stored locally on the client machine. The Work Folders client synchronizes the user data to the file server.

[Diagram: Folder Redirection policy with Work Folders]

After migration, the user experience will remain unchanged and companies will benefit from the advantages of Work Folders.

Why Migrate?

Reasons to migrate from Offline Files to Work Folders:

  • Modern file sync solution that was introduced in Windows Server 2012 R2
  • Supports security features to protect user data such as selective wipe, Windows Information Protection (WIP) and Rights Management Services (RMS)
  • Familiar file access experience for users, same as OneDrive and OneDrive for Business
  • User data can be accessed outside of the corporate network (VPN or DirectAccess is not required)
  • User data can be accessed using non-Windows devices: Android, iPhone and iPad
  • Future investments (new features) are focused on Work Folders

For the complete list of benefits, please reference the Work Folders Overview.

Supported Migration Scenarios

This migration guide is intended for the following configurations:

  • User data is hosted on a file server that is running Windows Server 2012 R2 or later
  • Windows clients are Windows 7, Windows 8.1 and Windows 10
  • Offline Files is used with Folder Redirection

Unsupported Migration Scenarios

The following configurations or scenarios are not currently supported:

  • User data is stored on a network attached storage (NAS) device – Work Folders requires the user data to be stored on the file server via direct attached storage (DAS), storage area network (SAN) or iSCSI.
  • File server is running Windows Server 2012 or Windows Server 2008 R2 – The Work Folders server component is only supported on Windows Server 2012 R2 or later.
  • Offline Files is used for multiple file shares (e.g., team shares) – Work Folders supports one sync partnership; it is intended for user data only and does not support team shares or collaboration scenarios.

If the requirements listed above are not met, the recommendation is to continue to use Offline Files or evaluate using OneDrive for Business.

Overview of the Offline Files to Work Folders migration process

High-level overview of the Offline Files to Work Folders migration process:

  1. On the file server, install the Work Folders feature and configure the Work Folders Sync Share to use the existing file share used for Folder Redirection.
  2. Deploy Work Folders on the Windows clients via group policy.
  3. Update the existing Folder Redirection group policy to redirect the special folders (e.g., Documents, Desktop, etc.) to the local Work Folders directory on the client machine.
  4. Optional: Disable Offline Files on the Windows clients.

Planning the migration

The following considerations should be reviewed prior to starting the Offline Files to Work Folders migration:

  • Work Folders requirements and design considerations: https://technet.microsoft.com/en-us/library/dn265974(v=ws.11).aspx
  • Client disk space requirements: During the migration process, existing client machines will need additional disk space to temporarily store the user data using both Offline Files and Work Folders. Once the migration is complete, the user data stored in Offline Files will be deleted.
  • Network traffic: Migrating from Offline Files to Work Folders requires redirecting the special folders (e.g., Documents, Favorites, etc.) to the client machine. The user data that is currently stored on the file server will be synced to the Windows client using Work Folders. The migration should be done in phases to reduce network traffic. Please reference the performance considerations and network throttling blogs for additional guidance.
  • RDS and VDI: If users are accessing user data in Remote Desktop Services (RDS) or Virtual Desktop Infrastructure (VDI) environments, create a separate Folder Redirection group policy for RDS and VDI. Work Folders is not supported for RDS and is not recommended for VDI environments. The recommendation is to continue to redirect the special folders to the file server since the users should have a reliable connection.

Example – Create two Folder Redirection group policies:

Desktops and Laptops Folder Redirection group policy – The root path in the Folder Redirection policy will point to the local Work Folders directory: %systemdrive%\users\%username%\Work Folders\Documents

RDS and VDI Folder Redirection group policy – The root path in the Folder Redirection policy will point to the file server: \\fileserver1\userdata$\%username%\Documents

Note: The group policy loopback processing (replace mode) setting should be enabled on the RDS and VDI group policy
Note: Offline Files (CSC) should be disabled for RDS and VDI environments since the user should have a reliable connection to the file server

  • Existing Windows clients: If you do not want to migrate existing clients to Work Folders (only new clients), you can create separate Folder Redirection group policies as covered in the “RDS and VDI” section. The legacy clients will continue to access the user data on the file server. The new clients will access the user data locally and sync the data to the file server.

Migrating from Offline Files to Work Folders

To migrate from Offline Files to Work Folders, follow the steps below.

Note: If the root path in the Folder Redirection policy is \\fileserver1\userdata$, the steps below should be performed on the file server named FileServer1.

  1. On the Windows Server 2012 R2 file server, install and configure Work Folders by following steps 1-10 in the TechNet documentation.

Note: Several of the steps (6, 8, 9, 10) are optional. If you want to allow users to sync files over the internet and you plan to have multiple Work Folders servers, steps 1-10 should be completed.

Important details to review before following the TechNet documentation:

Obtain SSL certificates (Step #1 in the TechNet documentation)

The Work Folders Certificate Management blog provides additional info on using certificates with Work Folders.

Create DNS records (Step #2 in the TechNet documentation)

When Work Folders clients use auto discovery, the URL used to discover the Work Folders server is https://workfolders.domain.com. If you plan to use auto discovery, create a CNAME record in DNS named workfolders which resolves to the FQDN of the Work Folders server.
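If the zone is hosted on a Windows DNS server, a sketch of creating that record with PowerShell (zone and server names are placeholders for your environment):

Add-DnsServerResourceRecordCName -ZoneName "domain.com" -Name "workfolders" -HostNameAlias "wf-server.domain.com"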

[Screenshot: workfolders CNAME record in DNS]

Install Work Folders on file servers (Step #3 in the TechNet documentation)

If the existing file server is clustered, the Work Folders feature must be installed on each cluster node. For more details, please refer to the following blog.

Create sync shares for user data (Step #7 in the TechNet documentation)

When creating the sync share, select the existing file share that is used for the user data.

Example: If the special folders path in the Folder Redirection policy is \\fileserver1\userdata$, the userdata$ file share should be selected as the path.

[Screenshot: selecting the existing user data file share as the sync share path]

Note: All user data stored on the file share will be synced to the client machine. If this path is used to store user data in addition to the redirected special folders (e.g., home drive), that user data will also be synced to the client machine.

When specifying the user folder structure, select “User alias” to maintain compatibility with the Folder Redirection folder structure.

[Screenshot: user folder structure set to "User alias"]

If you select the “Automatically lock screen, and require a password” security policy, the user must be an administrator on the local machine or the policy will fail to apply. To exclude this setting from applying to domain-joined machines, use the Set-SyncShare cmdlet with the -PasswordAutolockExcludeDomain parameter (see the TechNet content for more info).

[Screenshot: "Automatically lock screen, and require a password" security policy]
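For example, a sketch of excluding a domain from the auto-lock policy (sync share and domain names are placeholders):

Set-SyncShare -Name "UserData" -PasswordAutolockExcludeDomain "contoso.com"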

  2. Deploy the Work Folders client using group policy

To deploy Work Folders via group policy, follow Step #11 in the TechNet documentation.

For the “Work Folders URL” setting in the group policy, the recommendation is to use the discovery URL (e.g., https://workfolders.domain.com) so you don’t have to update the group policy if the Work Folders server changes.

[Screenshot: "Specify Work Folders settings" group policy with the discovery URL]

Note: Using the discovery URL requires the “workfolders” CNAME record in DNS that is covered in the “Create DNS records” section.

  3. Update the existing Folder Redirection group policy to redirect the special folders to the local Work Folders directory on the client machine

Note: All special folders (e.g., Documents, Desktop, Favorites, etc.) can be redirected to the local Work Folders directory except for AppData (Roaming). Redirecting this folder can lead to conflicts and files that fail to sync due to open handles. The data stored in the AppData\Roaming folder should be roamed using Enterprise State Roaming (ESR), UE-V or Roaming User Profiles.

To update the Folder Redirection policy, perform the following steps:

  1. Open the existing Folder Redirection group policy
  2. Right-click on a special folder (e.g., Documents) that’s currently redirected to a file share and choose properties

    [Screenshot: Folder Redirection policy with Offline Files]

  3. Change the Target folder location setting to: Redirect to the following location
  4. Change the Root Path to: %systemdrive%\users\%username%\Work Folders\Documents

    [Screenshot: Folder Redirection policy with Work Folders]

  5. Click the Settings tab and un-check the “Move the contents of Documents to the new location” setting.

    [Screenshot: "Move the contents of Documents to the new location" setting]

    Note: The “Move the contents of Documents to the new location” setting should be un-checked because Work Folders will sync the user data to the client machine. Leaving this setting checked for existing clients will cause additional network traffic and possible file conflicts.
  6. Click OK to save the settings and click Yes for the Warning messages.
  7. Repeat steps 1-6 for each special folder that needs to be redirected
  4. Optional: Disable Offline Files on the Windows clients

After migrating to Work Folders, you can prevent clients from using Offline Files by setting the “Allow or Disallow use of the Offline Files feature” group policy setting to Disabled.

[Screenshot: "Allow or Disallow use of the Offline Files feature" policy set to Disabled]

Note: Offline Files should remain enabled if using BranchCache in your environment.

Validate the migration

Verify the Work Folders clients are syncing properly with the Work Folders server
  • To verify the Work Folders clients are syncing properly with the Work Folders server, review the Operational and Reporting event logs on the Work Folders server. The logs are located under Microsoft-Windows-SyncShare in Event Viewer.
  • On the Work Folders client, you can check the status by opening the Work Folders applet in the control panel:

Example: Healthy status

If the sync status is orange or red, review the error logged. If additional information is needed, review the Work Folders operational log which is located under Microsoft-Windows-WorkFolders in Event Viewer.

The “Troubleshooting Work Folders on Windows client” blog covers common issues.

Verify the special folders are redirected to the correct location
  1. Open File Explorer on a Windows client and access the properties of a special folder that’s redirected (e.g., Documents).
  2. Verify the folder location is under %systemdrive%\users\%username%\Work Folders

[Screenshot: Documents Properties showing the local Work Folders location]

If the special folder is still redirected to the file share, run “gpupdate /force” from a command prompt to update the policy on the client machine. The user will need to log off and log on for the changes to be applied.

Additional Information

Special folders that can be redirected via Folder Redirection policy

The following special folders can be redirected to the local Work Folders directory:

  • Contacts
  • Desktop
  • Documents
  • Downloads
  • Favorites
  • Links
  • Music
  • Pictures
  • Saved Games
  • Searches
  • Start Menu
  • Videos

Considerations for the Root Path in the Folder Redirection policy

The Folder Redirection root path in the migration guide (Step# 3) assumes multiple special folders are redirected. The root path can vary for each special folder as long as the folders are redirected under the Work Folders directory.

Example #1: If Documents is the only folder that is redirected, the root path could be the Work Folders root directory: %systemdrive%\users\%username%\Work Folders

Example #2: If you do not want the special folders in the root of the Work Folders directory, use a sub-directory in the path: %systemdrive%\users\%username%\Work Folders\Profile\Favorites

Known issues

The following issue has been identified when redirecting special folders to the Work Folders directory:

Folder: Favorites
Issue: Unable to open Favorites in Internet Explorer when using Windows Information Protection
Cause: Internet Explorer does not support encrypted favorite files
Solution: Use Edge or a 3rd party browser

Congratulations! You’ve now completed the Offline Files to Work Folders migration!

I would appreciate any feedback (add a comment) on the migration process and whether any steps need to be clarified.

Thanks,

Jeff

Deep Dive: Volumes in Storage Spaces Direct


[Photo: Cosmos Darwin – this kid has been at Microsoft for one year!]

Hi there! I’m Cosmos. I joined the High Availability & Storage PM team one year ago this week. I thought it was about time I should post my first blog. You can follow me on Twitter @cosmosdarwin.

Introduction

In Storage Spaces Direct, volumes derive their fault tolerance from mirroring, parity encoding, or both. We’ve taken to calling this last option mixed resiliency, or multi-resiliency, or sometimes “hybrid” resiliency, and it’s very exciting.

Briefly…

  • Mirroring is similar to distributed, software-defined RAID-1. It provides the fastest possible reads/writes, but isn’t very capacity efficient, because you’re effectively keeping full extra copies of everything. It’s best for actively written data, so-called “hot” data.
  • Parity is similar to distributed, software-defined RAID-5 or RAID-6. Our implementation includes several breakthrough advancements developed by Microsoft Research. Parity can achieve far greater capacity efficiency, but at the expense of computational work for each write. It’s best for infrequently written, so-called “cold” data.
  • Beginning in Windows Server 2016, one volume can be part mirror, part parity, and ReFS will automagically move data back and forth between these “tiers” in real-time depending on what’s hot and what’s not. This mixed resiliency gives the best of both – fast, cheap writes of hot data, and better efficiency for cooler data. As Claus says, what’s not to like!

[Chart: storage efficiency by resiliency type]

If that was too much, too fast, stay tuned for an upcoming blog post from Claus on this very subject.

Let’s See It

So, that’s the concept. How can you see all this in Windows? The Storage Management API is the answer, but unfortunately it’s not quite as straightforward as you might think. This blog aims to untangle the many objects and their properties, so we can get one comprehensive view, like this:


Volumes, their capacities, how they’re filling up, resiliency, footprints, efficiency, all in one easy view.

The first thing to understand is that in Storage Spaces Direct, every “volume” is really a little hamburger-like stack of objects. The Volume sits on a Partition; the Partition sits on a Disk; that Disk is a Virtual Disk, also commonly called a Storage Space.


Classes and their relationships in the Windows Storage Management API.

Let’s grab properties from several of these, to assemble the picture we want.

We can get the volumes in our system by launching PowerShell as Administrator and running Get-Volume. The key properties are the FileSystemLabel, which is how the volume shows up mounted in Windows (literally – the name of the folder), the FileSystemType, which shows us whether the volume is ReFS or NTFS, and the Size.

Get-Volume | Select FileSystemLabel, FileSystemType, Size

Given any volume, we can follow associations down the hamburger. For example, try this:

Get-Volume -FileSystemLabel <Choose One> | Get-Partition

Neat! Now, the Partition isn’t very interesting, and frankly, neither is the Disk, but following these associations is the safest way to get to the underlying VirtualDisk (the Storage Space!), which has many key properties we want.

$Volume = Get-Volume -FileSystemLabel <Choose One>
$Partition = $Volume | Get-Partition
$Disk = $Partition | Get-Disk
$VirtualDisk = $Disk | Get-VirtualDisk

Voila! (We speak Français in Canada.) Now we have the VirtualDisk underneath our chosen Volume, saved as $VirtualDisk. You could shortcut this whole process and just run Get-VirtualDisk, but theoretically you can’t be sure which one is under which Volume.

We now get to deal with two cases.

Case One: No Tiers

If the VirtualDisk is not tiered, which is to say it uses mirror or parity, but not both, and it was created without referencing any StorageTier (more on these later), then it has several key properties.

  • First, its ResiliencySettingName will be either Mirror or Parity.
  • Next, its PhysicalDiskRedundancy will either be 1 or 2. This lets us distinguish between what we call “two-way mirror” versus “three-way mirror”, or “single parity” versus “dual parity” (erasure coding).
  • Finally, its FootprintOnPool tells us how much physical capacity is occupied by this Space, once the resiliency is accounted for. The VirtualDisk also has its own Size property, but this will be identical to that of the Volume, plus or minus some modest metadata.

Check it out!

$VirtualDisk | Select FriendlyName, ResiliencySettingName, PhysicalDiskRedundancy, Size, FootprintOnPool

If we divide the Size by the FootprintOnPool, we obtain the storage efficiency. For example, if some Volume is 100 GB and uses three-way mirror, its VirtualDisk FootprintOnPool should be about 300 GB, for 33.3% efficiency.
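
For example, here is a minimal sketch of that calculation against the objects we already have (rounding and formatting are up to you):

# Storage efficiency of a non-tiered VirtualDisk, as a percentage
$Efficiency = [Math]::Round(($VirtualDisk.Size / $VirtualDisk.FootprintOnPool) * 100, 1)
"$Efficiency%"   # roughly 33.3 for a three-way mirror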

Case Two: Tiers

Ok, that wasn’t so bad. Now, what if the VirtualDisk is tiered? Actually, what is tiering?

For our purposes, tiering is when multiple sets of these properties coexist in one VirtualDisk, because it is effectively part mirror, part parity. You can tell this is happening if its ResiliencySettingName and PhysicalDiskRedundancy properties are completely blank. (Helpful! Thanks!)

The secret is: an extra layer in our stack – the StorageTier objects.

Storage Management API Stack

Sometimes, volumes stash some properties on their StorageTier(s).

Let’s grab these, because it’s their properties we need. As before, we can follow associations.

$Tiers = $VirtualDisk | Get-StorageTier

Typically, we expect to get two, one called something like “Performance” (mirror), the other something like “Capacity” (parity). Unlike in 2012 or 2012R2, these tiers are specific to one VirtualDisk. Each has all the same key properties we got before from the VirtualDisk itself – namely ResiliencySettingName, PhysicalDiskRedundancy, Size, and FootprintOnPool.

Check it out!

$Tiers | Select FriendlyName, ResiliencySettingName, PhysicalDiskRedundancy, Size, FootprintOnPool

For each tier, if we divide the Size by the FootprintOnPool, we can obtain its storage efficiency.

Moreover, if we divide the sum of the sizes by the sum of the footprints, we obtain the overall efficiency of the mixed resiliency or “multi-resilient” Volume (or… VirtualDisk…? Whatever!).
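
A minimal sketch of that calculation, assuming the $Tiers variable from above:

# Overall efficiency of a multi-resilient volume: sum of tier sizes over sum of footprints
$TotalSize      = ($Tiers | Measure-Object -Property Size -Sum).Sum
$TotalFootprint = ($Tiers | Measure-Object -Property FootprintOnPool -Sum).Sum
[Math]::Round(($TotalSize / $TotalFootprint) * 100, 1)   # percent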

U Can Haz Script

This script puts it all together, along with some formatting/prettifying magic, to produce this view. You can easily see your volumes, their capacity, how they’re filling up, how much physical capacity they occupy (and why), and the implied storage efficiency, in one easy table.

Let me know what you think!

Screenshot

Volumes, their capacities, how they’re filling up, resiliency, footprints, efficiency, all in one easy view.

Notes:

  1. This screenshot was taken on a 4-node system. At 16 nodes, Dual Parity can reach up to 80.0% efficiency.
  2. Because it queries so many objects and associations in SM-API, the script can take up to several minutes to run.
  3. You can download the script here, to spare yourself the 200-line copy/paste: http://cosmosdarwin.com/Show-PrettyVolume.ps1
# Written by Cosmos Darwin, PM
# Copyright (C) 2016 Microsoft Corporation
# MIT License
# 8/2016

Function ConvertTo-PrettyCapacity {

    Param (
        [Parameter(
            Mandatory=$True,
            ValueFromPipeline=$True
            )
        ]
    [Int64]$Bytes,
    [Int64]$RoundTo = 0 # Default
    )

    If ($Bytes -Gt 0) {
        $Base = 1024 # To Match PowerShell
        $Labels = ("bytes", "KB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB") # Blame Snover
        $Order = [Math]::Floor( [Math]::Log($Bytes, $Base) )
        $Rounded = [Math]::Round($Bytes/( [Math]::Pow($Base, $Order) ), $RoundTo)
        [String]($Rounded) + $Labels[$Order]
    }
    Else {
        0
    }
    Return
}


Function ConvertTo-PrettyPercentage {

    Param (
        [Parameter(Mandatory=$True)]
            [Int64]$Numerator,
        [Parameter(Mandatory=$True)]
            [Int64]$Denominator,
        [Int64]$RoundTo = 0 # Default
    )

    If ($Denominator -Ne 0) { # Cannot Divide by Zero
        $Fraction = $Numerator/$Denominator
        $Percentage = $Fraction * 100
        $Rounded = [Math]::Round($Percentage, $RoundTo)
        [String]($Rounded) + "%"
    }
    Else {
        0
    }
    Return
}

### SCRIPT... ###

$Output = @()

# Query Cluster Shared Volumes
$Volumes = Get-StorageSubSystem Cluster* | Get-Volume | ? FileSystem -Eq "CSVFS"

ForEach ($Volume in $Volumes) {

    # Get MSFT_Volume Properties
    $Label = $Volume.FileSystemLabel
    $Capacity = $Volume.Size | ConvertTo-PrettyCapacity
    $Used = ConvertTo-PrettyPercentage ($Volume.Size - $Volume.SizeRemaining) $Volume.Size

    If ($Volume.FileSystemType -Like "*ReFS") {
        $Filesystem = "ReFS"
    }
    ElseIf ($Volume.FileSystemType -Like "*NTFS") {
        $Filesystem = "NTFS"
    }

    # Follow Associations
    $Partition   = $Volume    | Get-Partition
    $Disk        = $Partition | Get-Disk
    $VirtualDisk = $Disk      | Get-VirtualDisk

    # Get MSFT_VirtualDisk Properties
    $Footprint = $VirtualDisk.FootprintOnPool | ConvertTo-PrettyCapacity
    $Efficiency = ConvertTo-PrettyPercentage $VirtualDisk.Size $VirtualDisk.FootprintOnPool

    # Follow Associations
    $Tiers = $VirtualDisk | Get-StorageTier

    # Get MSFT_VirtualDisk or MSFT_StorageTier Properties...

    If ($Tiers.Length -Lt 2) {

        If ($Tiers.Length -Eq 0) {
            $ReadFrom = $VirtualDisk # No Tiers
        }
        Else {
            $ReadFrom = $Tiers[0] # First/Only Tier
        }

        If ($ReadFrom.ResiliencySettingName -Eq "Mirror") {
            # Mirror
            If ($ReadFrom.PhysicalDiskRedundancy -Eq 1) { $Resiliency = "2-Way Mirror" }
            If ($ReadFrom.PhysicalDiskRedundancy -Eq 2) { $Resiliency = "3-Way Mirror" }
            $SizeMirror = $ReadFrom.Size | ConvertTo-PrettyCapacity
            $SizeParity = [string](0)
        }
        ElseIf ($ReadFrom.ResiliencySettingName -Eq "Parity") {
            # Parity
            If ($ReadFrom.PhysicalDiskRedundancy -Eq 1) { $Resiliency = "Single Parity" }
            If ($ReadFrom.PhysicalDiskRedundancy -Eq 2) { $Resiliency = "Dual Parity" }
            $SizeParity = $ReadFrom.Size | ConvertTo-PrettyCapacity
            $SizeMirror = [string](0)
        }
        Else {
            Write-Host -ForegroundColor Red "What have you done?!"
        }
    }

    ElseIf ($Tiers.Length -Eq 2) { # Two Tiers

        # Mixed / Multi- / Hybrid
        $Resiliency = "Mix"

        ForEach ($Tier in $Tiers) {
            If ($Tier.ResiliencySettingName -Eq "Mirror") {
                # Mirror Tier
                $SizeMirror = $Tier.Size | ConvertTo-PrettyCapacity
                If ($Tier.PhysicalDiskRedundancy -Eq 1) { $Resiliency += " (2-Way" }
                If ($Tier.PhysicalDiskRedundancy -Eq 2) { $Resiliency += " (3-Way" }
            }
        }
        ForEach ($Tier in $Tiers) {
            If ($Tier.ResiliencySettingName -Eq "Parity") {
                # Parity Tier
                $SizeParity = $Tier.Size | ConvertTo-PrettyCapacity
                If ($Tier.PhysicalDiskRedundancy -Eq 1) { $Resiliency += " + Single)" }
                If ($Tier.PhysicalDiskRedundancy -Eq 2) { $Resiliency += " + Dual)" }
            }
        }
    }

    Else {
        Write-Host -ForegroundColor Red "What have you done?!"
    }

    # Pack

    $Output += [PSCustomObject]@{
        "Volume" = $Label
        "Filesystem" = $Filesystem
        "Capacity" = $Capacity
        "Used" = $Used
        "Resiliency" = $Resiliency
        "Size (Mirror)" = $SizeMirror
        "Size (Parity)" = $SizeParity
        "Footprint" = $Footprint
        "Efficiency" = $Efficiency
    }
}

$Output | Sort Efficiency, Volume | FT

Windows Server 2016 Dedup Documentation Now Live!


Hi all!

We just released the Data Deduplication documentation for Windows Server 2016 over on TechNet! The new documentation includes a more detailed explanation of how Dedup works, crisper guidance on how to evaluate workloads for deduplication, and information on the available Dedup settings, with context for why you would want to change them.

Check it out:

Screenshot of the Data Deduplication documentation on TechNet

As always, questions, concerns, or feedback are very welcome! Please feel free to comment at the bottom of this post, or reach out to us directly at dedupfeedback@microsoft.com.


Work Folders and Offline Files support for Windows Information Protection


Hi all,

I’m Jeff Patterson, Program Manager for Work Folders and Offline Files.

Windows 10, version 1607 will be available to Enterprise customers soon so I wanted to cover support for Windows Information Protection (a.k.a. Enterprise Data Protection) when using Work Folders or Offline Files.

Windows Information Protection Overview

Windows Information Protection (WIP) is a new security feature introduced in Windows 10, version 1607 to protect against data leaks.

Benefits of WIP

  • Separation between personal and corporate data, without requiring employees to switch environments or apps
  • Additional data protection for existing line-of-business apps without a need to update the apps
  • Ability to wipe corporate data from devices while leaving personal data alone
  • Use of audit reports for tracking issues and remedial actions
  • Integration with your existing management system (Microsoft Intune, System Center Configuration Manager 2016, or your current mobile device management (MDM) system) to configure, deploy, and manage WIP for your company

For additional information on Windows Information Protection, please reference our TechNet documentation.

Work Folders support for Windows Information Protection

Work Folders was updated in Windows 10 to support Windows Information Protection.

If a WIP policy is applied to a Windows 10 device, all user data stored in the Work Folders directory will be encrypted using the same key and Enterprise ID that is used by Windows Information Protection.

Note: The user data is only encrypted on the Windows 10 device. When the user data is synced to the Work Folders server, it’s not encrypted on the server. To encrypt the user data on the Work Folders server, you need to use RMS encryption.

Offline Files and Windows Information Protection

Offline Files (a.k.a. Client Side Caching) is an older file sync solution and was not updated to support Windows Information Protection. This means any user data stored on a network share that’s cached locally on the Windows 10 device using Offline Files is not protected by Windows Information Protection.

If you’re currently using Offline Files, our recommendation is to migrate to a modern file sync solution such as Work Folders or OneDrive for Business which supports Windows Information Protection.

If you decide to use Offline Files with Windows Information Protection, you need to be aware of the following issue if you try to open cached files while working offline:

Can’t open files offline when you use Offline Files and Windows Information Protection
https://support.microsoft.com/en-us/kb/3187045

Conclusion

Offline Files does not support Windows Information Protection; you should use a modern file sync solution such as Work Folders or OneDrive for Business that supports WIP.

Volume resiliency and efficiency in Storage Spaces Direct


Hello, Claus here again.

One of the most important aspects of creating a volume is choosing the resiliency settings. The purpose of resiliency is to keep data available in case of failures, such as a drive failure or a server failure. It also enables data availability when performing maintenance, such as server hardware replacement or operating system updates. Storage Spaces Direct supports two resiliency types: mirror and parity.

Mirror resiliency

Mirror resiliency is relatively simple. Storage Spaces Direct generates multiple block copies of the same data; by default, it generates 3 copies. Each copy is stored on a drive in a different server, providing resiliency to both drive and server failures. The diagram shows 3 data copies (A, A’ and A’’) laid out across a cluster with 4 servers.

Volume1

Figure 1 3-copy mirror across 4 servers

Assume there is a failure on the drive in server 2 where A’ is written. A’ is regenerated by reading A or A’’ and writing a new copy of A’ to another drive in server 2 or to any drive in server 3. A’ cannot be written to drives in server 1 or server 4, since it is not allowed to have two copies of the same data in the same server.

If the admin puts a server in maintenance mode, the corresponding drives also enter maintenance mode. While maintenance mode suspends IO to the drives, the administrator can still perform drive maintenance tasks, such as updating drive firmware. Data copies stored on the server in maintenance mode are not updated while IO is suspended. Once the administrator takes the server out of maintenance mode, the data copies on that server are updated using the data copies from the other servers. Storage Spaces Direct tracks which data copies changed while the server was in maintenance mode, to minimize data resynchronization.
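
If you want to see what that looks like in PowerShell, here is a minimal sketch of draining a server and pausing its drives; the server name is hypothetical, and Enable-StorageMaintenanceMode requires Windows Server 2016:

# Drain roles off the server and pause it in the cluster
Suspend-ClusterNode -Name "Server2" -Drain

# Put that server's drives into storage maintenance mode (suspends IO to them)
Get-StorageFaultDomain -Type StorageScaleUnit |
    Where-Object FriendlyName -Eq "Server2" |
    Enable-StorageMaintenanceMode

# ...perform maintenance, then reverse the steps
Get-StorageFaultDomain -Type StorageScaleUnit |
    Where-Object FriendlyName -Eq "Server2" |
    Disable-StorageMaintenanceMode
Resume-ClusterNode -Name "Server2" -Failback Immediate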

Mirror resiliency is relatively simple, which means it has great performance and does not have a lot of CPU overhead. The downside to mirror resiliency is that it is relatively inefficient, with 33.3% storage efficiency when storing 3 full copies of all data.
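
For reference, creating a mirrored volume is a single cmdlet. A minimal sketch, with hypothetical pool and volume names (3 copies is the default when there are enough servers):

New-Volume -StoragePoolFriendlyName "S2D on Cluster1" -FriendlyName "Volume1" `
    -FileSystem CSVFS_ReFS -ResiliencySettingName Mirror -Size 1TB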

Parity resiliency

Parity resiliency is much more storage efficient than mirror resiliency. It uses parity symbols computed across a larger set of data symbols to drive up storage efficiency. Each symbol is stored on a drive in a different server, providing resiliency to both drive and server failures. Storage Spaces Direct requires at least 4 servers to enable parity resiliency. The diagram shows two data symbols (X1 and X2) and two parity symbols (P1 and P2) laid out across a cluster with 4 servers.

Volume2

Figure 2 Parity resiliency across 4 servers

Assume there is a failure on the drive in server 2 where X2 is written. X2 is regenerated by reading the other symbols (X1, P1 and P2), recalculating the value of X2, and writing X2 to another drive in server 2. X2 cannot be written to drives in other servers, since it is not allowed to have two symbols from the same symbol set in the same server.

Parity resiliency works similar to mirror resiliency when a server is in maintenance mode.

Parity resiliency has better storage efficiency than mirror resiliency. With 4 servers the storage efficiency is 50%, and it can be as high as 80% with 16 servers. The downside of parity resiliency is twofold:

  • Performing data reconstruction involves all of the surviving symbols. All of them are read (extra storage IO), the lost symbols are recalculated (expensive CPU cycles), and the result is written back to disk.
  • Overwriting existing data involves all symbols. All data symbols are read, data is updated, parity is recalculated, and all symbols are written. This is also known as Read-Modify-Write and incurs significant storage IO and CPU cycles.
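
For comparison, a parity volume is created the same way as a mirrored one, assuming at least 4 servers; again a sketch with hypothetical pool and volume names:

New-Volume -StoragePoolFriendlyName "S2D on Cluster1" -FriendlyName "ColdData" `
    -FileSystem CSVFS_ReFS -ResiliencySettingName Parity -Size 4TB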

Local Reconstruction Codes

Storage Spaces Direct uses Reed-Solomon error correction (aka erasure coding) for parity calculation in smaller deployments, for the best possible efficiency and resiliency to two simultaneous failures. A cluster with four servers has 50% storage efficiency and resiliency to two failures. With larger clusters, storage efficiency increases because there can be more data symbols without increasing the number of parity symbols. On the flip side, data reconstruction becomes increasingly inefficient as the total number of symbols (data symbols + parity symbols) grows, since all surviving symbols have to be read in order to calculate and regenerate the missing symbol(s). To address this, Microsoft Research invented Local Reconstruction Codes, which is used in Microsoft Azure and Storage Spaces Direct.

Local Reconstruction Codes (LRC) optimizes data reconstruction for the most common failure scenario, a single drive failure. It does so by grouping the data symbols and calculating a single (local) parity symbol across each group using simple XOR. It then calculates a global parity across all the symbols. The diagram below shows LRC in a cluster with 12 servers.

Volume3

Figure 3 LRC in a cluster with 12 servers

In the above example we have 11 symbols, 8 data symbols represented by X1, X2, X3, X4, Y1, Y2, Y3 and Y4, 2 local parity symbols represented by PX and PY, and finally one global parity symbol represented by Q. This particular layout is also sometimes described as (8,2,1) representing 8 data symbols, 2 groups and 1 global parity.

Inside each group, the parity symbol is calculated as a simple XOR across the data symbols in the group. XOR is not a very computationally intensive operation and thus requires few CPU cycles. Q is calculated using the data symbols and local parity symbols across all the groups. In this particular configuration, the storage efficiency is 8/11, or ~72%, as there are 8 data symbols out of 11 total symbols.
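
To make the XOR arithmetic concrete, here is a tiny sketch with made-up one-byte values standing in for symbols:

# Hypothetical one-byte "symbols" in the X group
$X1 = 0x1A; $X2 = 0x2B; $X3 = 0x3C; $X4 = 0x4D

# The local parity for the group is a simple XOR across its data symbols
$PX = $X1 -bxor $X2 -bxor $X3 -bxor $X4

# If the drive holding X2 fails, X2 is recovered from the surviving symbols and PX
$Recovered = $X1 -bxor $X3 -bxor $X4 -bxor $PX
"{0:X2}" -f $Recovered   # 2B, which matches the original X2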

As mentioned above, in storage systems a single failure is more common than multiple failures and LRC is more efficient and incurs less storage IO when reconstructing data in the single device failure scenario and even some multi-failure scenarios.

Using the example from figure 3 above:

What happens if there is one failure, e.g. the disk that stores X2 fails? In that case X2 is reconstructed by reading X1, X3, X4 and PX (four reads), performing an XOR operation (simple), and writing X2 (one write) to a different disk in server 2. Notice that none of the Y symbols, nor the global parity Q, are read or involved in the reconstruction.

What happens if there are two simultaneous failures, e.g. the disk that stores X1 fails and the disk that stores Y2 also fails? In this case, because the failures occurred in two different groups, X1 is reconstructed by reading X2, X3, X4 and PX (four reads), performing an XOR operation, and writing X1 (one write) to a different disk in server 1. Similarly, Y2 is reconstructed by reading Y1, Y3, Y4 and PY (four reads), performing an XOR operation, and writing Y2 (one write) to a different disk in server 5. That is a total of eight reads and two writes. Notice that only simple XOR was involved in the data reconstruction, reducing the pressure on the CPU.

What happens if there are two failures in the same group, e.g. the disks that store X1 and X2 have both failed? In this case X1 is reconstructed by reading X3, X4, PX, Y1, Y2, Y3, Y4 and Q (eight reads), performing the erasure code computation, and writing X1 to a different disk in server 1. It is not necessary to read PY, since it can be calculated from Y1, Y2, Y3 and Y4. Once X1 is reconstructed, X2 can be reconstructed using the same mechanism described for a single failure above, except that no additional reads are needed.

Notice how, in the example above, one server does not hold any symbols? This configuration allows reconstruction of symbols even in the case where a server has malfunctioned and is permanently retired, after which the cluster effectively has only 11 servers until a replacement server is added to the cluster.

The number of data symbols in a group depends on the cluster size and the drive types being used. Solid state drives perform better, so the number of data symbols in a group can be larger. The table below outlines the default erasure coding scheme (RS or LRC) and the resulting efficiency for hybrid and all-flash storage configurations at various cluster sizes.

Servers   SSD + HDD Layout   Efficiency   All-SSD Layout   Efficiency
4         RS 2+2             50%          RS 2+2           50%
5         RS 2+2             50%          RS 2+2           50%
6         RS 2+2             50%          RS 2+2           50%
7         RS 4+2             66%          RS 4+2           66%
8         RS 4+2             66%          RS 4+2           66%
9         RS 4+2             66%          RS 6+2           75%
10        RS 4+2             66%          RS 6+2           75%
11        RS 4+2             66%          RS 6+2           75%
12        LRC (8,2,1)        72%          RS 6+2           75%
13        LRC (8,2,1)        72%          RS 6+2           75%
14        LRC (8,2,1)        72%          RS 6+2           75%
15        LRC (8,2,1)        72%          RS 6+2           75%
16        LRC (8,2,1)        72%          LRC (12,2,1)     80%

Accelerating parity volumes

In Storage Spaces Direct it is possible to create a hybrid volume. A hybrid volume is essentially a volume where some of the volume uses mirror resiliency and some of the volume uses parity resiliency.

Volume4

Figure 4 Hybrid Volume

The purpose of mixing mirror and parity in the volume is to provide a balance between storage performance and storage efficiency. Hybrid volumes require the use of the ReFS on-disk file system as it is aware of the volume layout:

  • ReFS always writes data to the mirror portion of the volume, taking advantage of the write performance of mirror
  • ReFS rotates data into the parity portion of the volume when needed, taking advantage of the efficiency of parity
  • Parity is only calculated when rotating data into the parity portion
  • ReFS writes updates to data stored in the parity portion by placing the new data in the mirror portion and invalidating the old data in the parity portion, again taking advantage of the write performance of mirror

ReFS starts rotating data into the parity portion at 60% utilization of the mirror portion and gradually becomes more aggressive in rotating data as utilization increases. It is highly desirable to:

  • Size the mirror portion to twice the size of the active working set (hot data) to avoid excessive data rotation
  • Size the overall volume to always have 20% free space to avoid excessive fragmentation due to data rotation
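
Creating such a volume means referencing both tiers. A minimal sketch, assuming the default tier names created by Enable-ClusterS2D (something like “Performance” and “Capacity”, as described earlier) and purely illustrative sizes:

New-Volume -StoragePoolFriendlyName "S2D on Cluster1" -FriendlyName "Volume2" `
    -FileSystem CSVFS_ReFS `
    -StorageTierFriendlyNames Performance, Capacity `
    -StorageTierSizes 200GB, 800GB   # mirror portion, parity portion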

Conclusion

I hope this blog post helps provide more insight into how mirror and parity resiliency works in Storage Spaces Direct, how data is laid out across servers, and how data is reconstructed in various failure cases.

We also discussed how Local Reconstruction Codes (LRC) increases the efficiency of data reconstruction, reducing both storage IO churn and CPU cycles, and overall helps the system return to a healthy state more quickly.

And finally we discussed how hybrid volumes provide a balance between the performance of mirror and the efficiency of parity.

Let me know what you think.

Until next time

Claus

Survey: Why R2?


Hi folks, Ned here again. I have a very quick survey for you if you’re a Windows Server customer. It is anonymous, only has one mandatory question, and one secondary optional question – shouldn’t take you more than 30 seconds and will help us understand our customer base better.

Why do you deploy R2 versions so much more than non-R2?

Thanks in advance.

  • Ned “actual monkey” Pyle

Stop using SMB1


Hi folks, Ned here again and today’s topic is short and sweet:

Stop using SMB1. Stop using SMB1. STOP USING SMB1!

Earlier this week we released MS16-114, a security update that prevents denial of service and remote code execution. If you need this security patch, you already have a much bigger problem: you are still running SMB1.

The original SMB1 protocol is nearly 30 years old, and like much of the software made in the 80’s, it was designed for a world that no longer exists. A world without malicious actors, without vast sets of important data, without near-universal computer usage. Frankly, its naivete is staggering when viewed through modern eyes. I blame the West Coast hippy lifestyle.

Let me explain why this protocol needs to hit the landfill.

SMB1 isn’t safe

When you use SMB1, you lose key protections offered by later SMB protocol versions:

  • Pre-authentication integrity (SMB 3.1.1+) – protects against tampering with connection setup and downgrade attacks
  • Secure dialect negotiation (SMB 3.0, 3.02) – protects against downgrade attacks
  • Encryption (SMB 3.0+) – prevents inspection of data on the wire and man-in-the-middle attacks
  • Insecure guest authentication blocking (SMB 3.0+ on Windows 10) – protects against man-in-the-middle attacks
  • Better message signing (SMB 2.02+) – HMAC SHA-256 replaces MD5 as the hashing algorithm, and SMB 3.0+ adds AES-CMAC

The nasty bit is that no matter how you secure all these things, if your clients use SMB1, then a man-in-the-middle can tell your client to ignore all the above. All they need to do is block SMB2+ on themselves and answer to your server’s name or IP. Your client will happily derp away on SMB1 and share all its darkest secrets unless you required encryption on that share to prevent SMB1 in the first place. This is not theoretical – we’ve seen it. We believe this so strongly that when we introduced Scale-Out File Server, we explicitly prevented SMB1 access to those shares!
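
As an aside, requiring encryption is one cmdlet; a sketch with a hypothetical share name:

# Require SMB encryption on a share; SMB1 clients cannot negotiate it and are shut out
Set-SmbShare -Name "Finance" -EncryptData $true

# Or require encryption for the whole server
Set-SmbServerConfiguration -EncryptData $true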

SMB1 isn’t modern or efficient

When you use SMB1, you lose key performance and productivity optimizations for end users.

  • Larger reads and writes (2.02+) – more efficient use of faster networks or higher-latency WANs. Large MTU support.
  • Peer caching of folder and file properties (2.02+) – clients keep local copies of folders and files via BranchCache
  • Durable handles (2.02, 2.1) – allow for connection to transparently reconnect to the server if there is a temporary disconnection
  • Client oplock leasing model (2.02+) – limits the data transferred between the client and server, improving performance on high-latency networks and increasing SMB server scalability
  • Multichannel & SMB Direct (3.0+) – aggregation of network bandwidth and fault tolerance if multiple paths are available between client and server, plus usage of modern ultra-high throughput RDMA infrastructure
  • Directory Leasing (3.0+) – Improves application response times in branch offices through caching

SMB1 isn’t usually necessary

This is the real killer: there are very few cases left in any modern enterprise where SMB1 is the only option. Some legit reasons:

  1. You’re still running XP or WS2003 under a custom support agreement.
  2. You have some decrepit management software that demands admins browse via the ‘network neighborhood’ master browser list.
  3. You run old multi-function printers with antique firmware in order to “scan to share”.

None of these things should affect the average end user or business. Unless you let them.

We work carefully with partners in the storage, printer, and application spaces all over the world to ensure they provide at least SMB2 support and have done so with annual conferences and plugfests for six years. Samba supports SMB 2 and 3. So do OS X and macOS. So do EMC, NetApp, and their competitors. So do our licensed SMB providers like Visuality and Tuxera, who also help printer manufacturers join the modern world.

A proper IT pro is always from Missouri though. We provide SMB1 usage auditing in Windows 10 and Windows Server 2016 just to be sure. That way you can configure your Windows Servers to see if disabling SMB1 would break someone:

Set-SmbServerConfiguration -AuditSmb1Access $true

Then just examine the SMBServer\Audit event log on those systems. If you have servers older than WS2016, now is a good time to talk upgrade. Ok, that’s a bit extortionist: now is the time to talk to your blue teams, network teams, and other security folks about if and where they are seeing SMB1 usage on the network. If they have no idea, they need to get one. If you still don’t know because this is a smaller shop, run your own network captures on a sample of your servers and clients and see if SMB1 appears.
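
A quick way to pull those audit entries (a sketch; on Windows Server 2016 the SMB1-access audit events land in this channel, and to my knowledge use event ID 3000):

# List SMB1 access events recorded by the SMB server audit log
Get-WinEvent -LogName "Microsoft-Windows-SMBServer/Audit" |
    Where-Object Id -Eq 3000 |
    Select-Object TimeCreated, Message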

SMB1 removal isn’t hard

Starting in Windows 8.1 and Windows Server 2012 R2, we made removal of the SMB1 feature possible and trivially easy.

On Server, the Server Manager approach:

image

On Server, the PowerShell approach (Remove-WindowsFeature FS-SMB1):

image

On Client, the add remove programs approach (appwiz.cpl):

image

On Client, the PowerShell approach (Disable-WindowsOptionalFeature -Online -FeatureName smb1protocol)

image

On legacy operating systems:

When using operating systems older than Windows 8.1 and Windows Server 2012 R2, you can’t remove SMB1 – but you can disable it: KB 2696547- How to enable and disable SMBv1, SMBv2, and SMBv3 in Windows Vista, Windows Server 2008, Windows 7, Windows Server 2008 R2, Windows 8, and Windows Server 2012
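
As a sketch of what the KB describes: on Windows 8 / Server 2012 and later the SMB1 server component can be turned off with one cmdlet, while older systems use the registry value the article documents (both require a restart):

# Windows 8 / Server 2012 and later
Set-SmbServerConfiguration -EnableSMB1Protocol $false

# Windows 7 / Server 2008 R2 and earlier (server side)
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" `
    -Name SMB1 -Type DWORD -Value 0 -Force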

A key point: when you begin the removal project, start at smaller scale and work your way up. No one says you must finish this in a day.

SMB1 isn’t good

Stop using SMB1. For your children. For your children’s children. Please. We’re begging you.

– Ned “and the rest of the SMB team at Microsoft” Pyle

The not future of SMB1 – another MS engineering quickie survey
