
IFS Plugfest 28 is coming – sign up now


Hiya folks, Ned here again. Our friends on the Microsoft File System Filter team asked that we share some upcoming Plugfest info with you. Have a gander:

—————————————————————————

We are pleased to inform you that IFS Plugfest 28 has been scheduled. Here are some preliminary event details:

When

Monday, April 25th to Friday, April 29th, 2016. The event begins at 9am and ends at 6pm each day, except for Friday, when it ends at 3pm.

Where

Building 37, rooms 1717-1727, Microsoft Campus, Redmond, Washington, USA.

Audience

Independent Software Vendors (ISVs) and developers writing file system filter drivers and/or network filter drivers for Windows

Cost

Free – There is no cost to attend this event. Attendees are responsible for their own travel and lodging.

Goal

  • Compatibility testing with Windows vNext and other file system filters as well as network filter drivers
  • Ensuring end users have a smooth upgrade from Windows 7 and above to Windows vNext

Benefits

  • The opportunity to test products extensively for interoperability with other vendors’ products and with Microsoft products. This has traditionally been a great way to understand interoperability scenarios and flush out any interoperability-related bugs.
  • Talks and informative sessions organized by the File System Filters & Network Filter team about topics that affect the filter driver community.
  • Opportunities to meet with the file system team, the network team, the cluster team, and various other teams at Microsoft and get answers to technical questions.

Registration

To register, please fill in the Registration Form before February 19th 2016. We will follow up through email to confirm your registration. Due to constraints in space and resources at this Plugfest, ISVs are required to limit their participation to a maximum of two persons representing a product to be tested for interoperability issues. There will be no exceptions to this rule, so please plan for the event accordingly. Please look for messages from fsfcomm@microsoft.com for registration confirmation.

You can learn more about File System Filters and Network Filters in the given links.

—————————————————————————

Sounds like fun. Be sure to get back to them ASAP; last year I saw people turned away because they waited too long.

– Ned “The Gatekeeper” Pyle


Offline Files and Continuous Availability: the monstrous union you should not consecrate


Hi all, Ned here again with a quick chat about mixing Continuous Availability and Offline Files. As you know, we have several public docs recommending against combining CA and Client Side Caching (aka CSC, aka Offline Files), because when users attempt to go offline, the transition can take up to six minutes. This usually leads to unhappy humans and applications. Today I'll explain more and give you some options.

The inherent problem

CA was designed for the Scale-out File Server workload, and it provides both disk write-through guarantees and “transparent failover”, where a client re-attaches to file handles after a cluster failover, thanks to a resume key filter running on the server. This means that applications like Hyper-V and SQL Server continue to dish out their virtual machines and databases when a storage cluster node reboots. You can enable CA on non-SOFS shares in a cluster, and you can use end-user applications like Word with them, but for a variety of reasons, it’s not something we recommend. On a standalone file server, you cannot configure it at all. Windows Server 2012 set the precedent of enabling CA by default on all clustered shares; something I now regret but cannot change.

CSC was designed for branch office and mobile users back when networks were hilarious. A user could cache their unstructured data locally and synchronize with a file server over SMB. By Windows 7 and 8, it was a pretty decent system, with background sync and offline functionality that allowed a user to seamlessly roam while IT got centralized backups.

The root cause of the issues between CSC and CA? That’s easy: we wrote Offline Files in 1998 and Continuous Availability in 2010. They are products of very different networks, clients, and strategies – but all laid on top of a single protocol family, SMB. They were never designed to interoperate. Heck, we didn’t find the 6-minute timeout issue – a customer did, more than two years after release. Offline transitions did not happen quickly enough and applications saw long hangs when trying to access an unreachable share, or the opposite, where data was saved to the local cache instead of being durably persisted on the server. The experience is crummy.

They were justifiably… displeased.

Or maybe he just tried one of Rick Claus’ beers?

The difference between Windows 8 and Windows 10

Windows 8.1 and Windows 10 support SMB3, allowing them to utilize CA, transparent failover, and Scale-out File Servers. But due to the aforementioned timeout problem that many customers reported, we decided in Windows 10 to fix the glitch.

Starting in Win10, when you connect to a CA-enabled share, there is no longer an option to use offline files. No matter the settings, files will not cache and the user will not run into timeout problems.


When you mark a share as “continuously available” you are essentially committing to the following contract:

  1. The SMB server must always be available.
  2. Any data written to the server needs durable persistence on disk and must be resilient to disk failures.
  3. The network between the client and the server is expected to be fault tolerant and high speed.

The CA feature tries to hide transient failures at any of the above three interfaces from applications by holding and resuming handles. Remember, it’s for high-throughput, high-availability, high-IO, mission critical server applications like SQL Server and Hyper-V. #3 above directly conflicts with “offline files” – which assumes flakey, slow, and intermittent network connectivity.
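If you aren't sure where you stand today, a quick inventory from the file server (or any cluster node) shows which shares advertise continuous availability and what caching they allow. A minimal sketch using the standard SMB cmdlets:

# Lists every share's CA flag and caching mode; run on the file server or a cluster node.
Get-SmbShare | Select-Object Name, ContinuouslyAvailable, CachingMode | Format-Table -AutoSize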

Naturally, this decision to change the behavior in Win10 may not make your day, which leads me to:

A variety of alternatives

  • Use Work Folders. The future is definitely Work Folders. CSC hasn’t had a feature update since Windows 8, and that should be a strong sign that it probably never will. We have moved into a new phase, where users want to sync their data from PCs, phones, and tablets – and not necessarily all running Windows nor SMB. Work Folders brings all that to the table, and is actively under development and accepting feedback. Heck, Jane the Work Folders PM wrote about it constantly. She never sleeps.
  • Just use CA. If you are looking for data consistency and transparent failover for non-mobile users, stop using CSC. Disable it on all your CA shares using Server Manager, Failover Cluster Manager or Set-SmbShare:

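For example, to turn off client-side caching on a continuously available share (the share name Data is purely illustrative), something like this should do it:

# Run on the file server or a cluster node; Data is a placeholder share name.
Set-SmbShare -Name Data -CachingMode None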

  • Just use CSC. If you are looking for mobile user and crumbling network support, stop using CA. Disable it on your shares using Server Manager, Failover Cluster Manager, Set-SmbShare, or Explorer:

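For example, to strip the CA property from a clustered share while leaving caching available (again, the share name is a placeholder):

# Run against the clustered file server that owns the share.
Set-SmbShare -Name Data -ContinuouslyAvailable $false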

  • Use a two-share combination. If you have a hybrid set of mobile users and desktop users all accessing the same data, nothing is stopping you from creating two shares to the same data – one with CA enabled and one with CSC enabled. Then your users can select the share that matches their needs.

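As a rough sketch of the two-share approach (the share names, scope, and path are all placeholders for your environment):

# One folder, two shares: a CA share for desktops, a cached share for mobile users.
New-SmbShare -Name DataCA -ScopeName FS1 -Path D:\Shares\Data -ContinuouslyAvailable $true -CachingMode None
New-SmbShare -Name DataCSC -ScopeName FS1 -Path D:\Shares\Data -ContinuouslyAvailable $false -CachingMode Documents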

The future

We heard the Windows 8.1 feedback and changed the behavior in Windows 10 to stop the timeout issue. We then had feedback from Windows 10 customers that wanted it back the old way. So we decided to add this ability back into a future release of Windows 10 as an off-by-default opt-in, so that you can return to the Windows 8.1 behavior when desired. When it releases, I’ll update this blog post.

When you have hundreds of millions of Windows computers, it's a tricky balancing act to please everyone. But this is proof that we are always listening and adjusting.

Finally

Hibbert: You know, isn’t it interesting how the left – or sinister – twin is invariably the evil one. I had this theory that… Wait a minute. Hugo’s scar is on the wrong side. He couldn’t have been the evil left twin. That means the evil twin is, and always has been… Bart!

Bart: Oh, don’t look so shocked.

Hibbert: Well, chalk this one up to carelessness on my part.

    – Pobody’s Nerfect, “Treehouse of Horror VII”, The Simpsons

Until next time,

– Ned “Hugo” Pyle

Work Folders for Android – Released


We are happy to announce that an Android app for Work Folders has been released in the Google Play Store® and is available as a free download.

 

 

Work Folders for Android app in the Google Play Store

 

There is also a version for iPad and iPhone.

 

Overview

Work Folders is a Windows Server feature, introduced in Windows Server 2012 R2, that allows individual employees to access their files securely from inside and outside the corporate environment. This app connects to it and enables file access on your Android phone or tablet. Work Folders enables this while allowing the organization's IT department to fully secure that data.

The app features an intuitive UI, selective sync, end-to-end encryption, and search.
To view a file, you can open it in an appropriate app on the device.
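For context, the server side of this is just the Work Folders role service plus a sync share. A minimal sketch on Windows Server 2012 R2 or later, where the share name, path, and security group are placeholders:

# Install the Work Folders role service and publish a sync share for a group of users.
Add-WindowsFeature FS-SyncShareService
New-SyncShare -Name "WorkFolders" -Path "D:\SyncShares\WorkFolders" -User "CONTOSO\WorkFoldersUsers"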

Work Folders for Android – file browser

 

 Work Folders App on Android – Features

 

  • Pin files for offline viewing.
    Saves storage space by showing all available files while locally storing and keeping in sync only the files you care about.
  • Files are stored encrypted at all times, both on the wire and at rest on the device.
  • Access to the app is protected by an app passcode – keeping others out even if the device is left unlocked and unattended.
  • Allows for DIGEST and Active Directory Federation Services (ADFS) authentication mechanisms, including multi-factor authentication.
  • Search for files and folders.
  • Open files in other apps that might be specialized to work with a certain file type.

 

Work Folders offers encryption at rest and super fast access on-demand. Quite difficult to take that screenshot :)

 

Find files and folders as you type.

 Some Android phone screenshots

We also support the phone form factor: Here showing offline access to pinned files.

Protecting your files via an app-passcode.

Android Version support

Work Folders for Android is supported on all devices running Android version 4.4 (KitKat) and later.
At release, this covers a little over 70% of the Android market, and we believe coverage will increase over time.

 

Blogs and Links

If you’re interested in learning more about Work Folders, here are some great resources:

 

 

All the goods

 

Introduction and Getting Started

 

 

Advanced Work Folders Deployment and Management

 

 

Videos

 

Deploy an entire Windows Server 2016 Software Defined Storage lab in minutes


Heya folks, Ned here again. Superstar Microsoft PFE Jaromir Kaspar recently posted a Windows Server 2016 lab creation tool on CodePlex and I highly recommend it. Download the Windows Server 2016 ISO file and bing-bang-bong, the tool kicks out an entire working datacenter of clusters, Storage Spaces Direct, Storage Replica, scale-out file servers, Nano servers, and more. All the Software Defined Storage you could ever want, with none of the boring build time – just straight into the usage and tinkering and learning. It’s a treat.

You just run the 1_prereq.ps1 file and it readies the lab. Then run 2_createparentdisks.ps1 and pick an ISO to create your VHDXs. Then use 3_deploy.ps1 to blast out everything. Voila. So simple, a Ned can do it.
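In other words, the whole flow from an elevated PowerShell prompt looks roughly like this (the folder path is whatever you extracted the scripts to):

Set-Location C:\WSLab            # illustrative path to the extracted scripts
.\1_prereq.ps1                   # readies the lab folder and prerequisites
.\2_createparentdisks.ps1        # point it at the Windows Server 2016 ISO to build the parent VHDXs
.\3_deploy.ps1                   # blasts out the whole lab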


The download, documentation, and discussion all live on the project's CodePlex page. What are you waiting for?

– Ned “work smarter, not harder” Pyle

(Cross post) Industry Collaborates on Standards to Tackle Management Challenges of Multi-Vendor Data Centers


Hi folks, Ned here again with some words from our MS colleagues in private cloud management and Mohan Kumar at Intel:

Private and hybrid cloud datacenters with software defined infrastructure increasingly deploy horizontal, scale-out solutions, which often include large quantities of simple servers. Microsoft recognized this shift in the industry early on and started to invest in software defined network, storage, and compute technologies in Windows Server. Specifically in the area of compute management, Microsoft actively contributed to the development of Redfish, an open industry standard specification and schema that specifies a RESTful interface and utilizes JSON and OData to help customers integrate solutions within their existing tool chains.

Dell, Hewlett Packard Enterprise, Intel, Microsoft and VMware recently formed a working group to contribute extensions to Redfish that provide support for Compute, PCIe Switch, Storage and Switched Networking. The following blog post was originally posted on Intel’s site:

Original author: Mohan Kumar (Intel)

Scalability in today’s data center is increasingly achieved with horizontal, scale-out solutions, employing large quantities of servers. The usage model of scale-out infrastructure is drastically different than that of traditional enterprise data centers. Redfish, an open industry standard released in 2015, was designed to provide a new approach for simple, modern and secure management of scalable platform hardware and enable next-generation datacenter infrastructures.

Five leading companies in datacenter solutions – Dell, Hewlett Packard Enterprise, Intel, Microsoft and VMware – are contributing to industry standard leadership efforts by extending the capabilities of Redfish. This work will support a Software Defined Infrastructure (SDI) where orchestration software can automate provisioning and configuration in multi-vendor and open source environments to meet application and operational policies.

Read the rest of this blog post here: https://communities.intel.com/community/itpeernetwork/datastack/blog/2016/03/14/multi-vendor-data-centers

SMB Transparent Failover – making file shares continuously available


Note: this post originally appeared on https://aka.ms/clausjor by Claus Joergensen.

SMB Transparent Failover is one of the key features introduced in Server Message Block (SMB) 3.0. SMB 3.0 is new in Windows Server 2012 and Windows 8. I am the program manager for SMB Transparent Failover, and in this blog post I will give an overview of this new feature.

In Windows Server 2012 the file server introduces support for storing server application data, which means that server applications, like Hyper-V and SQL Server, can store their data files, such as virtual machine files or SQL databases, on Windows file shares. These server applications expect their storage to be reliable and always available, and they do not generally handle IO errors or unexpected closures of handles very well. If the server application cannot access its storage, this often leads to databases going offline or virtual machines stopping or crashing because they can no longer write to their disk.

SMB Transparent Failover enables administrators to configure Windows file shares, in Windows Failover Clustering configurations, to be continuously available. Using continuously available file shares enables administrators to perform hardware or software maintenance on any cluster node without interrupting the server applications that are storing their data files on these file shares. Also, in case of a hardware or software failure, the server application nodes will transparently reconnect to another cluster node without interruption of the server applications. In case of a SMB scale-out file share (more on Scale-Out File Server in a following blog post), SMB Transparent Failover allows the administrator to redirect a server application node to a different file server cluster node to facilitate better load balancing.

For more information on storing server application data on SMB file shares and other features to support this scenario, see Windows Server “8” – Taking Server Application Storage to Windows File Shares

 

Installation and configuration

SMB Transparent Failover has the following requirements:

  • A failover cluster running Windows Server 2012 with at least two nodes. The configuration of servers, storage and networking must pass all the tests performed in the Validate a Configuration wizard.
  • File Server role is installed on all cluster nodes.
  • Clustered file server configured with one or more file shares created with the continuously available property. This is the default setting.
  • SMB client computers running the Windows 8 client or Windows Server 2012.
  • To realize SMB Transparent Failover, both the SMB client computer and the SMB server computer must support SMB 3.0, which is introduced in Windows 8 and Windows Server 2012. Computers running down-level SMB versions, such as 1.0, 2.0 or 2.1 can connect and access data on a file share that has the continuously available property set, but will not be able to realize the benefits of the SMB Transparent Failover feature.
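A quick way to confirm that a client actually negotiated SMB 3.0 against the clustered file server is to check the dialect from the client while a file on the share is open (a minimal sketch):

    Get-SmbConnection | Select-Object ServerName, ShareName, Dialect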

    Installing and creating a Failover Cluster

    For information about how to install the Failover Clustering feature and how to create and troubleshoot a Windows Server 2012 Failover Cluster, see these blog posts:
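    If you just need a basic two-node cluster in a lab, the PowerShell flow is roughly as follows; the node names and cluster IP address below are placeholders:

    # Install the feature on each node, then validate and create the cluster from one of them.
    Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
    Test-Cluster -Node FS1, FS2
    New-Cluster -Name SMBCLUS -Node FS1, FS2 -StaticAddress 192.168.9.98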

    Installing the File Server role

    Once the Failover Cluster is up and running, we can install the File Server role. Do the following for each node in the Failover Cluster:

    Graphical User Interface
    • Start Server Manager
    • Click Add roles and features
    • In the Add Roles and Features Wizard, do the following:
      • In Before you begin, click Next
      • In Select installation type, click Next
      • In Select destination server, choose the server where you want to install the File Server role, and click Next
      • In Select server roles, expand File And Storage Services, expand File and iSCSI Services, and check the check box for File Server and click Next
      • In Select features, click Next
      • In Confirm installation selections, click Install


    Figure 1 – Installing File Server role

    PowerShell

    In an elevated PowerShell shell, do the following:

    Add-WindowsFeature -Name File-Services

    Create clustered File Server

    Once the File Server role is installed on all cluster nodes, we can create a clustered file server. In this example we will create a clustered file server of type “File Server for general use” and name it SMBFS. I will provide more information on “Scale-Out File Server for application data” in a follow-up blog post.

    Do the following to create a clustered file server.

    Graphical User Interface
    • Start Server Manager
    • Click Tools and select Failover Cluster Manager
    • In the console tree, do the following
    • Select and expand the cluster you are managing

     

  • Select Roles
  • In the Actions pane, click Configure Role
  • In Before You Begin, click Next
  • In Select Role, select File Server and click Next
  • In File Server Type, select the type of clustered file server you want to use
  • In Client Access Point, enter the name of the clustered file server
  • In Client Access Point, complete the Network Address for static IP addresses as needed and click Next
  • In Select Storage, select the disks that you want to assign to this clustered file server and click Next
  • In Confirmation, review your selections and when ready click Next
    Figure 2 – Select File Server Type

    Figure 3 – Configure Client Access Point


    Figure 4 – Select Storage

    PowerShell

    In an elevated PowerShell shell, do the following:

    Add-ClusterFileServerRole -Name SMBFS -Storage "Cluster Disk 1" -StaticAddress 192.168.9.99/24

    Create a file share that is continuously available

    Now that we have created the clustered file server, we can create file shares that are continuously available. In this example we will create a file share named “appstorage” on the clustered file server we created previously.

    Do the following to create a file share that is continuously available:

    Graphical User Interface
    • Start Server Manager
    • Click Tools and select Failover Cluster Manager
    • In the console tree, do the following
    • Select and expand the cluster you are managing

     

  • Select Roles
  • In the Results pane, select the file server where you want to create the file share and in the Actions pane click Add File Share. This will start the New Share Wizard
  • In the New Share Wizard, do the following
    • In Select Profile, select the appropriate profile (SMB Share – Applications in this example) and click Next
  • In Share Location, select the volume where you want to create the share and click Next
  • In Share Name, enter the share name and click Next
  • In Configure Share Settings, verify Enable continuous availability is set and click Next
  • In Specify permissions and control access, modify the permissions as needed to enable access and click Next
  • In Confirmation, review your selections and when ready click Create
  • Click Close
    Figure 5 – Select Profile

    Figure 6 – Select server and path


    Figure 7 – Share Name


    Figure 8 – Configure Share Settings

    To verify a share has the continuously available property set, do the following:

    • Start Server Manager
    • Click Tools and select Failover Cluster Manager
    • In the console tree, do the following
    • Select and expand the cluster you are managing

     

  • Select Roles
  • In the Results pane, select the file server you want to examine
  • In the bottom window, click the Shares tab
  • Locate the share of interest and examine the Continuous Availability property
    PowerShell

    These steps assume the folder for the share is already created. If this is not the case, create the folder before continuing.

    In an elevated PowerShell shell on the cluster node where the clustered file server is online, do the following to create a file share with continuous availability property set:

    New-SmbShare -Name AppStorage -Path f:\appstorage -Scope smbfs -FullAccess smbtest\administrator

    In an elevated PowerShell shell on the cluster node where the clustered file server is online, do the following to verify a file share has continuous availability property set.

    Get-SmbShare -Name AppStorage | Select *

    PresetPathAcl         : System.Security.AccessControl.DirectorySecurity
    ShareState            : Online
    AvailabilityType      : Clustered
    ShareType             : FileSystemDirectory
    FolderEnumerationMode : Unrestricted
    CachingMode           : None
    CATimeout             : 0
    ConcurrentUserLimit   : 0
    ContinuouslyAvailable : True
    CurrentUsers          : 0
    Description           :
    EncryptData           : False
    Name                  : appstorage
    Path                  : F:\Shares\appstorage
    Scoped                : True
    ScopeName             : SMBFS
    SecurityDescriptor    : O:BAG:DUD:(A;OICI;FA;;;WD)
    ShadowCopy            : False
    Special               : False
    Temporary             : False
    Volume                : \\?\Volume{266f94b0-9640-4e1f-b056-6a3e999e6ecf}\

    Note that we didn’t request the continuous availability property to be set. This is because the property is set by default. If you want to create a file share without the property set, do the following:

    New-SmbShare -Name AppStorage -Path f:\appstorage -Scope smbfs -FullAccess smbtest\administrator -ContinuouslyAvailable:$false
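    A quick way to double-check is to read the property back; for the share created above this should return False:

    (Get-SmbShare -Name AppStorage).ContinuouslyAvailable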

    Using a file share that is continuously available

    Now that we have created a clustered file server with a file share that is continuously available, let’s go ahead and use it.

    The below diagram illustrates the setup that I will be using in this section.


    Figure 9 – Clustered File Server

    On the file share is a 10GB data file (testfile.dat) that is being accessed by an application on the SMB client computer (FSF-260403-10). The below screenshot shows the SMB Client Shares performance counters for \\smbfs\appstorage share as seen from the SMB Client. As you can see the application is doing 8KB reads and writes.


    Figure 10 – Data Access

    Zeroing in on data requests/sec in graph form, we see the following:


    In an elevated PowerShell shell on the cluster node where the clustered file server is online, do the following to list the open files and verify the handle is continuously available:

    Get-SmbOpenFile | Select *

    ClientComputerName    : [2001:4898:e0:32af:890b:6268:df3b:bf8]
    ClientUserName        : SMBTEST\Administrator
    ClusterNodeName       :
    ContinuouslyAvailable : True
    Encrypted             : False
    FileId                : 4415226380557
    Locks                 : 0
    Path                  : F:\Shares\appstorage\testfile.dat
    Permissions           : 1180059
    ScopeName             : SMBFS
    SessionId             : 4415226380341
    ShareRelativePath     : testfile.dat

    Planned move of the cluster group

    With assurance that the file handle is indeed continuously available, let’s go ahead and move the cluster group to another cluster node. In an elevated PowerShell shell on one of the cluster nodes, do the following to move the cluster group:

    Move-ClusterGroup -Name smbfs -Node FSF-260403-08

    Name      OwnerNode       State
    ----      ---------       -----
    smbfs     FSF-260403-08   Online

    Looking at Data Requests/sec in Performance Monitor, we see that there is a short brown-out of a few seconds where IO is stalled while the cluster group is moved, but IO continues uninterrupted once the cluster group has completed the move.

    The tear down and setup of SMB sessions, connections and active handles between the SMB client and the SMB server on the cluster nodes is handled completely transparently to the application. The application does not see any errors during this transition, only a brief stall in IO.


    Figure 11 – Move Cluster Group

    Let’s take a look at the operational log for SMB Client in Event Viewer (Applications and Services Log – Microsoft – Windows – SMB Client – Operational) on the SMB Client computer.

    In the event log we see a series of warning events around 9:36:01PM. These warning events signal the tear down of SMB connections, sessions and shares. There is also a series of information events around 9:36:07PM. These information events signal the recovery of SMB sessions, connections and shares. These events are very useful in understanding the activities during the recovery and confirming that the recovery was successful. :)


    Figure 12 – Events for planned move

    So how does SMB Transparent Failover actually work? When the SMB client initially connects to the file share, the client determines whether the file share has the continuous availability property set. If it does, this means the file share is a clustered file share and supports SMB transparent failover. When the SMB client subsequently opens a file on the file share on behalf of the application, it requests a persistent file handle. When the SMB server receives a request to open a file with a persistent handle, the SMB server interacts with the Resume Key filter to persist sufficient information about the file handle, along with a unique key (resume key) supplied by the SMB client, to stable storage.

    If a planned move or failure occurs on the file server cluster node to which the SMB client is connected, the SMB client attempts to reconnect to another file server cluster node. Once the SMB client successfully reconnects to another node in the cluster, the SMB client starts the resume operation using the resume key. When the SMB server receives the resume key, it interacts with the Resume Key filter to recover the handle state to the same state it was prior to the failure with end-to-end support (SMB client, SMB server and Resume Key filter) for operations that can be replayed, as well as operations that cannot be replayed. Resume Key filter also protects the handle state after failover to ensure namespace consistency and that the client can reconnect. The application running on the SMB client computer does not experience any failures or errors during this operation. From an application perspective, it appears the I/O operations are stalled for a small amount of time.

    To protect against data loss from writing data into an unstable cache, persistent file handles are always opened with write through.

    Unplanned failure of the active cluster node

    Now, let’s introduce an unplanned failure. The cluster group was moved to FSF-260403-08. Since all these machines are running as virtual machines in a Hyper-V setup, I can use Hyper-V manager to reset FSF-260403-08.

    Looking at Data Requests/sec in Performance Monitor, we see that there is a slightly longer brown-out where IO is stalled. In this time period, the cluster detects that FSF-260403-08 has failed and starts the cluster group on another node. Once started, SMB can perform transparent recovery.


    Figure 13 – Unplanned Failure

    And again, the SMB Client event log shows events related to the failure:


    Figure 14 – Events for unplanned failure

    Now you will probably ask yourself: “Wait a minute. SMB is running over TCP and TCP timeout is typically 20 seconds and SMB uses a couple of them before determining the cluster node failed. So how come the recovery is ~10 seconds and not 40 or 60 seconds??”

    Enter Witness service.

    Witness service was created to enable faster recovery from unplanned failures, allowing the SMB client to not have to wait for TCP timeouts. Witness is a new service that is installed automatically with the failover clustering feature. When the SMB client initially connects to a cluster node, the SMB client notifies the Witness client, which is running on the same computer. The Witness client obtains a list of cluster nodes from the Witness service running on the cluster node it is connected to. The Witness client picks a different cluster node and issues a registration request to the Witness service on that cluster node. The Witness service then listens to cluster events related to the clustered file server the SMB client is connected to.

    If an unplanned failure occurs on the file server cluster node the SMB client is connected to, the Witness service on the other cluster node receives a notification from the cluster service. The Witness service notifies the Witness client, which in turn notifies the SMB client that the cluster node has failed. Upon receiving the Witness notification, the SMB client immediately starts reconnecting to a different file server cluster node, which significantly speeds up recovery from unplanned failures.

    You can examine the state of the Witness service across the cluster using the Get-SmbWitnessClient command. Notice that Get-SmbWitnessClient can be run on any cluster node and provides a cluster-aggregate view of the Witness service, similar to Get-SmbOpenFile and Get-SmbSession. In an elevated PowerShell shell on one of the cluster nodes, do the following to view the Witness registrations:

    Get-SmbWitnessClient | select *

    State                  : RequestedNotifications
    ClientName             : FSF-260403-10
    FileServerNodeName     : FSF-260403-08
    IPAddress              : 2001:4898:E0:32AF:3256:8C83:59E5:BDB5
    NetworkName            : SMBFS
    NotificationsCancelled : 0
    NotificationsSent      : 0
    QueuedNotifications    : 0
    ResourcesMonitored     : 1
    WitnessNodeName        : FSF-260403-07

    Examining the above output (run before the unplanned failure), we can see the SMB client (FSF-260403-10) is currently connected to cluster node FSF-260403-08 (SMB connection) and has registered for witness notification for SMBFS with Witness service on FSF-260403-07.

    Looking at Event Viewer (Applications and Services Log – Microsoft – Windows – SMBWitnessClient – Operational) on the SMB Client computer, we see that the Witness client received notification for SMBFS. Since the cluster group was moved to FSF-260403-07, which is also the Witness node for the Witness client, the following event shows the Witness client unregistering from FSF-260403-07 and registering with FSF-260403-09.


    Figure 15 – Witness event log

     

    Tips and Tricks

    Protecting file server services

    LanmanServer and LanmanWorkstation run in service hosts with other services. In extreme cases, other services running in the same service host can affect the availability of LanmanServer and LanmanWorkstation. You can configure these services to run in their own service hosts using the following commands:

    sc config lanmanserver type= own

    sc config lanmanworkstation type= own

    The computer needs to be restarted for this change to take effect.
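    To confirm the change after the restart, query the service configuration; the TYPE field should show WIN32_OWN_PROCESS for both services:

    sc qc lanmanserver
    sc qc lanmanworkstation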

    Loopback configurations

    Accessing a file share that has the continuously available property set as a loopback share is not supported.

    For example, SQL Server or Hyper-V storing their data files on SMB file shares must run on computers that are not a member of the file server cluster for the SMB file shares.

    Using legacy tools

    When creating file shares, the continuous availability property is set by default by tools introduced in Windows Server 2012, including the new file share creation wizard and the New-SmbShare command. If you have automation built around older tools, such as NET SHARE, Explorer, or the NET APIs, the continuous availability property will not be set by default, and these tools do not support setting it. To work around this issue you can set the following registry key, which will cause all shares to be created with the property set, regardless of whether they support it or not:

    Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" -Name EnableCaAlways -Value 1 -Force

    Witness service

    By default the network traffic between the Witness Client and Witness Server requires mutual authentication and is signed. However the traffic is not encrypted, as it doesn’t contain any user data. It is possible to enable encryption of Witness network traffic.

    To configure the Witness client to send traffic encrypted, set the following registry key on each client:

    Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters" -Name WitnessFlags -Value 1 -Force

    To configure the Witness Service to not accept unencrypted traffic, set the following registry key on each cluster node:

    Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\SMBWitness\Parameters" -Name Flags -Value 1 -Force

    Disabling NetBios over TCP/IP

    I have seen disabling NetBios over TCP/IP speed up failover times. To disable NetBios over TCP/IP for an interface, do the following in Network Connections:

    • Select the interface you want to modify, right-click and select Properties

    • In interface properties, select Internet Protocol Version 4 (TCP/IPv4) and click Properties

    • In Internet Protocol Version 4 (TCP/IPv4) Properties, click Advanced

    • In Advanced TCP/IP Settings, click the WINS tab

    • On the WINS tab, select the Disable NetBIOS over TCP/IP radio button

    When disabling NetBIOS over TCP/IP it should be configured for all network interfaces on all cluster nodes.


    Figure 16 – Disable NetBIOS over TCP/IP

    Disable 8.3 name generation

    SMB Transparent Failover does not support cluster disks with 8.3 name generation enabled. In Windows Server 2012, 8.3 name generation is disabled by default on any newly created data volumes. However, if you import volumes created on down-level versions of Windows, or by accident create a volume with 8.3 name generation enabled, SMB Transparent Failover will not work. An event will be logged in (Applications and Services Log – Microsoft – Windows – ResumeKeyFilter – Operational) notifying that it failed to attach to the volume because 8.3 name generation is enabled.

    You can use fsutil to query and set the state of 8.3 name generation system-wide and on individual volumes. You can also use fsutil to remove previously generated short names from a volume, as in the sketch below.
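    A rough sketch of the query, disable, and strip operations (the drive letter is a placeholder, and switch details can vary by Windows version, so check fsutil's inline help):

    fsutil 8dot3name query D:
    fsutil 8dot3name set D: 1
    fsutil 8dot3name strip /s /v D:\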

    Conclusion

    I hope you enjoyed this introduction to SMB Transparent Failover and agree that this feature is useful for providing continued access to file shares when servers occasionally need to be restarted for software or hardware maintenance, or in the unfortunate event that a cluster node fails. Providing continued access to file shares during these events is extremely important, especially for workloads such as Microsoft Hyper-V and Microsoft SQL Server.

    I am looking forward to diving into Scale-Out File Server in a future post.

    Claus Joergensen

    Principal Program Manager

    Windows File Server Team

VSS for SMB File Shares


Note: this post originally appeared on https://aka.ms/clausjor by Claus Joergensen.

In the next generation of Windows Server, Windows Server 2012, Hyper-V introduces support for storing virtual machine files on SMB 3.0 file shares. This blog post contains more detail on the SMB 3.0 enhancements to support this scenario.

Volume Shadow Copy Service (VSS) is a framework that enables volume backups to be performed while applications on a system continue to write to the volumes. To support applications that store their data files on remote SMB file shares, we introduce a new feature called “VSS for SMB File Shares” in Windows Server 2012. This feature enables VSS-aware backup applications to perform application consistent shadow copies of VSS-aware server applications storing data on SMB 3.0 file shares. Prior to this feature, VSS only supported performing shadow copies of data stored on local volumes.

Technical Overview

VSS for SMB File Shares is an extension to the existing VSS infrastructure and consists of four parts:

• A new VSS provider named “File Share Shadow Copy Provider” (fssprov.dll). The File Share Shadow Copy Provider is invoked on the server running the VSS-aware application and manages shadow copies on remote Universal Naming Convention (UNC) paths where the application stores its data files. It relays the shadow copy request to File Share Shadow Copy Agents.

• A new VSS requestor named "File Share Shadow Copy Agent" (fssagent.dll). The File Share Shadow Copy Agent is invoked on the file server hosting the SMB 3.0 file shares (UNC path) storing the application's data files. It manages file share to volume mappings and interacts with the file server's VSS infrastructure to perform shadow copies of the volumes backing the SMB 3.0 file shares where the VSS-aware applications store their data files.

• A new RPC protocol named "File Server Remote VSS Protocol" (MS-FSRVP). The new File Share Shadow Copy Provider and the new File Share Shadow Copy Agent use this new RPC-based protocol to coordinate shadow copy requests for data stored on SMB file shares.

• Enhancements to the VSS infrastructure to support the new File Share Shadow Copy provider, including API updates.

The diagram below provides a high-level architecture of how VSS for SMB File Shares (red boxes) fits into the existing VSS infrastructure (blue boxes) and 3rd party requestors, writers and providers (green boxes).


The following steps describe the basic Shadow Copy sequence with VSS for SMB File Shares.

A. The Backup Server sends backup request to its Backup Agent (VSS Requestor)

B. The VSS Requestor gathers writer information and normalizes the UNC path(s)

C. The VSS Service retrieves the writer metadata information and returns it to the VSS requestor

D. The VSS Service sends the Prepare Shadow Copy request to the VSS writers involved, and the VSS writers flush buffers and hold writes

E. The VSS Service sends the Shadow Copy creation request to the File Share Shadow Copy Provider for any UNC paths involved in the Shadow Copy Set

E.1. The File Share Shadow Copy Provider relays the Shadow Copy creation request to the File Share Shadow Copy Agent on each remote File Server involved in the Shadow Copy Set

E.2. The File Share Shadow Copy Agent initiates writer-less Shadow Copy creation request to the VSS Service on the File Server

E.3. The VSS Service on the File Server completes Shadow Copy request using the appropriate VSS hardware or system providers

E.4. The File Share Shadow Copy Agent returns the Shadow Copy path (Shadow Copy Share) to the File Share Shadow Copy Provider

F. Once Shadow Copy creation sequence completes on the Application Server, the VSS requestor on the Application Server can retrieve the Shadow Copy properties from the VSS Service

G. Based on the Shadow Copy device name from the Shadow Copy properties on the Application Server, the Backup Server can access the data on the Shadow Copy shares on the File Servers for backup. The Shadow Copy share will have the same permissions as the original share.

Once the Shadow Copy on the application server is released, the Shadow Copies and associated Shadow Copy shares on the file servers are destroyed.

If the shadow copy sequence fails at any point, the shadow copy sequence is aborted and the backup application will need to retry.

For additional details on processing a backup under VSS, see http://msdn.microsoft.com/en-us/library/aa384589(VS.85).aspx

For additional details on the File Server Remote VSS Protocol, which is being used by the File Share Shadow Copy Provider and File Server Shadow Copy Agent, see http://msdn.microsoft.com/en-us/library/hh554852(v=prot.10).aspx

Requirements and supported capabilities

VSS for SMB File Shares requires:

  • Application server and file server must be running Windows Server 2012
  • Application server and file server must be domain joined to the same Active Directory domain
  • The “File Server VSS Agent Service” role service must be enabled on the file server
  • The backup agent must run in a security context that has backup operators or administrators privileges on both application server and file server
  • The backup agent/application must run in a security context that has at least READ permission on file share data that is being backed up.
  • VSS for SMB File Shares can also work with 3rd party Network Attached Storage (NAS) appliances or similar solutions. These appliances or solutions must support the SMB 3.0 and File Server Remote VSS protocols.

    VSS for SMB File Shares supports:

  • Application servers configured as a single server or in a failover cluster

  • File servers configured as a single server or in a failover cluster with continuously available or scale-out file shares

  • File shares with a single DFS-Namespaces link target

    VSS for SMB File Shares has the following limitations:

  • Unsupported VSS capabilities:
    • Hardware transportable shadow copies
    • Writable shadow copies
    • VSS fast recovery, where a volume can be quickly reverted to a shadow copy
    • Client-accessible shadow copies (Shadow Copies of Shared Folders)

  • Loopback configurations, where an application server is accessing its data on SMB file shares that are hosted on the same application server, are unsupported

  • Hyper-V host-based Shadow Copies of virtual machines, where the application in the virtual machine stores its data on SMB file shares, are not supported

  • Data on mount points below the root of the file share will not be included in the shadow copy

  • Shadow Copy shares do not support failover

    Deployments

    The most common deployment of VSS for SMB File Shares is expected to be with Hyper-V, where a Hyper-V server is storing the virtual machine files on remote SMB file share.

    The following sections outlines some example deployments and describe the behavior of each deployment.

    Example 1: Single Hyper-V server and file server

    In this deployment there is a single Hyper-V server and a single file server, both un-clustered. The file server has two volumes attached to it, each with a file share. The virtual machine files for VM A are stored on \\fileserv\share1, which is backed by Volume 1. Some virtual machine files for VM B are stored on \\fileserv\share1, which is backed by Volume 1, and some are stored on \\fileserv\share2, which is backed by Volume 2. The virtual machine files for VM C are stored on \\fileserv\share2, which is backed by Volume 2.


    When the backup operator performs a Shadow Copy of VM A, the Hyper-V VSS writer will add \\fileserv\share1 to the Shadow Copy set. Once ready, the File Share Shadow Copy Provider relays the Shadow Copy request to \\fileserv. On the file server, the File Share Shadow Copy Agent invokes the local VSS service to perform a Shadow Copy of Volume 1. Volume 2 will not be part of the Shadow Copy set, since only \\fileserv\share1 was reported by the VSS writer. When the Shadow Copy sequence is complete, a Shadow Copy share \\fileserv\share1@{GUID} will be available for the backup application to stream the backup data. Once the backup is complete, the backup application releases the Shadow Copy set and the associated Shadow Copies and Shadow Copy shares are destroyed.

    If the backup operator performs a Shadow Copy of VM B, the Hyper-V VSS writer will report both \\fileserv\share1 and \\fileserv\share2 in the Shadow Copy set. On the file server side, this will result in a Shadow Copy of both Volume 1 and Volume 2 and two Shadow Copy shares \\fileserv\share1@{GUID} and \\fileserv\share2@{GUID} are created.

    If the backup operator performs a Shadow Copy of VM A and VM B, again the Hyper-V VSS writer will report both \\fileserv\share1 and \\fileserv\share2 in the Shadow Copy set. On the file server side, this will result in a Shadow Copy of both volumes and creation of two Shadow Copy shares.

    Example 2: Two Hyper-V servers and a single file server

    In this deployment there are two Hyper-V servers and a single file server, all un-clustered. The file server has two volumes attached to it, each with a file share. The virtual machine files for VM A are stored on \\fileserv\share1, which is backed by Volume 1. Some virtual machine files for VM B are stored on \\fileserv\share1, which is backed by Volume 1, and some are stored on \\fileserv\share2, which is backed by Volume 2. The virtual machine files for VM C are stored on \\fileserv\share2, which is backed by Volume 2.


    When the backup operator performs a Shadow Copy of VM B, the Hyper-V VSS writer will report both \\fileserv\share1 and \\fileserv\share2 in the Shadow Copy set. Once ready, the File Share Shadow Copy Provider relays the Shadow Copy request to \\fileserv. On the file server, the File Share Shadow Copy Agent invokes the local VSS service to perform a Shadow Copy of Volume 1 and Volume 2, since both share1 and share2 are in the Shadow Copy set. When the Shadow Copy sequence is complete, two Shadow Copy shares \\fileserv\share1@{GUID} and \\fileserv\share2@{GUID} will be available for the backup application to stream the backup data. Once the backup is complete, the backup application releases the Shadow Copy set and the associated Shadow Copies and Shadow Copy shares are destroyed.

    In this deployment the backup operator cannot perform a Shadow Copy of VM A in combination with either VM B or VM C, as they are running on separate Hyper-V hosts. The backup operator can perform a Shadow Copy of VM B and VM C, since both are running on Hyper-V server 2.

    It is also worth noting that the backup operator cannot perform Shadow Copy of VM A and VM B (or VM C) in parallel, since the VSS service on the file server can only perform one Shadow Copy at a time. Note that this restriction is only for the time it takes to create the Shadow Copies, not for the entire duration of the backup session.

    Example 3: Two Hyper-V servers and two file servers

    In this deployment there are two Hyper-V servers and two file servers, all un-clustered. Each file server has a volume attached to it, each with a file share. The virtual machine files for VM A are stored on \\fileserv1\share, which is backed by Volume 1 on File Server 1. Some virtual machine files for VM B are stored on \\fileserv1\share, which is backed by Volume 1 on File Server 1, and some are stored on \\fileserv2\share, which is backed by Volume 1 on File Server 2. The virtual machine files for VM C are stored on \\fileserv2\share, which is backed by Volume 1 on File Server 2.


    When the backup operator performs a Shadow Copy of VM B, the Hyper-V VSS writer will report both \\fileserv1\share and \\fileserv2\share in the Shadow Copy set. When ready, the File Share Shadow Copy Provider relays a Shadow Copy request to both \\fileserv1 and \\fileserv2. On each file server, the File Share Shadow Copy Agent invokes the local VSS service to perform a Shadow Copy of the volume backing the file share. When the Shadow Copy sequence is complete, two Shadow Copy shares \\fileserv1\share@{GUID} and \\fileserv2\share@{GUID} will be available for the backup application to stream the backup data. Once the backup is complete, the backup application releases the Shadow Copy set and the associated Shadow Copies and Shadow Copy shares are destroyed on both file servers.

    Similar to the previous deployment example, the backup operator cannot perform a Shadow Copy that spans virtual machines across multiple Hyper-V servers.

    Similar to the previous deployment example, the backup operator cannot perform Shadow Copy of VM A and VM B in parallel, since the VSS service on file server 1 can only perform one Shadow Copy at a time. However, it is possible to perform Shadow Copy of VM A and VM C in parallel since the virtual machines files are stored on separate file servers.

    Example 4: Two Hyper-V servers and a File Server cluster

    In this deployment there are two Hyper-V servers and a cluster, configured as a Failover Cluster. The failover cluster has two cluster nodes, node1 and node2. The administrator has configured a file server cluster role, \\fs1, which is currently online on node1, with a single share, \\fs1\share, on volume 1. To utilize both cluster nodes, the administrator has configured a second file server cluster role, \\fs2, which is currently online on node2, with a single share, \\fs2\share, on volume 2.


    When the backup operator performs a Shadow Copy of VM A, the Hyper-V VSS writer will report \\fs1\share in the Shadow Copy set. When ready, the File Share Shadow Copy Provider relays a Shadow Copy request to \\fs1. As part of the exchange between the File Share Shadow Copy Provider and the File Share Shadow Copy Agent, the Agent will inform the Provider of the physical computer name, node1, which is actually performing the Shadow Copy.

    On node1, the File Share Shadow Copy Agent invokes the local VSS service to perform a Shadow Copy of the volume backing the file share. When the Shadow Copy sequence is complete, a Shadow Copy share \\node1\share@{GUID} will be available for the backup application to stream the backup data. Notice the Shadow Copy share, \\node1\share@{GUID}, is scoped to the cluster node, node1, and not the virtual computer name, \\fs1.

    Once the backup is complete, the backup application releases the Shadow Copy set and the associated Shadow Copies and Shadow Copy shares are destroyed. If for some reason the file server cluster role is moved to, or fails over to, node2 before the backup sequence is complete, the Shadow Copy share and the Shadow Copy become invalid. If the file server cluster role is moved back to node1, the Shadow Copy and the corresponding Shadow Copy share will become valid again.

    Example 5: Two Hyper-V servers and a Scale-Out File Server cluster

    In this deployment there are two Hyper-V servers and a cluster, configured as a Failover Cluster. The failover cluster has two cluster nodes, node1 and node2. The administrator has configured a scale-out file server cluster role. The scale-out file server cluster role is new in Windows Server 2012 and is different than the traditional file server cluster role in a number of ways:

  • Uses Cluster Shared Volumes, which is a cluster volume that is accessible on all cluster nodes

  • Uses Distributed Network Names, which means the virtual computer name is online on all cluster nodes

  • Uses scale-out file shares, which means the share is online on all cluster nodes

  • Uses the DNS round robin mechanism to distribute file server clients across cluster nodes

    The administrator has configured a single Scale-Out File Server, \\sofs, with a single share, \\sofs\share, backed by a single CSV volume, CSV1. Because of DNS round robin, Hyper-V server 1 is accessing the virtual machine files for VM A, on \\sofs\share, through node1 and Hyper-V server 2 is accessing the virtual machine files for VM B and VM C, on \\sofs\share, through node2.


    When the backup operator performs a Shadow Copy of VM B and C, the Hyper-V VSS writer will report \\sofs\share in the Shadow Copy set. When ready, the File Share Shadow Copy Provider relays a Shadow Copy request to \\sofs. As part of the exchange between the File Share Shadow Copy Provider and the File Share Shadow Copy Agent, the Agent will inform the Provider of the physical computer name which is actually performing the Shadow Copy. In this scenario, the physical computer name will be the name of the CSV coordinator node, and the File Share Shadow Copy Provider will connect to the cluster node that is currently the CSV coordinator node, which could be node1.

    On node1, the File Share Shadow Copy Agent invokes the local VSS service to perform a Shadow Copy of the CSV volume backing the file share. When the Shadow Copy sequence is complete, a Shadow Copy share \\node1\share@{GUID} will be available for the backup application to stream the backup data. Notice the Shadow Copy share, \\node1\share@{GUID}, is scoped to the cluster node, \\node1, and not the virtual computer name, \\sofs, similar to example 4.

    Once the backup is complete, the backup application releases the Shadow Copy set and the associated Shadow Copies and Shadow Copy shares are destroyed. If for some reason node1 becomes unavailable before the backup sequence is complete, the Shadow Copy share and the Shadow Copy become invalid. Actions, such as moving the CSV coordinator for CSV1 to node 2, do not affect the Shadow Copy share.

    Installation and configuration

    This section contains information about installing and configuring the File Share Shadow Copy Provider and File Share Shadow Copy Agent.

    Installation of File Share Shadow Copy Provider

    The File Share Shadow Copy Provider is installed by default on all editions of Windows Server, so no further installation is necessary.

    Installation of File Share Shadow Copy Agent

    To install the File Share Shadow Copy Agent, do the following with administrative privileges on each file server to install the File Server role and the File Server VSS Agent Service role service:

    GUI

    1. In the Server Manager Dashboard click Add roles and features

    2. In the Add Roles and Features Wizard

    a. In the Before you begin wizard page, click Next

    b. In the Select installation type wizard page, select Role-based or feature-based installation

    c. In the Select destination server wizard page, select the server where you want to install the File Share Shadow Copy Agent

    d. In the Select server roles wizard page:

    d.i. Expand File and Storage Services

    d.ii. Expand File Services

    d.iii. Check File Server

    d.iv. Check File Server VSS Agent Service

    e. In the Select features wizard page, click Next

    f. In the Confirm installation selections, verify File Server and File Server VSS Agent Service are listed, and click Install


    PowerShell

    1. Start elevated Windows PowerShell (Run as Administrator)

    2. Run the following command:

    Add-WindowsFeature -Name File-Services,FS-VSS-Agent

    Add backup user to Backup Operators local group on file server

    The user context in which the shadow copy is performed must have the backup privilege on the remote file server(s) that are part of the shadow copy set.

    Commonly this is done by adding the user that is performing the shadow copy to the Backup Operators group on the file server(s).

    To add a user to the local Backup Operators group, do the following with administrative privileges on each file server:

    GUI

    1. In the Server Manager Dashboard click Tools and select Computer Management

    2. In Computer Management:

    a. Expand Local Users and Groups

    b. Expand Groups

    c. In the results pane, double click Backup Operators

    d. In the Backup Operators Properties page, click Add

    e. Type the username to add to the Backup Operators group, click OK

    f. In the Backup Operators Properties page, click OK

    g. Close Computer Management

    Windows PowerShell

    1. Start elevated Windows PowerShell (Run as Administrator)

    2. Run the following commands, adjusting user account and file server name to your environment:

    $objUser = [ADSI]("WinNT://domain/user")

    $objGroup = [ADSI]("WinNT://fileserv/Backup Operators")

    $objGroup.PSBase.Invoke("Add", $objUser.PSBase.Path)

    Perform a Shadow Copy

    To perform a Shadow Copy of an application's data that is stored on a file share, a VSS-aware backup application that supports the VSS for SMB File Shares functionality must be used.

    Note: Windows Server Backup in Windows Server 2012 does not support VSS for SMB File Shares.

    The following section shows examples of performing a Shadow Copy of a virtual machine that has its data files stored on an SMB file share, using:

    • DISKSHADOW

     

  • Microsoft System Center Data Protection Manager 2012 SP1 CTP1

    The following diagram illustrates the configuration of the setup used for the examples in this section:

    image

    The following details the configuration of the virtual machine:

    1. Start elevated Windows PowerShell (Run as Administrator) and do the following:

    PS C:\Users\administrator.SMBTEST> Get-VM | select VMName, State, Path | FL

    VMName : vm1

    State : Running

    Path : \\smbsofs\vm\vm1\vm1

    DISKSHADOW

    To perform a Shadow Copy of the virtual machine using DISKSHADOW on the Hyper-V host (clausjor04):

    1. Start elevated Windows PowerShell (Run as Administrator) and do the following:

    PS C:\Users\administrator.SMBTEST> DISKSHADOW

    Microsoft DiskShadow version 1.0

    Copyright (C) 2012 Microsoft Corporation

    On computer: CLAUSJOR04, 5/30/2012 5:34:42 PM

    DISKSHADOW> Writer Verify {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}

    DISKSHADOW> Set Context Persistent

    DISKSHADOW> Set MetaData vm1backup.cab

    DISKSHADOW> Begin Backup

    DISKSHADOW> Add Volume \\smbsofs\vm\vm1

    DISKSHADOW> Create

    Alias VSS_SHADOW_1 for shadow ID {7b53b887-76e5-4db8-821d-6828e4cbe044} set as environment variable.

    Alias VSS_SHADOW_SET for shadow set ID {2bef895d-5d3f-4799-8368-f4bfc684e95b} set as environment variable.

    Querying all shadow copies with the shadow copy set ID {2bef895d-5d3f-4799-8368-f4bfc684e95b}

    * Shadow copy ID = {7b53b887-76e5-4db8-821d-6828e4cbe044} %VSS_SHADOW_1%

    – Shadow copy set: {2bef895d-5d3f-4799-8368-f4bfc684e95b} %VSS_SHADOW_SET%

    – Original count of shadow copies = 1

    – Original volume name: \\SMBSOFS\VM\ [volume not on this machine]

    – Creation time: 5/30/2012 5:35:52 PM

    – Shadow copy device name: \\FSF-260403-09\VM@{F1C5E17A-4168-4611-9CD4-8366F9F935C3}

    – Originating machine: FSF-260403-09

    – Service machine: CLAUSJOR04.SMBTEST.stbtest.microsoft.com

    – Not exposed

    – Provider ID: {89300202-3cec-4981-9171-19f59559e0f2}

    – Attributes: No_Auto_Release Persistent FileShare

    Number of shadow copies listed: 1

    DISKSHADOW> End Backup

    The Writer Verify command specifies that the backup or restore operation must fail if the writer or component is not included. For more information see this TechNet article.

    The Set Context Persistent command (and attributes), highlighted in orange, sets the Shadow Copy to be persistent, meaning that it is up to the user or application to delete the Shadow Copy when done.

    The Set MetaData stores the metadata information for the Shadow Copy, which is needed for restore, in the specified file.

    The Add Volume command, highlighted in yellow, adds the UNC path to the Shadow Copy set. You can specify multiple paths by repeating the Add Volume command.

    The Create command initiates the Shadow Copy. Once the Shadow Copy creation is complete, DISKSHADOW outputs the properties of the Shadow Copy. The Shadow Copy device name, highlighted in green, is the path for the Shadow Copy data, which we can copy to the backup store using XCOPY or similar tools.

    During the backup session, you can see the virtual machine status reporting “Backing up..” in Hyper-V Manager. The backup session starts with the CREATE command and ends with the END BACKUP command in the DISKSHADOW sequence above.

    clip_image018

    After the Shadow Copy is complete, we can browse the Shadow Copy share (Shadow Copy device name from above) and copy the data we want to back up to an alternate location:

    1. Start elevated Windows PowerShell (Run as Administrator) and do the following:

    PS C:\Users\administrator.SMBTEST> Get-ChildItem -Recurse -Path "\\FSF-260403-09\VM@{F1C5E17A-4168-4611-9CD4-8366F9F935C3}"

    Directory: \\FSF-260403-09\VM@{F1C5E17A-4168-4611-9CD4-8366F9F935C3}

    Mode LastWriteTime Length Name

    —- ————- —— —-

    d—- 5/30/2012 5:19 PM vm1

    Directory: \\FSF-260403-09\VM@{F1C5E17A-4168-4611-9CD4-8366F9F935C3}\vm1

    Mode LastWriteTime Length Name

    —- ————- —— —-

    d—- 5/30/2012 5:19 PM vm1

    -a— 5/30/2012 5:35 PM 8837436928 vm1.vhd

    Directory: \\FSF-260403-09\VM@{F1C5E17A-4168-4611-9CD4-8366F9F935C3}\vm1\vm1

    Mode LastWriteTime Length Name

    —- ————- —— —-

    d—- 5/30/2012 5:19 PM Virtual Machines

    Directory: \\FSF-260403-09\VM@{F1C5E17A-4168-4611-9CD4-8366F9F935C3}\vm1\vm1\Virtual Machines

    Mode LastWriteTime Length Name

    —- ————- —— —-

    d—- 5/30/2012 5:19 PM 87B27972-46C2-406B-87A4-C3FFA1FB6822

    -a— 5/30/2012 5:35 PM 28800 87B27972-46C2-406B-87A4-C3FFA1FB6822.xml

    Directory: \\FSF-260403-09\VM@{F1C5E17A-4168-4611-9CD4-8366F9F935C3}\vm1\vm1\Virtual

    Machines\87B27972-46C2-406B-87A4-C3FFA1FB6822

    Mode LastWriteTime Length Name

    —- ————- —— —-

    -a— 5/30/2012 5:22 PM 2147602688 87B27972-46C2-406B-87A4-C3FFA1FB6822.bin

    -a— 5/30/2012 5:22 PM 20971520 87B27972-46C2-406B-87A4-C3FFA1FB6822.vsv
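
    To stream the data to a backup store, any copy tool that can read from the Shadow Copy share will do. Below is a minimal sketch using robocopy in backup mode; the destination folder D:\BackupStore is an assumption for illustration. The /B switch uses backup semantics, which works with the Backup Operators membership configured earlier.

    Robocopy "\\FSF-260403-09\VM@{F1C5E17A-4168-4611-9CD4-8366F9F935C3}\vm1" "D:\BackupStore\vm1" /E /B /R:1 /W:1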

    Once we are done copying the data, we can go ahead and delete the Shadow Copy, as highlighted below:

    1. Start elevated Windows PowerShell (Run as Administrator) and do the following:

    PS C:\Users\administrator.SMBTEST> DISKSHADOW

    Microsoft DiskShadow version 1.0

    Copyright (C) 2012 Microsoft Corporation

    On computer: CLAUSJOR04, 5/30/2012 5:44:21 PM

    DISKSHADOW> Delete Shadows Volume \\smbsofs\vm

    Deleting shadow copy {7b53b887-76e5-4db8-821d-6828e4cbe044} on volume \\SMBSOFS\VM\ from provider {89300202-3cec-4981-91

    71-19f59559e0f2} [Attributes: 0x04400009]…

    Number of shadow copies deleted: 1

    Restore data from a Shadow Copy

    To restore the virtual machine data from the backup store back to its original location:

    1. Start elevated Windows PowerShell (Run as Administrator) and do the following:

    DISKSHADOW> Set Context Persistent

    DISKSHADOW> Load MetaData vm1backup.cab

    DISKSHADOW> Begin Restore

    DISKSHADOW> //xcopy files from backup store to the original location

    DISKSHADOW> End Restore

    The Load MetaData command loads the metadata information for the Shadow Copy, which is needed for restore, from the specified file.

    After issuing the Begin Restore command, you can copy the virtual machine files from the backup store to the original location (\\smbsofs\vm\vm1). See this TechNet article for more information on XCOPY restore of Hyper-V.

    Data Protection Manager 2012 SP1 CTP1

    To perform data protection with Microsoft System Center Data Protection Manager 2012 SP1 CTP1 (DPM), we create a new protection group that includes the virtual machine we want to protect. After installing the DPM agent on the Hyper-V server and allocating some disks to the storage pool, we can create a protection group using the following steps:

    1. In the System Center 2012 DPM Administrator Console, select Protection

    2. In the Protection view, select New

    3. In Create New Protection Group wizard, do the following

    a. In Welcome, click Next

    b. In Select Protection Group Type, select Servers

    c. In Select Group Members

    c.i. Locate the server where the VM is running

    c.ii. Expand the Hyper-V node

    c.iii. Select the virtual machine you want to backup (see screenshot below)

    c.iv. Click Next

    d. In Select Data Protection Method

    d.i. Enter a protection group name

    d.ii. Select I want short-term protection using Disk

    d.iii. Click Next

    e. Complete the remainder of the wizard using defaults

    clip_image020

    When the protection group is created and the initial replica is completed, you should see the following in the DPM Administrator Console:

    clip_image022

    If you inspect the application server during the initial replica using DISKSHADOW, you will be able to see the Shadow Copy in progress. The following shows the output of list shadows all during the creation of the initial replica:

    DISKSHADOW> list shadows all

    Querying all shadow copies on the computer …

    * Shadow copy ID = {c0024211-bd08-4374-ac47-399df2d20075} <No Alias>

    – Shadow copy set: {28e88c97-f5b1-4124-ae7b-83f5600d54ff} <No Alias>

    – Original count of shadow copies = 1

    – Original volume name: \\SMBSOFS.SMBTEST.STBTEST.MICROSOFT.COM\VM\ [volume not on this machine]

    – Creation time: 5/30/2012 6:49:35 PM

    – Shadow copy device name: \\FSF-260403-09.SMBTEST.STBTEST.MICROSOFT.COM\VM@{B6995DEF-A951-4379-9A3E-0B3

    619FB9A6A}

    – Originating machine: FSF-260403-09

    – Service machine: CLAUSJOR04.SMBTEST.stbtest.microsoft.com

    – Not exposed

    – Provider ID: {89300202-3cec-4981-9171-19f59559e0f2}

    – Attributes: Auto_Release FileShare

    Number of shadow copies listed: 1

    The value highlighted in yellow is the remote UNC path which DPM specified for the Shadow Copy. The value highlighted in green is the Shadow Copy device name, where DPM accesses the Shadow Copy data for replication. The attributes highlighted in orange are those used when DPM created the Shadow Copy. In this case the Shadow Copy is auto-release, meaning that the Shadow Copy is automatically released and deleted once DPM stops using it.

    Tips and Tricks
    Event logs

    VSS for SMB File Shares logs events for the Agent and Provider respectively. The event logs can be found on this path in Event Viewer:

    • Microsoft-Windows-FileShareShadowCopyProvider

     

  • Microsoft-Windows-FileShareShadowCopyAgent
  • Encryption

    By default the network traffic between the computer running the VSS provider and the computer running the VSS Agent Service requires mutual authentication and is signed. However the traffic is not encrypted, as it doesn’t contain any user data. It is possible to enable encryption of network traffic.

    You can control this behavior using Group Policy (gpedit.msc) under "Local Computer Policy -> Administrative Templates -> System -> File Share Shadow Copy Provider". You can also configure it in a Group Policy Object in the Active Directory domain.

    Garbage collecting orphaned Shadow Copies

    In case of unexpected computer restarts or similar events on the application server after the Shadow Copy has been created on the file server, some Shadow Copies may be left orphaned on the file server. It is important to remove these Shadow Copies to ensure the best possible system performance. By default the VSS Agent Service will remove Shadow Copies older than 24 hours.

    You can control this behavior using Group Policy under "Local Computer Policy -> Administrative Templates -> System -> File Share Shadow Copy Agent". You can also configure it in a Group Policy Object in the Active Directory domain.

    Long running shadow copies

    The VSS Agent Service maintains a sequence timer during Shadow Copy creation requested by an application server. By default the VSS Agent Service will abort a Shadow Copy sequence if it doesn't complete within 30 minutes, to ensure other application servers are not blocked for an extended period of time. To configure the file server to use a different value, set the following registry value on each file server:

    Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\fssagent\Settings" -Name LongWriterOperationTimeoutInSeconds -Value 1800 -Force

    If you are using a Scale-Out File Server, it may be necessary to adjust the cluster property SharedVolumeVssWriterOperationTimeout as well. The default value is 1800 seconds (minimum is 60 seconds and maximum is 7200 seconds). The backup user is expected to tweak this value based on the expected time for the VSS writer operations during PrepareForSnapshot and PostSnapshot calls (whichever is higher). For example, if a VSS writer is expected to take up to 10 minutes during PrepareForSnapshot and up to 20 minutes during PostSnapshot, the recommended value for SharedVolumeVssWriterOperationTimeout would be 1200 seconds.
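
    For example, to allow up to 20 minutes for the writer operations as described above, here is a minimal sketch using the failover clustering cmdlets from an elevated Windows PowerShell prompt on a cluster node (cluster common properties are set directly on the cluster object):

    # Inspect the current value (seconds)
    (Get-Cluster).SharedVolumeVssWriterOperationTimeout

    # Set the timeout to 1200 seconds (20 minutes)
    (Get-Cluster).SharedVolumeVssWriterOperationTimeout = 1200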

    Accessing file shares using IP addresses

    In general you should use hostnames (e.g. \\fileserver\share\) or fully qualified domain names (e.g. \\fileserver.smbtest.stbtest.microsoft.com\share\) when configuring your application server to use SMB file shares. If for some reason you need to use IP addresses (e.g. \\192.168.1.1\share\), then the following are supported:

    Note: DNS reverse lookup (IP address to host name) must be available to successfully use IP addresses.

    IPv4:

    Strict four-part dotted-decimal notation, e.g. \\192.168.1.1\share

    IPv6:

    1. Global IPv6 and its literal format, e.g.,

    \\2001:4898:2a:3:2c03:8347:8ded:2d5b\share

    \\2001-4898-2a-3-2c03-8347-8ded-2d5b.ipv6-literal.net\share

    2. Site Local IPv6 format (Start with FEC0: ) and its literal format

    \\fec0::1fd9:ebee:ea74:ffd8%1\share

    \\fec0--1fd9-ebee-ea74-ffd8s1.ipv6-literal.net\share

    3. IPv6 tunnel address and its literal format

    \\2001:4898:0:fff:0:5efe:172.30.182.42\share

    \\2001-4898-0-fff-0-5efe-172.30.182.42.ipv6-literal.net\share

    IPv6 Link Local addresses (Starts with FE80:) are not supported.

    Conclusion

    I hope you enjoyed this introduction to VSS for SMB File Shares and agree that this feature is useful for providing backup of application servers that store their data files on SMB file shares, including host-based backup of Hyper-V hosts storing virtual machines on SMB file shares.

    Claus Joergensen

    Principal Program Manager

    Windows File Server Team

A developer’s view on VSS for SMB File Shares


Note: this post originally appeared on https://aka.ms/clausjor by Claus Joergensen.

VSS for SMB File Shares is an extension to the existing VSS infrastructure which consists of four parts:

image

In this post I want to look at VSS for SMB File Shares from a developer's point of view and how to support it with minimal changes to 3rd party VSS requestors, writers and providers, as well as look at how to investigate backup issues with the BETest, VSSTrace and Microsoft Netmon trace tools.

Hopefully you have already read the blog covering technical overview and deployment scenarios of VSS for SMB File Shares. For more technical details of this feature, please refer to Molly Brown’s talk at Storage Developers Conference 2012 and the MS-FSRVP protocol document.

VSS changes

The backup application related VSS changes are summarized below.

  • VSS Requestor
    • COM Security
    • Support UNC path
    • UNC Path normalization
  • VSS Writer
  • VSS Provider

    VSS requestor

    Requestor is one of the key VSS components that drives the application consistent shadow copy creation and backup/restore.

    COM Security

    As a COM client to VSS coordinator service, the VSS requestor compatible with this feature must satisfy the following security requirements:

  • Run under a user account with local administrator or backup operator privilege on both the application server and the file servers.
  • Enable impersonation and cloaking so that the user token running the backup application can be authenticated on the file server.

    Below is sample code to achieve this. More detail can be found in Security Considerations for Requestors.

    // Initialize COM security.
    CoInitializeSecurity(
    NULL,                           //  PSECURITY_DESCRIPTOR         pSecDesc,
    -1,                             //  LONG                         cAuthSvc,
    NULL,                           //  SOLE_AUTHENTICATION_SERVICE *asAuthSvc,
    NULL,                           //  void                        *pReserved1,
    RPC_C_AUTHN_LEVEL_PKT_PRIVACY,  //  DWORD                       dwAuthnLevel,
    RPC_C_IMP_LEVEL_IMPERSONATE,    //  DWORD                       dwImpLevel,
    NULL,                           //  void                        *pAuthList,
    EOAC_STATIC_CLOAKING,           //  DWORD                        dwCapabilities,
    NULL                            //  void                        *pReserved3
    );

    Support UNC path

    Prior to this feature, VSS only supported requestors adding shadow copies of data stored on local volumes. With the new "File Share Shadow Copy Provider" available by default on all Windows Server 2012 editions, VSS now allows adding an SMB UNC share path to a shadow copy set by calling the same IVssBackupComponents::AddToSnapshotSet method. Below is a simple code snippet to add the SMB UNC share path \\server1\guests to the shadow copy set.

    VSS_ID shadowCopySetId = GUID_NULL;
    VSS_ID shadowCopyId = GUID_NULL;
    CComPtr<IVssBackupComponents> spBackup;
    WCHAR pwszPath[] = L"\\\\server1\\guests";

    // Error handling omitted for brevity
    CreateVssBackupComponents(&spBackup);
    spBackup->StartSnapshotSet(&shadowCopySetId);
    spBackup->AddToSnapshotSet(pwszPath, GUID_NULL, &shadowCopyId);

    UNC Path Normalization

    Similar to local volumes, uniquely identifying a UNC share path in a shadow copy set is key to VSS-based shadow copy creation and backup/recovery. A UNC share path is composed of two parts: a server name and a share name. For the same share, the server name part of the UNC path can be specified in the writer component in any of the formats below, with many variations.

  • Host name
  • FQDN (Fully Qualified Domain Name)
  • IPv4
  • IPv6 or IPv6 literal format
    This feature adds a new method, IVssBackupComponentsEx4::GetRootAndLogicalPrefixPaths, that normalizes a given UNC share path to its unique root path and logical prefix format. The unique root path and logical prefix path are designed to be used for shadow copy creation and for backup/restore file path formation, respectively.

    Note that a VSS requestor needs to:

  • Call IVssBackupComponents::AddToSnapshotSet with the unique root path of the UNC share in hostname or FQDN format, not in IPv4 or IPv6 address format.

    If a UNC share path is in IPv4 or IPv6 address format, its root path must be normalized into hostname or FQDN format by calling IVssBackupComponentsEx4::GetRootAndLogicalPrefixPaths.

  • Consistently normalize the file share UNC path into either hostname or FQDN format in the same shadow copy set before calling IVssBackupComponents::AddToSnapshotSet.
  • Don't mix hostname and FQDN formats in the same shadow copy set. Note that the default root path format returned is hostname format; you can specify the optional 4th parameter to require FQDN format in the returned unique root path.
  • IPv4/IPv6 DNS reverse lookup must be configured appropriately on the DNS infrastructure when normalizing a UNC share path in IPv4/IPv6 format. Below are examples of how to determine whether your DNS server enables reverse IP address lookup for a specific address.

    Example where IPv4 reverse lookup is enabled on DNS. In this case, the highlighted hostname/FQDN is resolved from the IP address.

    F:\>ping -a 10.10.10.110

    Pinging filesever.contoso.com [10.10.10.110] with 32 bytes of data:

    Reply from 10.10.10.110: bytes=32 time=1ms TTL=57

    Reply from 10.10.10.110: bytes=32 time=1ms TTL=57

    Reply from 10.10.10.110: bytes=32 time=1ms TTL=57

    Example where IPv6 reverse lookup is not configured for 2001:4898:dc3:1005:21c:c4ff:fe68:e88. In this case, the hostname/FQDN cannot be resolved, as the highlighted part is still the IPv6 address.

    F:\>ping -a 2001:4898:dc3:1005:21c:c4ff:fe68:e88

    Pinging 2001:4898:dc3:1005:21c:c4ff:fe68:e88 with 32 bytes of data:

    Reply from 2001:4898:dc3:1005:21c:c4ff:fe68:e88: time=3ms

    Reply from 2001:4898:dc3:1005:21c:c4ff:fe68:e88: time=1ms

    Reply from 2001:4898:dc3:1005:21c:c4ff:fe68:e88: time=1ms

    If your DNS server does not have IPv6 DNS reverse lookup configured, you can manually add the IP address to hostname/FQDN mapping to your local DNS cache directly. To achieve this, open %SystemRoot%\system32\drivers\etc\hosts with notepad.exe from an elevated command window and add one line for each IP address to hostname/FQDN mapping, as shown below. Most of the deployments we tested had IPv4 reverse DNS lookup available, but not all of them had IPv6 DNS reverse lookup configured.

    2001:4898:2a:3:6c94:5149:2f9c:f083 fileserver.contoso.com

    2001:4898:2a:3:6c94:5149:2f9c:f083 fileserver
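
    If you prefer PowerShell over ping for verifying reverse lookup, here is a hedged sketch using the Resolve-DnsName cmdlet (available in Windows 8/Windows Server 2012 and later); the addresses are the ones from the examples above:

    # Returns a PTR record when reverse lookup is configured
    Resolve-DnsName -Name 10.10.10.110 -Type PTR

    # Fails or returns no PTR record when reverse lookup is not configured
    Resolve-DnsName -Name 2001:4898:dc3:1005:21c:c4ff:fe68:e88 -Type PTR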

    To unify the normalization of all VSS supported paths, this API also supports

  • DFS-N path pointing to another physical server, which returns the fully evaluated physical share path on the DFS target server.
  • Local volume, which returns the results of GetVolumePathName and GetVolumeNameForVolumeMountPoint on the input path.
    Below is a sample requestor code snippet that illustrates how backup applications use IVssBackupComponentsEx4::GetRootAndLogicalPrefixPaths to normalize the root path and logical prefix path for shadow copy creation and backup of files under a DFS-N path.

    #define CHK_COM(X) \
    { \
        hr = (X); \
        if (FAILED(hr)) \
        { \
            wprintf(L"COM Call %S failed: 0x%08lx", #X, hr); \
            goto _Exit; \
        } \
    }

    CComPtr<IVssBackupComponents> spBackup;
    CComPtr<IVssBackupComponentsEx4> spBackup4;
    CComPtr<IVssAsync> spAsync;
    VSS_ID shadowCopySetId = GUID_NULL;
    VSS_ID shadowCopyId = GUID_NULL;
    VSS_SNAPSHOT_PROP shadowCopyProp = {};
    CComBSTR shadowCopyPath, relativePath, backupItem;
    HANDLE h = INVALID_HANDLE_VALUE;
    LPWSTR pwszRootPath = NULL;
    LPWSTR pwszLogicalPrefix = NULL;
    HRESULT hr = S_OK;

    // GatherWriterMetadata and metadata enumeration are not shown below.
    // Instead, we assume one writer component path to be shadow copied and backed up.
    // \\dfsworld\logical\path\guests is a DFS-N link to \\server1\guests.
    // vm1 is a directory under \\server1\guests which contains the vhd file vm1.vhd that needs to
    // be backed up.
    WCHAR pwszWriterComponentPath[] = L"\\\\dfsworld\\logical\\path\\guests\\vm1";
    WCHAR pwszWriterComponentFileSpec[] = L"vm1.vhd";

    CHK_COM(CreateVssBackupComponents(&spBackup));
    CHK_COM(spBackup->SafeQI(IVssBackupComponentsEx4, &spBackup4));

    // The caller must free the pwszRootPath and pwszLogicalPrefix strings allocated by GetRootAndLogicalPrefixPaths()
    CHK_COM(spBackup4->GetRootAndLogicalPrefixPaths(pwszWriterComponentPath, &pwszRootPath, &pwszLogicalPrefix));
    // pwszRootPath      == "\\server1\guests"
    // pwszLogicalPrefix == "\\dfsworld\logical\path\guests"

    CHK_COM(spBackup->StartSnapshotSet(&shadowCopySetId));
    CHK_COM(spBackup->AddToSnapshotSet(pwszRootPath, GUID_NULL, &shadowCopyId));
    CHK_COM(spBackup->DoSnapshotSet(&spAsync));
    CHK_COM(spAsync->Wait());
    CHK_COM(spBackup->GetSnapshotProperties(shadowCopyId, &shadowCopyProp));

    // Build the shadow copy relative path for the file copy:
    // 1. Initialize it to the shadow copy share path \\server1\guests@{guid}
    shadowCopyPath = shadowCopyProp.m_pwszSnapshotDeviceObject;
    // 2. Get the relative path below the logical prefix path, "\vm1".
    //    This is based on the logical prefix path returned from GetRootAndLogicalPrefixPaths.
    relativePath = pwszWriterComponentPath + wcslen(pwszLogicalPrefix);
    // 3. Append the relative path and a trailing separator: \\server1\guests@{guid}\vm1\
    shadowCopyPath.Append(relativePath);
    shadowCopyPath.Append(L"\\");

    // For each file (backupItem) that matches the file spec under shadowCopyPath, copy the file to the backup store:
    // \\server1\guests@{guid}\vm1\vm1.vhd
    backupItem = shadowCopyPath;
    backupItem.Append(pwszWriterComponentFileSpec);
    h = CreateFile(
        backupItem,
        GENERIC_READ,
        FILE_SHARE_READ,
        NULL,
        OPEN_EXISTING,
        FILE_FLAG_BACKUP_SEMANTICS,
        NULL);
    // BackupRead and copy from backupItem

    _Exit:

    // The caller must free the strings allocated by GetRootAndLogicalPrefixPaths()
    if (pwszRootPath != NULL)
        CoTaskMemFree(pwszRootPath);
    if (pwszLogicalPrefix != NULL)
        CoTaskMemFree(pwszLogicalPrefix);

    VSS writer

    No change is needed in a VSS writer to support this feature, as long as the application itself allows storing its data on an SMB share.

    VSS provider

    On the application server, the new File Share Shadow Copy Provider handles the file share shadow copy life cycle management. No change is needed in existing VSS software/hardware providers to support this feature.

    Developer tools

    Betest for backup/restore Hyper-V over SMB

    BETest is a VSS requester that tests advanced backup and restore operations. It has been included in the Microsoft Windows Software Development Kit (SDK) since Windows Vista. However, only the BETest in the Windows Server 2012 SDK has been extended to support this feature. Full details about BETest can be found in the BETest documentation.

    Below is an example of how to perform an online backup and restore of a Hyper-V VM running over SMB shares. Please refer to the Hyper-V over SMB guide to configure Hyper-V over SMB.

    1. Download the Windows 8 SDK, which includes the BETest tool.

    2. Get the VMId of the Hyper-V VM to be backed up by running a PowerShell cmdlet. Assuming the VM to be backed up is named test-vm10, you run:

    PS C:\test> get-vm -name test-vm10|select vmid

    VMId

    —-

    c8a460ae-8aa2-4219-8c4f-532479fb854a

    3. Create the backup component file vm10.xml for backup.

    The Hyper-V writer ID is a GUID constant, {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}. Each VM is handled by the Hyper-V writer individually as a separate component, with a componentName equal to its VMId. To select more VMs for backup, repeat step 2 to get all the VMIds and put each into its own <Component></Component> element; a sketch for generating the file follows the XML below.

    <BETest>

    <Writer writerid="66841cd4-6ded-4f4b-8f17-fd23f8ddc3de">

    <Component componentName="c8a460ae-8aa2-4219-8c4f-532479fb854a"></Component>

    </Writer>

    </BETest>
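
    If you have many VMs to protect, building this file by hand gets tedious. Here is a hedged PowerShell sketch that generates a component file (the output file name allvms.xml is an assumption) with one <Component> element per VM on the host, using the Hyper-V writer ID above:

    $writerId = "66841cd4-6ded-4f4b-8f17-fd23f8ddc3de"
    # One <Component> element per VM on this host, keyed by VMId
    $components = Get-VM | ForEach-Object { "<Component componentName=""$($_.VMId)""></Component>" }
    $xml = "<BETest><Writer writerid=""$writerId"">$($components -join '')</Writer></BETest>"
    Set-Content -Path .\allvms.xml -Value $xml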

    4. Create a shadow copy and Full backup based on the components selected by vm10.xml.

    Upon completion, the application-consistent VHD and VM configuration files for the VM (test-vm10) will be snapshotted and copied to the backup destination specified with the /D parameter (C:\backupstore). A new backup document, vm10backup.xml, specified by the /S parameter, will be created as well. It will be used together with the files in the backup destination for restore later.

    betest /b /V /T FULL /D C:\backupstore /X vm10.xml /S vm10backup.xml

    If you just want to create the shadow copy for testing purposes, without waiting for the lengthy file copy, you can run the command without the /D and /S parameters, as shown below.

    betest /b /V /T FULL /X vm10.xml

    During step 4, the BETest output includes a lot of writer information during shadow copy creation and backup. The key to determining whether the backup succeeded is to check the writer status at the end of the output after the backup completes, as shown below. You need to rerun step 4 until you get an application-consistent backup, indicated by a STABLE writer state after backup. If you keep getting errors during step 4, the vsstrace and Netmon traces discussed below are the most useful tools to help the investigation.

    status After Backup Complete (14 writers)

    Status for writer Task Scheduler Writer: STABLE(0x00000000)

    Status for writer VSS Metadata Store Writer: STABLE(0x00000000)

    Status for writer Performance Counters Writer: STABLE(0x00000000)

    Status for writer Microsoft Hyper-V VSS Writer: STABLE(0x00000000)

    Status for writer System Writer: STABLE(0x00000000)

    Application error: (0; <unknown error>)

    Status for writer ASR Writer: STABLE(0x00000000)

    Application error: (0; <unknown error>)

    Status for writer BITS Writer: STABLE(0x00000000)

    Application error: (0; <unknown error>)

    Status for writer Shadow Copy Optimization Writer: STABLE(0x00000000)

    Application error: (0; <unknown error>)

    Status for writer Dedup Writer: STABLE(0x00000000)

    Application error: (0; <unknown error>)

    Status for writer Registry Writer: STABLE(0x00000000)

    Application error: (0; <unknown error>)

    Status for writer COM+ REGDB Writer: STABLE(0x00000000)

    Application error: (0; <unknown error>)

    Status for writer WMI Writer: STABLE(0x00000000)

    Application error: (0; <unknown error>)

    Status for writer Cluster Database: STABLE(0x00000000)

    Application error: (0; <unknown error>)

    Status for writer Cluster Shared Volume VSS Writer: STABLE(0x00000000)

    Application error: (0; <unknown error>)

    5. Restore the VM

    You can restore the VM by specifying the backup document using the /S parameter and the backup store location using the /D parameter.

    betest /R /D c:\backupstore /S vm10backup.xml

    VSStrace for file share shadow copy

    This feature adds two major components to Windows Server 2012:

  • File Share Shadow Copy Provider running on the application server
  • File Share Shadow Copy Agent running on the file server or file server clusters
    Both of them support detailed ETW traces that are compatible with the existing VSSTrace tool included in the Windows 8 SDK, which makes it easy to correlate File Share Shadow Copy Provider/Agent activities with the VSS infrastructure for trace analysis.

    To turn on logging on the application server, open an elevated command prompt and run

    Vsstrace.exe -f 0 +WRITER +COORD +FSSPROV +indent -o vssTrace_as.log

    To turn off tracing on the application server, go back to the command prompt on that machine and hit Ctrl+C. The log file vssTrace_as.log generated is a text file that contains detailed information about the activities of the File Share Shadow Copy Provider, VSS and VSS writers.

    To turn on logging on the file server, open an elevated command prompt and run:

    Vsstrace.exe -f 0 +WRITER +COORD +FSSAGENT +indent -o vssTrace_fs.log

    To turn off tracing on the file server, go back to the command prompt on that machine and hit Ctrl+C. The log file vssTrace_fs.log generated is a text file that contains detailed information about the activities of the File Share Shadow Copy Agent, VSS and VSS writers.

    If you hit a Hyper-V host-based backup issue, it is useful to gather a local VSS trace inside the guest OS. To turn on logging inside the guest OS, open an elevated command prompt in the VM and run:

    Vsstrace.exe -f 0 +WRITER +COORD +SWPRV +indent -o vssTrace_guest.log

    To turn off tracing in the VM, go back to the command prompt on that machine and hit Ctrl+C. The log file vssTrace_guest.log generated is a text file that contains detailed information about the activities of VSS, the VSS system provider and VSS writers.

    MS-FSRVP RPC protocol

    The Windows Server 2012 "File Share Shadow Copy Provider" and "File Share Shadow Copy Agent" communicate through a new RPC-based protocol called MS-FSRVP. Its open protocol architecture offers the flexibility for 3rd party ISVs/IHVs to implement their own file share shadow copy agent RPC server on non-Windows servers and interoperate with VSS-based backup applications running on Windows Server 2012. There are 13 protocol messages for shadow copy life cycle management. In addition to the protocol document available at http://msdn.microsoft.com/en-us/library/hh554852(v=prot.10).aspx, an FSRVP Netmon parser is provided to help understand and investigate protocol sequence issues.

    To trace the FSRVP activities with Netmon:

    1. Download and install Microsoft Netmon and the parser package on the application server where the shadow copy and backup are initiated.

  • Netmon from http://www.microsoft.com/en-us/download/details.aspx?id=4865
  • Netmon parser from http://nmparsers.codeplex.com/
    2. Start Netmon, click the "Tools->Options" menu and activate "Windows Profile" to enable the FSRVP parser as shown below.

    clip_image002

    3. Start a new capture, key in “FSRVP” and apply the protocol filter in the Filter window

    4. Create file share shadow copy using diskshadow/betest or 3rd party backup software that is compatible with this feature.

    In the example below, I create a shadow copy for the SMB share \\yxy-vm1\data on yxy-vm1 (file server) from yxy-vm2 (application server):

    C:\>diskshadow

    Microsoft DiskShadow version 1.0

    Copyright (C) 2012 Microsoft Corporation

    On computer: YXY-VM2, 7/2/2012 10:54:57 AM

    DISKSHADOW> add volume \\yxy-vm1\data

    DISKSHADOW> create

    Alias VSS_SHADOW_1 for shadow ID {3c36b6e5-ba12-4ba4-92c7-fa9cf1e35bcc} set as environment variable.

    Alias VSS_SHADOW_SET for shadow set ID {9c647280-b2db-40c7-b729-3b82fd71e851} set as environment variable.

    Querying all shadow copies with the shadow copy set ID {9c647280-b2db-40c7-b729-3b82fd71e851}

    * Shadow copy ID = {3c36b6e5-ba12-4ba4-92c7-fa9cf1e35bcc} %VSS_SHADOW_1%

    - Shadow copy set: {9c647280-b2db-40c7-b729-3b82fd71e851} %VSS_SHADOW_SET%

    - Original count of shadow copies = 1

    - Original volume name: \\YXY-VM1\DATA\ [volume not on this machine]

    - Creation time: 7/2/2012 10:56:02 AM

    - Shadow copy device name: \\YXY-VM1\DATA@{009F5DC8-856A-4102-9313-5BBB00024F29}

    - Originating machine: YXY-VM1

    - Service machine: yxy-vm2.dfsr-w8.nttest.microsoft.com

    - Not exposed

    - Provider ID: {89300202-3cec-4981-9171-19f59559e0f2}

    - Attributes: Auto_Release FileShare

    Number of shadow copies listed: 1

    5. As shown in the Netmon trace below, the complete shadow copy creation sequence with the FSRVP protocol includes IsPathSupported, GetSupportedVersion, SetContext, StartShadowCopySet, AddToShadowCopySet, PrepareShadowCopySet, CommitShadowCopySet, ExposeShadowCopySet, GetShareMapping and RecoveryCompleteShadowCopySet. If any error happens in the middle, the AbortShadowCopySet message is sent to cancel the file server shadow copy processing.

    clip_image004

    Conclusion

    I hope this introduction makes it easier for backup application developers to add support for this feature, which provides backup of application servers that store their data files on SMB file shares.

    Xiaoyu Yao

    Software Development Engineer


    Additional resources

    Application consistent VSS snapshot creation workflow:

    http://msdn.microsoft.com/en-us/library/aa384589(v=vs.85).


Storage Spaces Direct


Note: this post originally appeared on https://aka.ms/clausjor by Claus Joergensen.

Storage Spaces Direct enables service providers and enterprises to use industry standard servers with local storage to build highly available and scalable software-defined storage for private clouds. Enabling Storage Spaces to use local storage in highly available storage clusters allows the use of storage device types that were not previously possible, such as SATA SSDs to drive down the cost of flash storage, or NVMe flash for even better performance.

The primary use case for Storage Spaces Direct is private cloud storage, either on-prem for enterprises or in hosted private clouds for service providers. Storage Spaces Direct has two deployment modes: private cloud hyper-converged, where Storage Spaces Direct and the hypervisor (Hyper-V) are on the same servers, or as private cloud storage where Storage Spaces Direct is disaggregated (separate storage cluster) from the hypervisor. A hyper-converged deployment groups compute and storage together. This simplifies the deployment model, with compute and storage scaled in lock step. A private cloud storage deployment separates the compute and storage resources. While a more complex deployment model, it enables scaling compute and storage independently and avoids over provisioning.

A server with local storage contains internal storage devices in the chassis, in a storage enclosure physically connected only to that server, or a combination of these. This contrasts our current model in Windows Server 2012 R2, where storage devices must be in one or more shared SAS storage enclosures and the enclosure must be physically connected to all servers in the highly available cluster.

Storage Spaces Direct uses the network connecting the cluster nodes to pool all storage devices across the nodes instead of the currently required shared SAS fabric. Shared SAS fabrics can be cumbersome and complex to deploy, especially as the number of cluster nodes increases, whereas an Ethernet network fabric is simpler and well known. Another advantage of using the Ethernet network fabric is that we can benefit from our investments in SMB Direct, which enables RDMA-enabled Ethernet network adapters for the storage traffic amongst the cluster nodes (so-called “east-west” storage traffic). Using RDMA-enabled Ethernet network adapters provides lower IO latency, which improves application performance and response times, and lowers CPU consumption, which leaves more CPU resources for the application. In addition SMB 3 Multichannel enables network fault tolerance and bandwidth aggregation in deployments with multiple network adapters.

Storage Spaces Direct can scale out further, and much more simply, than a shared SAS fabric; the latter scales to four servers and four storage enclosures, with complex cabling because each node must be connected to each enclosure. Storage Spaces Direct simply connects to Ethernet networks, which do not have this limitation. Storage Spaces Direct also makes expansion simpler: you complete it by connecting additional nodes to the network and adding them to the cluster. Expansion can be done in one-node increments. Once the desired number of nodes has been added, the storage devices are added to the pool – increasing the storage capacity – and existing data can be rebalanced onto the new storage devices for improved performance, with more storage devices available to service data requests.

Storage Spaces Direct uses ReFS, aggregated into the CSV file system (CSVFS) for cluster wide data access. Using ReFS provides significant enhancements over NTFS. In addition to the existing scale, availability, and checksum-based resiliency that ReFS provides, new optimizations increase performance for Storage Spaces Direct and Hyper-V. First, Hyper-V checkpoint merges are implemented by remapping data between checkpoints, eliminating the copying I/O that was previously required. This dramatically lowers the disk load incurred during Hyper-V backup operations. Next, the file zeroing that happens upon creation of a fixed VHDX file is replaced by a pure metadata operation (the zeroing isn’t deferred, it simply doesn’t happen), making fixed VHDX creation instant and drastically reducing VM deployment time. Finally, the zeroing I/O during dynamic VHDX expansion is eliminated, further decreasing and leveling out disk load. Using ReFS allows us to reduce the I/O load of Hyper-V workloads while increasing reliability and resiliency.

Storage Spaces Direct is managed using System Center or Windows PowerShell. We have updated System Center Virtual Machine Manager (SCVMM) to support Storage Spaces Direct, including bare-metal provisioning, cluster formation, and storage provisioning and management. We also updated System Center Operations Manager (SCOM) to interface with the reimagined storage management model. Traditional storage monitoring with System Center collects health information from individual components and performs state computation and health roll-up, meaning the management pack is tightly coupled to the version and storage technology. It also requires highly knowledgeable management pack authors, making it harder for service providers and enterprises to enhance storage monitoring. The new storage monitoring model is based on a health service that is built into Storage Spaces Direct and focuses only on the relevant objects, such as the storage subsystem, volumes and file shares, and performs automatic remediation when possible. Alerts are presented only on the relevant objects, specify urgency and remediation action, and even automatically resolve when the issue is addressed.
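
If you want to peek at the health service from PowerShell rather than System Center, here is a hedged sketch using the storage cmdlets that surface it (cmdlet behavior and output are based on Windows Server 2016 and may change between preview releases):

# Current faults (alerts) raised by the built-in health service
Get-StorageSubSystem Cluster* | Debug-StorageSubSystem

# Rolled-up health and performance report for the storage cluster
Get-StorageSubSystem Cluster* | Get-StorageHealthReport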

We are collaborating closely with partners, including Cisco, Dell, Fujitsu, HP, Intel, Lenovo and Quanta, to define platforms, components and configurations providing a prescriptive Storage Spaces Direct solution for service providers and enterprises. We rigorously validate these configurations at the component, node and system level in order to provide the best possible user experience.

If you are anxious to try out Storage Spaces Direct, it is available in Windows Server 2016 Technical Preview 2, which you can download here. Instructions for deploying and setting up Storage Spaces Direct can be found here. Initial evaluation can be done using four or more generation 2 virtual machines on Windows Server 2016 Technical Preview 2. And finally, you can review my talk from the Microsoft Ignite 2015 conference, BRK3474 Enabling Private Cloud Storage using Servers with Local Disks.

Microsoft and Intel showcase Storage Spaces Direct with NVM Express at IDF ’15


Note: this post originally appeared on https://aka.ms/clausjor by Claus Joergensen.

This week I am at Intel Developer Forum 2015 in San Francisco, where we are showing a performance demo of Storage Spaces Direct Technical Preview with Intel NVM Express disk devices.

Storage Spaces Direct enables you to use industry standard servers with local storage and build highly available and scalable software-defined storage for private clouds. One of the key advantages of Storage Spaces Direct is its ability to use NVMe disk devices, which are solid-state disk devices attached through PCI Express (PCIe) bus.

We teamed up with Intel to build out a demo rack for Intel Developer Forum 2015 with a very colorful front panel:

Fig 1: Colorful front panel

The demo rack consists of the following hardware:

  • Intel® Server System R2224WTTYS-IDD (2U)
    • Boot Drive: 1 Intel® SSD DC S3710 Series (200 GB, 2.5” SFF)
  • Network per server
    • 1 Chelsio® 10GbE iWARP RDMA Card (CHELT520CRG1P10)
    • Intel® Ethernet Server Adapter X540-AT2 for management
  • Top of Rack Switch: Cisco Nexus 5548UP
Fig 2: Front of rack

Fig 3: Back of rack

The demo rack uses a hyper-converged software configuration, where Hyper-V and Storage Spaces Direct run on the same cluster. Each server is running Windows Server 2016 Technical Preview 3. Storage Spaces Direct configuration:

  • Single pool
  • 16 virtual disks
  • 3-copy mirror
  • ReFS on-disk file system
  • CSVFS cluster file system
    Each server is running eight virtual machines (128 total) as load generators. Each virtual machine is configured with:

  • 8 virtual cores
  • 7.5 GB memory
  • Compute equivalent to Azure A4 sizing
  • DISKSPD for load generation
  • 8 threads per DISKSPD instance
  • Queue depth of 20 per thread

    We are showcasing the demo rack with a few different workload profiles: 100% 4K read, 90%/10% 4K read/write and 70%/30% 4K read/write. We are happy with the results given where we are in the development cycle.

    image

    The performance demonstration at IDF ’15 captures where we are with Storage Spaces Direct Technical Preview and demonstrates great collaboration between Intel and Microsoft. We also identified several areas where we can improve our software stack, and are looking forward to sharing future results as we work towards addressing these on our Storage Spaces Direct journey.

    We recorded a few videos so you can see the performance demonstration even if you do not attend IDF ’15.

    Introduction to Storage Spaces Direct

    100% 4K Read

    100% 4K Read with Storage Quality of Service (QoS)

    70%/30% 4K Read/Write with Storage Quality of Service (QoS)

    Cheers

    Claus

    Links and stuff:

    twitter: @ClausJor

    Storage Spaces Direct deployment guide: http://aka.ms/s2d-deploy

    Storage Spaces Direct feedback: s2d_feedback@microsoft.com

Storage Spaces Direct – Under the hood with the Software Storage Bus


Note: this post originally appeared on https://aka.ms/clausjor by Claus Joergensen.

Hello, Claus here again, this time at 30,000 feet on a plane going back to Denmark for my dad’s 80th birthday. I think it is time that we explore some of the inner workings of Storage Spaces Direct (S2D) – it is much more exciting than any movie in the entertainment system. We are going to look at the Software Storage Bus, which is the central nervous system of Storage Spaces Direct. If you don’t already know what Storage Spaces Direct is, please see my blog post introducing Storage Spaces Direct.

Software Storage Bus introduction

The Software Storage Bus (SSB) is a virtual storage bus spanning all the servers that make up the cluster. SSB essentially makes it possible for each server to see all disks across all servers in the cluster, providing full mesh connectivity. SSB consists of two components on each server in the cluster: ClusPort and ClusBlft. ClusPort implements a virtual HBA that allows the node to connect to disk devices in all the other servers in the cluster. ClusBlft implements virtualization of the disk devices and enclosures in each server for ClusPort in other servers to connect to.

Figure 1: Windows Server storage stack with the Software Storage Bus in green.
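
One way to see the full mesh that the Software Storage Bus provides is to query the clustered storage subsystem once Storage Spaces Direct is enabled; here is a minimal sketch (the *Cluster* name pattern matches the clustered storage subsystem):

# Every local disk device from every node, surfaced through the Software Storage Bus
Get-StorageSubSystem *Cluster* | Get-PhysicalDisk |
    Format-Table FriendlyName, MediaType, BusType, CanPool, Size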

SMB as transport

SSB uses SMB3 and SMB Direct as the transport for communication between the servers in the cluster. SSB uses a separate named instance of SMB in each server, which separates it from other consumers of SMB, such as CSVFS, to provide additional resiliency. Using SMB3 enables SSB to take advantage of the innovation we have done in SMB3, including SMB Multichannel and SMB Direct. SMB Multichannel can aggregate bandwidth across multiple network interfaces for higher throughput and provide resiliency to a failed network interface (for more information about SMB Multichannel go here). SMB Direct enables the use of RDMA-enabled network adapters, including iWARP and RoCE, which can dramatically lower the CPU overhead of doing IO over the network and reduce the latency to disk devices (for more information about SMB Direct go here). I did a demo at the Microsoft Ignite conference back in May showing the IOPS difference in a system with and without RDMA enabled (the demo is towards the end of the presentation).
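
To confirm that SMB Direct and SMB Multichannel are actually in play between the cluster nodes, here is a hedged sketch using the standard SMB and network cmdlets:

# RDMA-capable network adapters on this node
Get-NetAdapterRdma | Where-Object Enabled

# SMB's view of the interfaces, including RSS and RDMA capability
Get-SmbClientNetworkInterface

# Active SMB Multichannel connections and whether RDMA is being used
Get-SmbMultichannelConnection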

Software Storage Bus Bandwidth Management

SSB also implements a fair access algorithm that ensures fair device access from any server to protect against one server starving out other servers. It also implements an algorithm for IO prioritization that prioritizes Application IO, which usually is IO from virtual machines, over system IO, which usually would be rebalance or repair operations. However, at the same time it ensures that rebalance and repair operations can make forward progress. Finally, it implements an algorithm that de-randomizes IO going to rotational disk devices to drive a more sequential IO pattern on these devices, despite the IO coming from the application (virtual machines) being a random IO pattern.

Software Storage Bus Cache

Finally, SSB implements a caching mechanism, which we call Storage Bus Cache (SBC). SBC is scoped to each server (per node cache) and is agnostic to the storage pools and virtual disks defined in the system. SBC is resilient to failures as it sits underneath the virtual disk, which provides resiliency by writing data copies to different nodes. When S2D is enabled in a cluster, SBC identifies which devices to use as caching devices and which devices are capacity devices. Caching devices will, as the name suggests, cache data for the capacity devices, essentially creating hybrid disks. Once it has been determined whether a device is a caching device or a capacity device, the capacity devices are bound to a caching device in a round-robin manner, as shown in the diagram below. Rebinding will occur if there is a topology change, such as if a caching device fails.

Figure 2: Storage Bus Cache in a hybrid storage configuration with SATA SSD and SATA HDD

The behavior of the caching devices is determined by the actual disk configuration of the system and outlined in the table below:

image

In systems with rotational capacity devices (HDD), SBC will act as both a read and write cache. This is because there is a seek penalty on rotational disk devices. In systems with all flash devices (NVMe SSD + SATA SSD), SBC will only act as a write cache. Because the NVMe devices will absorb most of the writes in the system, it is possible to use mixed-use or even read-intensive SATA SSD devices, which can lower the overall cost of flash in the system. In systems with only a single tier of devices, such as an all NVMe system or all SATA SSD system, SBC will need to be disabled. For more details on how to configure SBC, please see the Storage Spaces Direct experience guide here.

SBC creates a special partition on each caching device that, by default, consumes all available capacity except 32GB. The 32GB is used for storage pool and virtual disk metadata. SBC uses memory for runtime data structures, about 10GB of memory per TB of caching devices in the node. For instance, a system with 4x 800GB caching devices requires about 32GB of memory to manage the cache, in addition to what is needed for the base operating system and any hosted hyper-converged virtual machines.
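
The memory overhead is easy to estimate from the caching devices in the pool. Here is a minimal sketch of the arithmetic above, using the rough 10GB of memory per TB of caching devices rule of thumb (this assumes the caching devices show Usage equal to Journal):

# Sum the capacity of the caching devices (Usage = Journal) and apply ~10GB of RAM per TB of cache
$cacheBytes = (Get-PhysicalDisk | Where-Object Usage -eq 'Journal' | Measure-Object -Property Size -Sum).Sum
$cacheTB = $cacheBytes / 1TB
"{0:N1} TB of caching devices needs approximately {1:N0} GB of memory for SBC" -f $cacheTB, ($cacheTB * 10)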

I hope you enjoyed reading this as much as I enjoyed writing it. I still have a couple of hours left on my flight, maybe I should try and catch some sleep. Until next time.

Storage Spaces Direct in Technical Preview 4


Note: this post originally appeared on https://aka.ms/clausjor by Claus Joergensen.

Hello, Claus here again. This time we are going to take a look at a couple of the key enhancements to Storage Spaces Direct that come alive in Windows Server Technical Preview 4, namely Multi-Resilient Virtual Disks and ReFS Real-Time Tiering. Combined, these solve two key issues that we have in Windows Server 2012/R2 Storage Spaces. The first issue is that parity spaces only work well for archival/backup workloads – they do not perform well enough for workloads such as virtual machines. The second issue is that the tiering mechanism is ‘after the fact’ tiering, in that the system collects information about hot and cold user data, but only moves this user data in and out of the faster tier as a scheduled task using this historical information.

I suggest reading my blog post on the Software Storage Bus and the Storage Bus Cache, which contains important information about how they work; both sit underneath the virtual disks and file systems.

Multi-Resilient Virtual Disks

A multi-resilient virtual disk is a virtual disk with one part that is mirrored and another part that is erasure coded (parity).

Figure 1 Virtual disk with both mirror and parity tier.

To arrive at this configuration, the administrator defines two tiers, just like in Windows Server 2012 R2; however, this time the tiers are defined by their resiliency setting rather than by media type. Let’s take a look at a PowerShell example for a system with SATA SSD and SATA HDD (the Technical Preview 4 deployment guide also includes an example for an all-flash system with NVMe + SATA SSD):

# 1: Enable Storage Spaces Direct

Enable-ClusterS2D

# 2: Create storage pool

New-StoragePool -StorageSubSystemFriendlyName *cluster* -FriendlyName S2D -ProvisioningTypeDefault Fixed -PhysicalDisk (Get-PhysicalDisk | ? CanPool -eq $true)

# The below step is not needed in a flat (single tier) storage configuration

Get-StoragePool S2D | Get-PhysicalDisk |? MediaType -eq SSD | Set-PhysicalDisk -Usage Journal

# 3: Define Storage Tiers

$MT = New-StorageTier -StoragePoolFriendlyName S2D -FriendlyName MT -MediaType HDD -ResiliencySettingName Mirror -PhysicalDiskRedundancy 2

$PT = New-StorageTier -StoragePoolFriendlyName S2D -FriendlyName PT -MediaType HDD -ResiliencySettingName Parity -PhysicalDiskRedundancy 2

# 4: Create Virtual Disk

New-Volume -StoragePoolFriendlyName S2D -FriendlyName <VirtualDiskName> -FileSystem CSVFS_ReFS -StorageTiers $MT,$PT -StorageTierSizes 100GB,900GB

The first two steps enable Storage Spaces Direct and create the storage pool. In the third step we define the two tiers. Notice that we use the "ResiliencySettingName" parameter in the definition of the tiers, where the MT tier has "ResiliencySettingName" set to "Mirror" and the PT tier has "ResiliencySettingName" set to "Parity". When we subsequently create the virtual disk, we specify the size of each tier, in this case 100GB of mirror and 900GB of parity, for a total virtual disk size of 1TB. ReFS uses this information to control its write and tiering behavior (which I will discuss in the next section).

The overall footprint of this virtual disk on the pool is 100GB * 3 (for a three-copy mirror) + 900GB * 1.57 (for 4+3 erasure coding), which totals ~1.7TB. Compare this to a similarly sized virtual disk using only a 3-copy mirror, which would have a footprint of 3TB.

Also notice that we specified "MediaType" as HDD for both tiers. If you are used to Windows Server 2012 R2 you would think that this is an error, but it is actually on purpose. For all intents and purposes the "MediaType" is irrelevant, as the SSD devices are already used by the Software Storage Bus and Storage Bus Cache, as discussed in this blog post.

ReFS Real-Time Tiering

Now that we have created a multi-resilient virtual disk, let’s discuss how ReFS operates on it. ReFS always writes into the mirror tier. If the write is an update to data sitting in the parity tier, the new write still goes into the mirror tier and the old data in the parity tier is invalidated. This behavior ensures that writes always land as mirror operations, which perform best, especially for random IO workloads like virtual machines, and require the least CPU resources.

Figure 2 ReFS write and data rotation

The write will actually land in the Storage Bus Cache, below the file system and virtual disk. The beauty of this arrangement is that there is no fixed relation between the mirror tier and the caching devices, so if you happen to define a virtual disk with a mirror tier that is much larger than the actual working set for that virtual disk, you are not wasting valuable resources.

ReFS will rotate data from the mirror tier into the parity tier in larger sequential chunks as needed, and perform the erasure coding computation upon data rotation. Because the data rotation occurs in larger chunks, it will skip the write-back cache and be written directly to the capacity devices, which is OK since it is sequential IO with a lot less impact, especially on rotational disks. Also, the larger writes overwrite entire parity stripes, eliminating the need to do the read-modify-write cycles that smaller writes to a parity space would otherwise incur.

Conclusion

They say you cannot have your cake and eat it too; however, in this case you can have capacity efficiency with multi-resilient virtual disks and good performance with ReFS real-time tiering. These features are introduced in Technical Preview 4, and we expect to continue to improve performance as we move towards Windows Server 2016.

Hardware options for evaluating Storage Spaces Direct in Technical Preview 4


Note: this post originally appeared on https://aka.ms/clausjor by Claus Joergensen.

Hello, Claus here again. Right now I am at terminal B9 at Copenhagen Airport starting my trip back to the United States. This time I would like to talk a bit about options for evaluating Storage Spaces Direct in Windows Server Technical Preview 4.

You have three options for evaluating Storage Spaces Direct in Technical Preview 4:

  1. Hyper-V Virtual machines
  2. Validated server configurations from our partners
  3. Existing hardware that meets the requirements

Hyper-V Virtual Machines

Using Hyper-V virtual machines is a quick and simple way to get started with Storage Spaces Direct. You can use it to get a basic understanding of how to set up and manage Storage Spaces Direct. Having said that, you will not be able to experience all features or the full performance of Storage Spaces Direct. To evaluate Storage Spaces Direct, you will need at least four virtual machines, each with at least two data disks. For more information on how to do this, see Testing Storage Spaces Direct using Windows Server 2016 virtual machines.

Note: make sure not to use the ‘processor compatibility’ option on Hyper-V virtual machines used for Storage Spaces Direct. Processor compatibility masks certain processor capabilities and will prevent Storage Spaces Direct from being used, even if the physical processor supports the required capabilities.
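
If you just want a feel for the moving parts, here is a minimal, hypothetical sketch of creating four such VMs, each with two data disks, on a Hyper-V host. The base image path, VM paths, switch name, and sizes are all placeholders; follow the linked guide for the supported steps.

# Hypothetical lab sketch: 4 Gen-2 VMs, each with 2 blank data disks, on an existing virtual switch.
1..4 | ForEach-Object {
    $vmName = "S2D-Node$_"
    $vmPath = "D:\VMs\$vmName"
    New-Item -ItemType Directory -Path $vmPath -Force | Out-Null
    Copy-Item "D:\Base\WS2016TP4.vhdx" "$vmPath\os.vhdx"    # your prepared Technical Preview 4 image (assumption)
    New-VM -Name $vmName -Generation 2 -MemoryStartupBytes 8GB -VHDPath "$vmPath\os.vhdx" -SwitchName "LabSwitch" -Path "D:\VMs"
    Set-VMProcessor -VMName $vmName -Count 2 -CompatibilityForMigrationEnabled $false    # leave processor compatibility off
    1..2 | ForEach-Object {
        New-VHD -Path "$vmPath\data$_.vhdx" -Dynamic -SizeBytes 100GB | Out-Null
        Add-VMHardDiskDrive -VMName $vmName -Path "$vmPath\data$_.vhdx"
    }
}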

Validated server configurations from our partners

We are working closely with our hardware partners to define and validate server configurations for Storage Spaces Direct. Using these configurations is the best option for evaluating Storage Spaces Direct: our partners have validated that they work well with Storage Spaces Direct, and you can experience the full feature set as well as the performance potential.

Several of our partners are ready to help you. Below are links to our partners detailing the hardware configuration, how to purchase and deploy the hardware:

Dell

Fujitsu

Lenovo

Once the hardware and Windows Server Technical Preview 4 is deployed, see the Storage Spaces Direct experience guide to complete the deployment.

Existing Hardware

We highly recommend using server configurations from our partners that are in the process of being validated, as we have worked closely with them to ensure they function properly and provide the best overall experience. If it is not possible to use one of these configurations, you may be able to evaluate Storage Spaces Direct in Technical Preview 4 with your existing hardware if it meets the hardware and configuration requirements below.

Note: This is not a statement of support for your particular configurations; these requirements are current as of Technical Preview 4 and might change.

Configuration

Storage Spaces Direct requires at least four servers that are expected to be of the same configuration, meaning identical CPU and memory configuration, identical network interface cards, storage controllers and devices. The servers run the same software load and are configured as a Windows Server Failover Cluster.
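
If it helps to visualize the end state, the basic flow of standing such a cluster up might look roughly like this. Server and cluster names are placeholders, and the TP4 cmdlet names may change before release.

# Rough sketch; run from a machine with the failover clustering management tools.
$nodes = "S2D-Node1", "S2D-Node2", "S2D-Node3", "S2D-Node4"

Invoke-Command -ComputerName $nodes { Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools }

Test-Cluster -Node $nodes -Include "Storage Spaces Direct", Inventory, Network, "System Configuration"
New-Cluster -Name S2DCluster -Node $nodes -NoStorage
Enable-ClusterStorageSpacesDirect -CimSession S2DCluster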

Using at least four servers (up to 16) provides the best storage resiliency and availability, and it satisfies the requirements for both mirrored configurations with 2 or 3 copies of data and for dual parity with erasure coded data. The below table outlines the resiliency for these storage configurations:

[Table: resiliency of 2-copy mirror, 3-copy mirror, and dual parity configurations]

CPU

The servers in a Storage Spaces Direct configuration are generally expected to use a dual-socket configuration with modern CPUs (Intel® Xeon® Processor E5 v3 family) for the best flexibility. The CPU requirements depend on the deployment mode.

In the disaggregated deployment mode (Scale-Out File Server mode), the CPU is primarily consumed by storage and network IO, but is also used by advanced storage operations such as erasure coding.

In the hyper-converged deployment mode (virtual machines hosted on the same cluster as S2D), the CPU serves the VM workload as well as the storage and networking requirements. This mode will generally require more CPU horsepower, so more cores and faster processors will allow more VMs to be hosted on the system.

Memory

The recommended minimum is 128GB, which allows for the best memory performance (balanced across the number of memory channels) and provides memory for the base operating system and the Software Storage Bus Cache in Storage Spaces Direct. For more information on the Software Storage Bus Cache, see this blog post.

128GB is sufficient for the disaggregated deployment mode or for a hyper-converged deployment with a smaller number of VMs. Hyper-converged deployments with a larger number of VMs will require additional memory, depending on how many VMs are hosted and how much memory each VM consumes.

Network interface cards

Storage Spaces Direct requires a minimum of one 10GbE network interface card (NIC) per server.

Most configurations, like a general purpose hyper-converged configuration, will perform most efficiently and reliably using 10GbE or faster NICs with Remote Direct Memory Access (RDMA) capability. The RDMA type should be either RoCE (RDMA over Converged Ethernet) or iWARP (Internet Wide Area RDMA Protocol).

If the configuration is primarily for backup or archive-like workloads (large sequential IO), a 10GbE network interface card (NIC) without RDMA capability is sufficient.

In both cases, a single, dual-ported NIC provides the best performance and resiliency to network connectivity issues.

You can use the following commands to inspect the link speed and RDMA capabilities of your network adapter:

Get-NetAdapter | FT Name, InterfaceDescription, LinkSpeed -autosize
Name                 InterfaceDescription                       LinkSpeed
----                 --------------------                       ---------
SLOT 6 2             Mellanox ConnectX-3 Ethernet Adapter #2    10 Gbps
SLOT 6               Mellanox ConnectX-3 Ethernet Adapter       10 Gbps

Get-NetAdapterRDMA | FT Name, InterfaceDescription, Enabled -autosize
Name                 InterfaceDescription                       Enabled
----                 --------------------                       -------
SLOT 6 2             Mellanox ConnectX-3 Ethernet Adapter #2    True
SLOT 6               Mellanox ConnectX-3 Ethernet Adapter       True

 

Network configuration and switches

The simple diagram below captures the available switch configurations for a four-server cluster with variations of NIC and NIC port count. Network deployments #2 and #3 are strongly preferred for the best network resiliency and performance. Each network interface of the server should be in its own subnet: for diagram #1 there will be one IP subnet for the servers, while for diagrams #2 and #3 there will be two IP subnets and each server will be connected to both. IP subnet separation is needed for the proper use of both network interfaces in the Windows Failover Clustering configuration. If RoCE is used, the physical switches must be configured properly to support RoCE with the appropriate Data Center Bridging (DCB) and Priority Flow Control (PFC) settings.
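
The switch-side commands vary by vendor, but on the Windows side a RoCE deployment also needs DCB and PFC configured on each server. The following is only a hypothetical sketch: the adapter names are taken from the example output above, and the use of priority 3 for SMB Direct is a common convention, not a requirement stated in this post.

# Hypothetical host-side DCB/PFC configuration for RoCE NICs.
Install-WindowsFeature -Name Data-Center-Bridging

# Tag SMB Direct (port 445) traffic with priority 3 and enable PFC for that priority only.
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

# Reserve bandwidth for SMB and apply the DCB settings to the RDMA adapters.
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
Enable-NetAdapterQos -Name "SLOT 6", "SLOT 6 2"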

The network switch depends on the choice of Network Interface (NIC) used at the server.

If non-RDMA NICs are chosen, then the network switch needs to meet basic Windows Server requirements for Network Layer 2 support.

For RDMA capable NICs, the RDMA type of the NIC will have additional requirements of the switch:

  • For iWARP capable NICs, the requirements are the same as for non-RDMA capable NICs
  • For RoCE capable NICs, the network switches must provide Enhanced Transmission Selection (802.1Qaz) and Priority-based Flow Control (802.1p/Q and 802.1Qbb)
  • Mappings of traffic class (TC) markings between L2 domains must be configured between switches that carry RDMA traffic.

Storage Devices

Each server in a Storage Spaces Direct configuration must have the same total number of storage devices, and if the server has a mix of storage device types (e.g. SSD and HDD), the number of each type of device must be the same in each server.

If a storage configuration has both solid state disks (e.g. NVMe flash or SATA SSDs) and hard disk drives (HDDs), upon configuration the solid state devices (performance devices) will be used for read and write-back caching and the hard disk drives (capacity devices) will be used for data storage. If a storage configuration is all solid state (e.g. NVMe SSD and SATA SSD), upon configuration the NVMe SSD devices (performance devices) will be used for write-back caching and the SATA SSDs (capacity devices) will be used for data storage. See the Storage Spaces Direct experience guide for how to configure these storage configurations.

Each server must be configured with at least 2 performance devices and 4 capacity devices, and the number of capacity devices must be a multiple of the number of performance devices. The table below shows some example configurations for servers with 12 drive bays.

[Table: example performance/capacity device configurations for servers with 12 drive bays]

Depending on the server design, the NVMe device (a solid state disk connected via PCI Express) might either be seated directly into a PCIe slot, or PCIe connectivity might be extended into one or more drive bays so the NVMe device can be seated in those bays. Seating NVMe directly into PCIe frees the drive bays that would otherwise hold NVMe devices for additional capacity devices, increasing the total number of storage devices.

It is important to use flash-based storage devices with sufficient endurance. Endurance is often expressed as ‘drive writes per day’ or DWPD. If the endurance is too low, the flash devices are likely to fail, or significantly throttle write IO, sooner than expected. If the endurance is too high, the flash devices will be fine, but you will have wasted money on more expensive devices than you needed. Calculating the necessary endurance can be tricky, as there are many factors involved, including the actual daily write churn, read operations resulting in read cache churn, and write amplification as a result of the desired resiliency. Note that in an all-flash configuration with NVMe + SSD, the NVMe devices will absorb the majority of the writes, so SSDs with lower endurance can be used, resulting in lower cost.
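
As a purely illustrative back-of-envelope check (every number below is a made-up assumption, not guidance from this post), you can estimate the endurance a cache device needs like this:

# Hypothetical endurance estimate; all inputs are assumptions for illustration only.
$dailyWriteChurnTB     = 2      # estimated daily writes landing on this server, in TB
$cacheDevicesPerServer = 2      # performance (cache) devices in the server
$writeAmplification    = 3      # rough factor for resiliency copies, cache churn, etc.
$deviceCapacityTB      = 0.8    # capacity of each cache device, in TB

$dailyWritesPerDeviceTB = ($dailyWriteChurnTB / $cacheDevicesPerServer) * $writeAmplification
$requiredDWPD = $dailyWritesPerDeviceTB / $deviceCapacityTB
"Required endurance: roughly {0:N1} drive writes per day" -f $requiredDWPD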

Storage Device Connectivity

Storage Spaces Direct supports three storage device attach types: NVMe, SATA, and SAS. NVMe devices are connected via PCI Express (PCIe). SATA and SAS devices can be either SSDs or HDDs, and all of them must be attached to a SAS Host Bus Adapter (HBA). This HBA must be a “simple” HBA, which means the devices show up as SAS devices in Windows Server.

The HBA must be attached to a SAS expander, and the SATA or SAS devices must be attached to that SAS expander. The following (very simplified) diagram captures the basic idea of storage device connectivity.

You can use the following command to inspect the storage device media type and bus type in your system:

Get-PhysicalDisk | FT FriendlyName, Mediatype, BusType -autosize
FriendlyName          MediaType BusType
------------          --------- -------
NVMe INTEL SSDPEDMD01 SSD       NVMe
NVMe INTEL SSDPEDMD01 SSD       NVMe
ATA ST4000NM0033-9ZM  HDD       SAS
ATA ST4000NM0033-9ZM  HDD       SAS
ATA ST4000NM0033-9ZM  HDD       SAS
ATA ST4000NM0033-9ZM  HDD       SAS
ATA ST4000NM0033-9ZM  HDD       SAS
ATA ST4000NM0033-9ZM  HDD       SAS
ATA ST4000NM0033-9ZM  HDD       SAS
ATA ST4000NM0033-9ZM  HDD       SAS
ATA ST4000NM0033-9ZM  HDD       SAS
ATA ST4000NM0033-9ZM  HDD       SAS

The devices must show with BusType as either SAS (even for SATA devices) or NVMe (you can ignore your boot and system devices). If the devices show with BusType SATA, they are connected via a SATA controller, which is not supported. If the devices show with BusType RAID, the controller is not a “simple” HBA but a RAID controller, which is not supported. In addition, the devices must report the correct media type, either HDD or SSD.

In addition, all devices must have a unique disk signature, and each device must report a unique device ID. If multiple devices show the same ID, the disk signature is not unique, which is not supported.
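
One quick way to spot duplicates is to group the devices by their unique ID; the following should return nothing on a healthy configuration:

Get-PhysicalDisk | Group-Object UniqueId | Where-Object Count -gt 1 | Select-Object -ExpandProperty Group | FT FriendlyName, SerialNumber, UniqueId -autosize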

Finally, the server or the external storage enclosure (if one is used) must meet the Windows Server requirements for Storage Enclosures.

You can use the following command to inspect the presence of storage enclosures in your system:

Get-StorageEnclosure | FT FriendlyName
FriendlyName
------------
DP BP13G+EXP

There must be at least one enclosure listed per SAS HBA in the cluster. If the system has any NVMe devices, there will also be one enclosure listed per server, with vendor MSFT.

In addition, all storage enclosures must have a unique ID. If multiple enclosures show the same ID, the enclosure ID is not unique, which is not supported.
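
The same kind of check works for enclosures; again, this should return nothing:

Get-StorageEnclosure | Group-Object UniqueId | Where-Object Count -gt 1 | Select-Object -ExpandProperty Group | FT FriendlyName, UniqueId -autosize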

Storage Spaces Direct does not support Multipath I/O (MPIO) for SAS storage device configurations. Single-port SAS device configurations can be used.

Summary of requirements

  • All servers and server components must be Windows Server Certified
  • A minimum of 4 servers and a maximum of 16 servers
  • Dual-socket CPU and 128GB of memory
  • 10GbE or faster networking for the NICs
  • RDMA capable NICs are strongly recommended for best performance and density
  • If RDMA capable NICs are used, the physical switch must meet the associated RDMA requirements
  • All servers must be connected to the same physical switch (or switches, as in example 3 above)
  • A minimum of 2 performance devices and 4 capacity devices per server, with the number of capacity devices being a multiple of the number of performance devices
  • A simple HBA is required for SAS and SATA devices; RAID controllers or SAN/iSCSI devices are not supported
  • All disk devices and enclosures must have a unique ID
  • MPIO or physically connecting disks via multiple paths is not supported
  • The storage devices in each server must have one of the following configurations:
    • NVMe + SATA or SAS SSD
    • NVMe + SATA or SAS HDD
    • SATA or SAS SSD + SATA or SAS HDD

That is it from me this time.

Work Folders for Android – Released


We are happy to announce that an Android app for Work Folders has been released into the Google PlayStore® and is available as a free download.

Work Folders for Android app in the Google PlayStore

There are also versions for iPad and iPhone.

 

Overview

Work Folders is a Windows Server feature since 20012R2 that allows individual employees to access their files securely from inside and outside the corporate environment. This app connects to it and enables file access on your Android Phone and Tablet. Work Folders enables this while allowing the organization’s IT department to fully secure that data.

This app features an intuitive UI, selective sync, end-to-end encryption, and search.
To view a file, you can open it in an appropriate app on the device.

Work Folders for Android - file browser


Work Folders App on Android – Features

 

  • Pin files for offline viewing.
    This saves storage space by showing all available files while locally storing, and keeping in sync, only the files you care about.
  • Files are stored encrypted at all times. On the wire and at rest on the device.
  • Access to the app is protected by an app passcode – keeping others out even if the device is left unlocked and unattended.
  • Supports DIGEST and Active Directory Federation Services (ADFS) authentication mechanisms, including multi-factor authentication.
  • Search for files and folders
  • Open files in other apps that might be specialized to work with a certain file type.

 

Work Folders offers encryption at rest and super fast access on-demand. Quite difficult to take that screenshot :)


 

Find files and folders as you type.


 Some Android phone screenshots

We also support the phone form factor: Here showing offline access to pinned files.



Protecting your files via an app-passcode.

Android Version support

Work Folders for Android is supported on all devices running Android version 4.4 KitKat and later.
At release this covers a little over 70% of the Android market, and we believe our coverage will increase over time.

 

Blogs and Links

If you’re interested in learning more about Work Folders, here are some great resources:

  • All the goods
  • Introduction and Getting Started
  • Advanced Work Folders Deployment and Management
  • Videos

Quick Survey: Your plans for WS 2016 block replication and Azure

Heya folks, Ned here again with a quick (only 4 questions) survey on how you plan to use block replication, Storage Replica, and Azure in the coming months after RTM of Windows Server 2016. Any feedback is highly appreciated: https://microsoft.qualtrics.com/SE/?SID=SV_0U6b2tbhVmaImnX Thanks!

Data Deduplication in Windows Server 2016


Since we introduced Data Deduplication (“Dedup” for short) in Windows Server 2012, the Dedup team has been hard at work improving this feature, and our updates in Windows Server 2016 are no exception. When we started planning for Windows Server 2016, we heard very clearly from customers that performance and scale limitations prevented its use in certain scenarios where the great space savings from Dedup would really be useful. So in Windows Server 2016 we focused our efforts on making sure that Dedup is highly performant and can run at scale. Here’s what’s new in 2016:

  1. Support for Volume Sizes Up to 64 TB
    In Windows Server 2012 R2, Dedup optimizes data using a single-threaded job and I/O queue for each volume. While this works well for a lot of scenarios, you have to consider the workload type and the volume size to ensure that the Dedup Processing Pipeline can keep up with the rate of data changes, or “churn”. Typically this means that Dedup doesn’t work well for volumes greater than 10 TB in size (or less for workloads with a high rate of data changes). In Windows Server 2016, we went back to the drawing board and fully redesigned the Dedup Processing Pipeline. We now run multiple threads in parallel using multiple I/O queues on a single volume, resulting in performance that was only possible before by dividing up data into multiple, smaller volumes:

    The result is that our volume guidance changes to a very simple statement: Use the volume size you need, up to 64TB.
  2. Support for File Sizes up to 1 TB
    While Windows Server 2012 R2 supports the use of file sizes up to 1TB, files “approaching” this size are noted as “not good candidates” for Data Deduplication. In Windows Server 2016, we made use of new stream map structures and improved our partial file optimization, to support Dedup for file sizes up to 1 TB. These changes also improve overall Deduplication performance.
  3. Simplified Dedup Deployment for Virtualized Backup Applications
    While running Dedup for virtualized backup applications, such as DPM, is a supported scenario in Windows Server 2012 R2, we have radically simplified the deployment steps for this scenario in 2016. We have combined all the relevant Dedup configuration settings needed to support virtualized backup applications into a new preset configuration type called, as you might expect, “Virtualized Backup Server” (“Backup” in PowerShell); see the example after this list.
  4. Nano Server Support
    Nano Server is a new headless deployment option in Windows Server 2016 that has a far smaller system resource footprint, starts up significantly faster, and requires fewer updates and restarts than the Windows Server Core deployment option. Data deduplication is fully supported on Nano Server!
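
For item 3, here is a minimal sketch of what enabling the new preset might look like on a volume used by a virtualized backup server; the drive letter is a placeholder:

# Illustrative example: enable Dedup with the "Backup" usage type on volume E:.
Install-WindowsFeature -Name FS-Data-Deduplication
Enable-DedupVolume -Volume "E:" -UsageType Backup

# Check the savings later on.
Get-DedupVolume -Volume "E:" | Format-List Volume, SavedSpace, SavingsRate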

Please let us know if you have any questions about Dedup in Windows Server 2016, and I’ll be happy to answer them! We also love to hear any feature requests or scenarios you’d like to see Dedup support (like support for Dedup on ReFS) in future versions of Windows Server, so feel free to send those to us as well! You can leave comments on this post, or you can send the Dedup team a message directly at dedupfeedback@microsoft.com. Thanks!

Deploy an entire Windows Server 2016 Software Defined Storage lab in minutes


Heya folks, Ned here again. Superstar Microsoft PFE Jaromir Kaspar recently posted a Windows Server 2016 lab creation tool on CodePlex and I highly recommend it. Download the Windows Server 2016 ISO file and bing-bang-bong, the tool kicks out an entire working datacenter of clusters, Storage Spaces Direct, Storage Replica, scale-out file servers, Nano servers, and more. All the Software Defined Storage you could ever want, with none of the boring build time – just straight into the usage and tinkering and learning. It’s a treat.

You just run the 1_prereq.ps1 file and it readies the lab. Then run 2_createparentdisks.ps1 and pick an ISO to create your VHDXs. Then use 3_deploy.ps1 to blast out everything. Voila. So simple, a Ned can do it.
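
If you want to see the flow end to end, it is roughly this; the folder path is a placeholder for wherever you extracted the scripts:

# Run from an elevated PowerShell prompt on a host with Hyper-V available.
Set-Location C:\WSLab                # placeholder path to the extracted scripts
.\1_prereq.ps1                       # readies the lab folder and prerequisites
.\2_createparentdisks.ps1            # point it at the Windows Server 2016 ISO to build parent VHDXs
.\3_deploy.ps1                       # deploys the lab to local Hyper-V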


Download from here. Steps and docs are here. Questions and comments go here. What are you waiting for?

– Ned “work smarter, not harder” Pyle


Configuring Nano Server and Dedup


Data Deduplication is a feature of Windows Server since Windows Server 2012 that can reduce the on-disk footprint of your data by finding and removing duplication within a volume without compromising its fidelity or integrity. For more information on Data Deduplication, see the Data Deduplication Overview. Nano Server is a new headless deployment option in Windows Server 2016 that has a far smaller system resource footprint, starts up significantly faster, and requires fewer updates and restarts than the Windows Server Core deployment option. For more information on Nano Server, see the Getting Started with Nano Server TechNet page.

Data Deduplication is fully supported on Nano Server and while we think you’ll really love the benefits of deploying Windows Server and Dedup on the Nano Server, there are a few changes to get used to. This guide walks through a vanilla deployment of setting up Nano Server and Dedup in a test environment: this should give you the context needed for more complicated deployments.

Configuring Nano Server and Dedup

  1. Create a Nano Server Image. Unlike installing the Server Core or Full versions of Windows Server, Nano Server doesn’t have an installer option. The following commands will create a VHDX for use with Hyper-V or another hypervisor. You may also find it helpful to install Nano Server directly on a bare-metal server. In that case, simply change the file extension on the path provided to the TargetPath parameter on the New-NanoServerImage cmdlet to “.wim” rather than “.vhdx”, and follow the steps in this guide instead of the next step. On Windows 10, open a new PowerShell window with Administrator privileges and copy the following:

    $iso = Mount-DiskImage -ImagePath Y:\ISOs\WindowsServer2016_TP4.ISO -StorageType ISO -PassThru

    $isoDriveLetter = ($iso | Get-Volume).DriveLetter + ":\"

    Remove-Item -Path C:\temp\NanoServer -Recurse -Force -ErrorAction SilentlyContinue

    New-Item -Path C:\temp\NanoServer -ItemType Directory -Force

    Set-Location ($isoDriveLetter + "NanoServer\")

    Import-Module .\NanoServerImageGenerator.psm1

    $password = ConvertTo-SecureString -String "t3stp@ssword!" -AsPlainText -Force # Note: password safe for testing only

    New-NanoServerImage -GuestDrivers -MediaPath $isoDriveLetter -BasePath C:\temp\NanoServer\base -TargetPath C:\temp\NanoServer\csodedupnano.vhdx -MaxSize 50GB -Storage -Clustering -ComputerName "csodedupnano" -AdministratorPassword $password -EnableRemoteManagementPort

    Set-Location C:\temp\NanoServer

    Remove-Item -Path .\base -Recurse -Force # Clean-up

    Dismount-DiskImage -ImagePath $iso.ImagePath # Clean-up

  2. Create a new Virtual Machine (VM) in Hyper-V or your hypervisor of choice and configure Nano Server as desired. If you are not familiar with creating a VM in Hyper-V, this guide should help get you started. You may also wish to join your new Nano Server VM to a domain before continuing: follow these steps for more information.
  3. Create a CimSession for Remoting to your Nano Server instance:

    # Assumption: authenticate as the built-in Administrator account created above with -AdministratorPassword.
    $cred = New-Object System.Management.Automation.PSCredential -ArgumentList "csodedupnano\Administrator", $password

    $cimsession = New-CimSession -ComputerName "csodedupnano" -Credential $cred

  4. Create a new partition for Dedup on your Nano Server VM instance:

    $osPartition = Get-Partition -CimSession $cimsession -DriveLetter C

    Resize-Partition -CimSession $cimsession -DiskNumber $osPartition.DiskNumber -PartitionNumber $osPartition.PartitionNumber -Size 5GB

    $dedupPartition = New-Partition -CimSession $cimsession -DiskNumber $OsPartition.DiskNumber -UseMaximumSize -DriveLetter D

    Format-Volume -CimSession $cimsession -Partition $dedupPartition -FileSystem NTFS -NewFileSystemLabel "Shares" -UseLargeFRS

  5. Enable the Dedup feature on your Nano Server instance:

    Invoke-Command -ComputerName $cimsession.ComputerName -Credential $cred { dism /online /enable-feature /featurename:dedup-core /all }

    Enable-DedupVolume -CimSession $cimsession -Volume D: -UsageType Default
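
    Optionally, you can verify the result and kick off an optimization job over the same CIM session, for example:

    Get-DedupVolume -CimSession $cimsession -Volume D:

    Start-DedupJob -CimSession $cimsession -Volume D: -Type Optimization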

That’s it – you have now deployed Dedup on Nano Server! As always, please feel free to let us know if you have any questions, comments, or concerns with any of the content posted here either by commenting on this post, or sending the Dedup team a message directly at dedupfeedback@microsoft.com. Thanks!

DISKSPD now on GitHub, and the mysterious VMFLEET released


Hi folks, Ned here again with a brief announcement: DISKSPD, the Microsoft-built and recommended tool for testing and measuring your storage performance, is now available on GitHub under an MIT License. It can generate load, measure storage perf, and generally makes life easier when building and configuring storage with Windows Server. Quit using IOMETER and SQLIO, get with DISKSPD.
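
If you have not used DISKSPD before, a typical invocation looks something like this; the target path, file size, and thread counts are placeholders you should tune to your own hardware:

# Illustrative only: 120-second random test, 30% writes, 8K blocks, 8 threads with 8 outstanding IOs each, caching disabled, latency stats captured.
diskspd.exe -c50G -d120 -r -w30 -b8K -t8 -o8 -Sh -L E:\diskspd-test.dat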

But wait, there’s more! This GitHub project also contains VMFLEET. This set of scripts allows you to run tons of Windows Server 2016 Storage Spaces Direct hyper-converged guests – a “fleet of VMs”, if you will – that then run DISKSPD workloads inside them. You can control the behaviors, quantities, IO patterns, etc. of all the VMs through a master control script. No, not that Master Control.


Use VMFLEET for load and stress testing your servers. It’s an evolution of the same stuff we used at the Intel Developers Forum and Microsoft Ignite last year, and what we use internally. Only takes a few steps and you are grinding that storage. Check out the DOCX or PDF for more info.

For more on Storage Spaces Direct, get to http://aka.ms/s2d. Or just bother Claus on Twitter. He loves it, no matter what he says.

Until next time,

Ned “disk swabbie” Pyle
