Channel: Storage at Microsoft

A new user attribute for Work Folders server Url


Overview

To continue the blog post series on Work Folders, I’d like to talk about how to manage a new attribute storing the Sync Server Url as part of the user object in Active Directory Domain Services (AD DS). If you have seen the demo of the client setup in this video [00:08:20], it shows that a user can simply use their email address to configure Work Folders. The magic here is to use the user email address to construct a Url that a Work Folders client can query from the sync server.

For example, if the user email is Joe@contoso.com, the Work Folders client will build the Url as https://workfolders.contoso.com, and use that as the Url to establish the communication with the Work Folders server. The server then queries AD DS for the new user attribute to figure out the server location of the user. This process is called auto-discovery.
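Conceptually (this is an illustrative sketch, not the client's actual implementation), the client derives the URL from the email address like this:

$email = "Joe@contoso.com"
$domain = $email.Split("@")[1]
$url = "https://workfolders.$domain"

The resulting https://workfolders.contoso.com is only the starting point; the server then uses the new user attribute to locate the user's actual sync server.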

With auto-discovery, the only piece of information users need to set up Work Folders is their email address. Admins simply configure the user attribute. When a user needs to be moved to another server, the admin can update the user attribute and remove the user's access to the old server; at that point, the user is automatically redirected to the new server. Auto-discovery keeps client setup simple when multiple sync servers are deployed.

Although this attribute makes discovery really simple, it is not a must-have attribute. There are a couple of cases where you won't need to configure this attribute or extend the schema to use it:

  • If you have a single-server deployment, the user will get a valid sync share when querying the sync server.
  • If you want to manage users per sync server, i.e., send a server Url to each user so that the user connects to the server directly without going through auto-discovery.

I’m not going to get too deep with the discovery process in this post – instead I’ll focus on the management of this new attribute: msDS-SyncServerUrl.

Schema extension

The new attribute is included in AD DS with Windows Server 2012 R2, but you don't need to upgrade your domain controllers to get it. If you are running a previously released version of Windows Server, you can simply run Adprep.exe from a Windows Server 2012 R2 member server to extend the AD schema with this attribute. For more information about adprep, take a look at http://technet.microsoft.com/en-us/library/dd464018(v=WS.10).aspx
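For reference, a typical run looks like the following (this example assumes the Windows Server 2012 R2 installation media is mounted as D:; your path may differ):

D:\support\adprep\adprep.exe /forestprep

Note that /forestprep must be run by a user who is a member of the Schema Admins and Enterprise Admins groups.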

Delegation

By default, only members of the Domain Admins or the Enterprise Admins group can modify the user attribute. For Work Folders management, the file server admins should have permissions to modify the user attribute.

Let’s take a look at how you could accomplish this.

Using UI

1. In Active Directory Administration Center, create a file server admin security group (for example, fsadmin).

2. Launch the Delegation of Control Wizard by right-clicking the domain root node.


3. Add FSadmins to the wizard:


4. Select the option to create a custom task to delegate:


5. Select the option to delegate control to only user objects:


6. Specify msDS-SyncServerUrl as the property to be managed by FSAdmins:


7. Complete the wizard.

Using the command line

DsAcls.exe is a tool you can use to modify the permissions on the user object for a given attribute. To perform the same delegation, you can run the following command:

DsAcls dc=contoso,dc=com /I:S /G "Contoso\FSAdmin:RPWP;msDS-SyncServerUrl;user"

The command above grants the Contoso\FSAdmin group Read Property and Write Property (RPWP) rights on the "msDS-SyncServerUrl" attribute of user objects. Replace FSAdmin with the name of the security group you created to manage this user attribute.

A few notes

1. Since the ACL will be applied to existing user objects, the operation may take a while if a large number of user objects is present in the domain.

2. If you have multiple domains in the forest, you need to repeat the steps to delegate the permission for each domain.

3. After the delegation, all members of FSAdmin will be able to modify the msDS-SyncServerUrl attribute of new or existing user objects.

Modify user attribute

Now that the FSAdmins have permissions to modify msDS-SyncServerUrl, they can change the value of this attribute in a number of ways:

Using the UI

1. Open ADSI Edit from the Server Manager -> Tools menu.

2. Connect to the Default naming context by right-clicking the ADSI Edit node, and then selecting Connect to…


3. Select the user, right-click the user object, and then click Properties:


4. Navigate to the msDS-SyncServerUrl property, and click Edit:


5. Enter the Url value for this user, click Add, and then OK.

Using Windows PowerShell

You can use Set-ADObject cmdlet to set the user property:

Get-ADUser <username> | Set-ADObject -Replace @{"msDS-SyncServerUrl" = "<Url String>"}

For example, I can use the following cmdlet to set the attribute for user “Sally”:

Get-ADUser Sally | Set-ADObject -Replace @{"msDS-SyncServerUrl"="https://sync1.contoso.com"}

You can also verify the setting by running the following cmdlet:

Get-ADUser Sally | Get-ADObject -Properties "msDS-SyncServerUrl"
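If you need to set the attribute for many users at once, for example all members of a security group that sync to the same server, you can combine these cmdlets (the group name HRUsers below is just an example):

Get-ADGroupMember HRUsers | Get-ADUser | Set-ADObject -Replace @{"msDS-SyncServerUrl"="https://sync1.contoso.com"}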

Using an Ldf file

If you are familiar with using ldf files to manage AD objects, you can also create an ldf file to do the same as above:

dn: CN=Sally,CN=Users,DC=Contoso,DC=COM
changetype: modify
add: msDS-SyncServerUrl
msDS-SyncServerUrl: https://sync1.contoso.com
-
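You can then apply the change with Ldifde.exe from an elevated command prompt (the file name is just an example):

ldifde -i -f SetSyncUrl.ldf

The -i switch enables import mode; without it, ldifde performs an export.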

Troubleshooting

After the Work Folders server authenticates the user, the server queries the user attribute using the Local System account. If the server fails to query AD DS, it logs an event on the server. Below are some troubleshooting tips:

1. Check the network: see if the server has network access to a domain controller (DC).

2. Check the status of AD DS: make sure the AD DS is healthy and that a DC is online.

3. Check whether the file server has read permissions to the attribute.

Checking for server permission

By default, file servers have read permissions to the user attributes in AD DS. In some deployments, however, IT admins may have explicitly blocked this, and the query will fail. The steps below show how to verify whether the server has read permission to this user attribute:

1. Open ADSI Edit

2. Right-click "ADSI Edit", and select "Connect to…"

3. Select “Schema” naming context:


4. Find the ms-DS-SyncServerUrl attribute, and open the properties page:


5. Go to the "Security" page, and click "Advanced":


6. Go to the "Effective Access" page, click "Select a user", and make sure Computers is included in the object types:


7. Enter the sync server name in the object picker, then click on “View effective access” button:

8. Make sure the sync server’s machine account can read the user properties:


On clusters, check the machine account for the cluster nodes - not the cluster VCO, as the access is done using the local system account of the physical machines.

Related links

The following are related resources about Work Folders:

Work Folders Overview

Introduction of Work Folders on Windows Server 2012 R2

Work Folders Test lab deployment

Certificate management for Work Folders


Windows Server 2012 R2 – Resolving Port Conflict with IIS Websites and Work Folders


Hi All, my name is Bill and I work in the Windows Server - File Services group.  One of my main responsibilities is to enhance the Server Manager user interface.  We have just delivered Work Folders as part of the Windows Server 2012 R2 release.  I have been following the forum, where there have been several questions about Work Folders and port conflicts with other IIS websites.  For this reason, I'm posting this blog for guidance.

Covered in this Article:

  • Diagnosing port conflicts between Work Folders and Windows Server Essentials or other web applications
  • Changing the port configuration in Work Folders
  • In-place upgrade from Windows Server Essentials to Windows Server Standard
  • Guidance for using Hyper-V to run Work Folders and Windows Server Essentials concurrently with their default configurations

Sections:

  1. PROBLEM STATEMENT
  2. OVERVIEW
  3. DIAGNOSING WORK FOLDERS AND WINDOWS ESSENTIALS PORT CONFLICTS
  4. CHANGING WORK FOLDERS CONFIGURATION
  5. NO CONFIGURATION CHANGES FOR BOTH FEATURES
  6. SUMMARY 

PROBLEM STATEMENT

Using any web application alongside Work Folders may create port conflicts between the web application and Work Folders.  By default, Work Folders uses ports 443 (HTTPS) and 80 (HTTP).  Most web applications use the same well-known ports.  In the specific case of Windows Server Essentials and Work Folders, both features use the same default ports.  The first feature to initialize the ports will own them exclusively.  This creates a port conflict for one of the features, depending on startup order and how the features were configured.

OVERVIEW

Work Folders is available in Windows Server 2012 R2 Essentials as part of the File and Storage Services role. Work Folders uses the IIS Hostable Web Core feature and all management is performed via the Work Folders canvas in Server Manager as well as via Windows PowerShell cmdlets.  Windows Server Essentials is managed via its dashboard and the IIS Management UX.  Both products assume exclusive access of the SSL port (443) and HTTP port (80).  This is the default configuration for both products.

The administrator has the ability to change both feature configurations when both products are enabled. Resolving the port conflict allows both products to be installed on Windows Server 2012 R2 Essentials.  If the administrator does not want to change the default ports, they have the option of enabling either the Windows Server Essentials feature or Work Folders, at their discretion based on business need.

If the administrator would like to change the ports on either feature, they need to open the firewall on the server for the specific ports they defined for the feature.  This can be accomplished by navigating to Control Panel and modifying the Windows Firewall configuration.  Further work is necessary in collaboration with a network administrator to configure the routers as well.  This document will not cover network configuration.

See: http://msdn.microsoft.com/en-us/library/bb909657(v=vs.90).aspx 
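For example, if you assign HTTPS port 12345 to one of the features, an inbound firewall rule can be added from an elevated command prompt like this (the rule name is arbitrary):

netsh advfirewall firewall add rule name="Work Folders HTTPS 12345" dir=in action=allow protocol=TCP localport=12345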

DIAGNOSING WORK FOLDERS AND WINDOWS SERVER ESSENTIALS PORT CONFLICTS

In the event that both features are enabled on the same server with the default port configuration, the behavior may be subtle and only one feature will work.  In the case of Windows Server 2012 R2 Essentials, Windows Server Essentials is enabled out of the box.  This means the ports will have been configured and IIS will own them.  When you enable Work Folders, the installation will succeed, but Server Manager may not be able to manage the Work Folders feature on the Windows Server Essentials server.  If the administrator navigates to the SERVICES primary tile, they will see the following:

 


 
 
The Sync Share Service will not start if both ports defined in its configuration are being used by another process.  This is a clear indication that the default ports are not available to Work Folders.  If, by chance, one of the ports is available, the Sync Share Service will become operational and there will be no indication of an error.

Please note that if port 443 is used by another process, the Work Folders service will start and appear operational, but SSL traffic will not be directed to Work Folders.  SSL (port 443) is the default secure port used by Work Folders.  The administrator would have to look at the port definition in the file c:\windows\system32\SyncShareSvc.config and compare it with the configuration of the websites defined in the IIS UX.  Once they check the port information in IIS, they can assess the conflict.

Using Event Viewer to view SyncShareSvc errors

When both ports are unavailable, the following error can be found in the system event log.

Using Event Viewer (eventvwr.msc), navigate to Windows Logs -> System.  The error should be from the Service Control Manager, in the form: "The Sync Share Service terminated with the following service-specific error: Cannot create a file when a file already exists".  This is the generic message when both ports are unavailable.

  

Using IIS PowerShell cmdlets “Get-WebBinding” to list port bindings

Get-WebBinding is a handy command for showing IIS website port bindings on your server.  In this particular case we want to see all the IIS website bindings active on your server.

> Get-WebBinding

Running the command above will give you the following output:

Example 1 - both ports in use by IIS website:

The Work Folders SyncShareSvc will not start because both default ports are being used by IIS.

 

Example 2 – one port used by IIS website – SSL PORT:

As mentioned in the previous section, if Work Folders has access to one port, the SyncShareSvc service will come up.  Work Folders uses port 443 as the default.  In example 2, the Work Folders service would start and look operational, but the output of Get-WebBinding would show the administrator that an IIS website owns port 443, so Work Folders would not function as defined in the default configuration.

If neither port is in use by another web application, the list above would be empty. 

CHANGING WORK FOLDERS CONFIGURATION

In Server Manager, navigate to the SERVICES tile and locate SyncShareSvc.  Verify it is stopped; if it is not, select SyncShareSvc and stop it.

Navigate to the directory on the server where the Work Folders feature is enabled:

>cd c:\windows\system32

Edit the file with your favorite editor (file name = SyncShareSvc.config)

Locate the bindings section in the file and change the port to your designated value.
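As a rough sketch only (the exact contents of SyncShareSvc.config may differ between builds), the bindings resemble standard IIS site bindings, where the port number is the value to change:

<binding protocol="http" bindingInformation="*:80:" />
<binding protocol="https" bindingInformation="*:443:" />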

 

For this example, you want to change the SSL port from 443 to 12345.  Change the port number and close the file.  Because the sync service runs as LOCAL SERVICE rather than under the system designation, it does not have the privilege to listen on ports other than the defaults.  For this reason, the administrator has to run another command.  In an elevated command window, type the following:

Netsh http add urlacl url=https://*:12345/ user="NT Authority\LOCAL SERVICE"

 

Navigate to SERVICES tile in Server Manager and start the service SyncShareSvc.

Since the Work Folders client configuration defaults to HTTPS=443 or HTTP=80, additional configuration is needed to override the default ports.  The administrator will need to change the URL used for connecting to the Windows Server hosting the client's sync share.  Normally, all that would be necessary is the URL of the server.  Since the port has changed, the URL needs an additional element: a colon followed by the port number (":#"), for example https://sync1.contoso.com:12345.  This number matches the configuration in the SyncShareSvc.config file on the server.  See the example of the PC client configuration below:

 

  

NOTE: When the administrator changes the default ports for Work Folders, clients cannot use the auto-discovery process.  The administrator can communicate the new URL using Group Policy or a standard email containing the URL and the new port definition.

 

IIS References for Configuration Changes

For Windows Server Essentials port configuration, see the Windows Server Essentials documentation on using the IIS management UX.

http://www.iis.net/configreference/system.applicationhost/sites/site/bindings/binding

  

NO CONFIGURATION CHANGES FOR BOTH FEATURES

The administrator has another option for running both Windows Server Essentials and Work Folders on the same server.  There are posts online that recommend an in-place license upgrade from Windows Server Essentials to Windows Server Standard.  This has a twofold benefit: it allows for greater usage of Windows Server Essentials, and the license includes two Hyper-V virtual machines.  The administrator would then disable Windows Server Essentials on the main host and use the two Hyper-V VMs, one for each feature: Windows Server Essentials in one VM and Work Folders in the other.  Both can use their default configurations and work concurrently on the single host.

You can upgrade in place from Windows Server 2012 R2 Essentials to Windows Server 2012 R2 Standard.  Standard is the only in-place upgrade target; you cannot use the command below to upgrade to Storage Server, Datacenter, etc.  The command for upgrading from Windows Server 2012 R2 Essentials to Windows Server 2012 R2 Standard is:

 dism /online /set-edition:ServerStandard /accepteula /productkey:<Product Key>


SUMMARY

There are several ways to configure Work Folders in an environment that already has established web applications.  You have the ability to change the ports of either application.  In the case of an IIS application, you can use the existing IIS UX; in the case of Work Folders, you can follow this guide.  The administrator also has the option to run Work Folders in a separate VM, which has the benefit of leaving the current configuration as is and installing Work Folders with default settings.

 

Monitoring Windows Server 2012 R2 Work Folders Deployments.


Overview

Work Folders is a new functionality introduced in Windows Server 2012 R2. It enables information workers to sync their work files between their devices. This functionality is powered by the Work Folders service, which can be enabled as a Windows Server 2012 R2 File Services role service.

Work Folders relies on a set of core infrastructure services such as storage, file systems, and networks, and on enterprise infrastructure solutions such as Active Directory Domain Services (AD DS), Active Directory Federation Services, Web Application Proxy, and more. Any problem with those infrastructure components, or a misconfiguration of Work Folders, might eventually lead to a degraded state or a complete unavailability of the Work Folders service. Not having the Work Folders service running properly might impact users' ability to sync files between their devices.

Just like other enterprise solutions, the Work Folders service comes with a set of logging and monitoring tools that allow IT Pros to identify, understand, and solve issues or errors that the system is facing.

This blog post covers several monitoring and logging solutions that facilitate early identification of Work Folders service issues and also help in understanding the root cause that instigated the problem.

 

Monitoring Work Folders Operational health with Server Manager and Event Viewer

Server Manager

Server Manager in most cases will be the best starting point to understand the health status of the Work Folders service. The File and Storage Services tiles display server level information such as running services status and related events. There is also a specific canvas for Work Folders which provides sync share and Work Folders-specific information.

To see the Work Folders service status and related events, open Server Manager and navigate to the servers canvas through "Files and Storage Services" -> "Servers". The tiles in this canvas (as shown in image 1) display service health and events related to Work Folders and other File and Storage Services.
The Work Folders service name is "Windows Sync Share", and any events related to Work Folders will show with a "Microsoft-Windows-SyncShare" source (Work Folders events are explained later in this document).

 


Image 1 - In this example, we can see that the Sync Share service is running properly, but there was a Work Folders error trying to access the file system as shown in the events tile. (This specific issue can be caused by lack of access permissions or physical disk access issues)

 

To view specific sync shares information, in Server Manager, go to “Files and Storage Services” -> “Work Folders”. This Work Folders canvas displays related information such as the file system location of the sync share and the users that are mapped to this sync share. It also provides important information about the volumes and file systems that the sync share resides on. This canvas is a good view to spot storage and file system related issues such as low disk space that might impact the Work Folders directories

 

 
Image 2 – The Work Folders canvas shows sync shares information. In this example, the “HRWorkFolders” which resides on the G:\hrworkfolders share is selected. Once selected, the other tiles on this canvas show additional information for the selected sync share. This includes the list of users that are mapped to that sync share (managed by security groups), the volume information for the sync share, and the quota settings.

 

The sync shares (master) tile supports selecting multiple sync shares (hold the CTRL key and select more sync shares, or press CTRL-A to select all shares in the tile). When multiple sync shares are selected, the related tiles switch to a multiple-objects view, as shown in image 3 below. This view is useful for getting a broader view of the sync shares, the amount of remaining space on their respective volumes, and any quota thresholds that might be reached.

 

 
Image 3 – The primary sync share tile allows multiple sync shares selection. Related tiles react accordingly by showing multiple rows of volume and quota information as well. We can see in this example that the “HRWorkFolders” quota is low on free space and should be extended.

 

 

The tiles described above are useful in displaying the system's status, but if many volumes and drives are used for Work Folders, information rows that display low quota or low disk space might not stand out. One way to easily spot volumes or quotas that are almost full is to sort the free space and capacity columns so that the ones with the least remaining space are listed on top (by clicking on the column title). Another way is to use the tiles' built-in filter boxes. Image 4 below shows a filter configured to only show sync shares hosted on volumes with less than 10GB of available space. These filters can also be saved for future use.

 
Image 4 – Volumes tile on the Work Folders canvas set with a filter to only show volumes with less than 10GB of free space.

 

It is also possible, from the Work Folders canvas, to drill down even further and get status information for a specific user across their different Work Folders devices. This can be done by selecting the appropriate user from the users tile, and selecting the Properties context menu item (as seen in images 5 and 6 below). This view provides more information on the user's devices and can be used to identify issues with a specific user's devices.


Image 5 – Work Folders user context menu

 

The Properties dialog presents information about the user's Work Folders location, the devices that run Work Folders, their last sync date, and more.

 


Image 6 – Work Folders status for Sally. This dialog displays sync information for Sally's different devices.

 

 

Event Viewer

The Work Folders service writes operational information, warnings, and error events to the Microsoft-Windows-SyncShare/Operational channel. This channel contains informational-level events, such as the creation of a user sync share folder, and warnings about system health. It also logs errors that describe critical issues that need to be addressed, such as the service not being able to access the file system.

There is also a Microsoft-Windows-SyncShare/Reporting channel that logs successful user sync actions. In this reporting channel, each logged event represents a successful sync action by a device and records the size of the sync set, the number of files in the sync set, and device information such as OS version and type. These events can be used to understand the overall health of the system and collected for understanding Work Folders usage trends.

Listing and collecting the reporting logs can be done either through System Reports in Operations Manager or, alternatively, by running PowerShell scripts that collect the data and export it to a CSV file, which can then be analyzed in Microsoft Excel. (See an example below in the PowerShell section.)
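As a sketch of the PowerShell alternative, the following collects the reporting channel events and exports them to a CSV file (the output path is just an example):

Get-WinEvent -LogName Microsoft-Windows-SyncShare/Reporting | Select-Object TimeCreated, Id, Message | Export-Csv C:\Reports\WorkFoldersReporting.csv -NoTypeInformation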


There are two main tools that can be used to read these events.

In Server Manager, go to "Files and Storage Services" -> "Servers" and browse the events tile (see image 7). Note that this tile displays Work Folders-related events as well as other File and Storage Services events, and it lists only the operational channel events (reporting channel events are not shown).

 
Image 7 – Work Folders Events are shown in the Files and Storage Services/Servers  canvas

Another way to view the logs is by using Event Viewer. Event Viewer can be opened from different locations: by typing "eventvwr" in a command or PowerShell console, or by using the Tools menu in Server Manager (shown in the upper-right corner).
Once Event Viewer is open, use the tree in the left pane to navigate to "Applications and Services Logs" -> "Microsoft" -> "Windows" -> SyncShare (see image 8). Underneath the SyncShare node, you'll find the operational and reporting channels. Clicking on each one of them will bring up the list of events (see image 9).

 


Image 8 – Work Folders Sync Share events location in event viewer

 

 
Image 9 – Work Folders user events showing in the Event Viewer pane

 

Monitoring Work Folders with PowerShell

The Work Folders service on Windows Server 2012 R2 comes with a supporting PowerShell module and cmdlets. (For the full list of Work Folders cmdlets, run gcm -m SyncShare in a PowerShell console.)

Just like in the examples shown above, where Server Manager was used to monitor and extract the information, the Work Folders cmdlets provide a way to retrieve sync share and user information. This can be used by administrators for interactive monitoring sessions or for automation within PowerShell scripts.

Here are a few PowerShell examples that provide Work Folders sync share and user status information.

Get-SyncShare - The Get-SyncShare cmdlet provides information on sync shares, including the file system location, the list of security groups, and more.


From these objects, the StagingFolder and Path properties can be extracted and checked for availability and overall health.
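For example, to list those properties for all sync shares:

Get-SyncShare | Format-List Name, Path, StagingFolder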

 

 

Get-SyncUserStatus - Similar to the user's Properties window described above in the Server Manager section, this cmdlet provides Work Folders user information, including the user name, the devices that the user is using, last successful connections, and more.  Running this cmdlet requires providing the specific user name and sync share.


Here is an example that lists the devices Sally is using with Work Folders and their status:
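The command below assumes the sync share is named HRWorkFolders, as in the Server Manager examples above:

Get-SyncUserStatus -User Sally -SyncShare HRWorkFolders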


The results show useful information about the user's devices, their OS configuration, and the last successful sync time.

 

Get-Service - The Sync Share service (named SyncShareSvc) status can be read by using PowerShell's generic Get-Service cmdlet.
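For example:

Get-Service SyncShareSvc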


In the above example, we can see that the service is in the "Running" state; "Stopped" means that the service is not running.

Events – PowerShell also provides an easy way of listing Work Folders events, from either the operational or the reporting channel. Here are a few examples:

1) Listing errors from the operational channel (in this example, the issues are reported on a system where one of the disks hosting the Work Folders directory was intentionally removed)

2) Listing successful sync events from the Work Folders reporting channel
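Sketches of the two queries (using the channel names as registered on the server; the error filter uses event level 2, which maps to Error):

Get-WinEvent -LogName Microsoft-Windows-SyncShare/Operational | Where-Object { $_.Level -eq 2 }

Get-WinEvent -LogName Microsoft-Windows-SyncShare/Reporting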

 

Other Work Folders Monitoring Tools and Solutions

While this post focuses on the Work Folders Server Manager tiles and PowerShell cmdlets, there are more useful tools that can be used to monitor a Work Folders deployment.

Work Folders Best Practice Analyzer

Windows Server 2012 R2 comes with a built-in set of Work Folders BPA rules. Though the intent of BPA rules is to alert on configuration issues, they can also be used to routinely monitor and identify issues that might impact the Work Folders service.

More details on Work Folders BPA rules can be found here

Work Folders System Center Operations Manager File Services Management Pack.

A new File and Storage Services management pack for Windows Server 2012 R2 should come out shortly after Windows Server 2012 R2 general availability. This pack will also include Work Folders service monitoring capabilities that can be used with System Center Operations Manager.

More information on System Center Operations Manager is available here.

Performance monitoring

Work Folders doesn't introduce any new performance counters; however, since the Work Folders service is hosted by a web service, setting up performance monitoring on the web service instances can provide valuable information on client Work Folders data transfers, queues, and more. Furthermore, performance monitors can also be set on network, CPU, and other system components that are essential to the Work Folders service.

More information on Performance monitors can be found here


Work Folders Supporting Systems monitoring (AD, ADFS, Web Application Proxy and SSL Certificates)

As mentioned above, Work Folders relies on a set of enterprise solutions to work properly. These include, but are not limited to, Active Directory, Active Directory Federation Services, Web Application Proxy, certificate expiration dates, and more. Any impact on any one of these services might impact the Work Folders service. To sustain a long-running Work Folders service, it is also recommended that each of the supporting components be monitored.

More information on certificate management and monitoring certificate expirations can be found here.

 

Other Work Folders Related Links

 

 
 

Performance Considerations for Work Folders Deployments


Hi all,

Here is another great Work Folders blog post that shares information about Work Folders performance in large-scale deployments. The content below was compiled and written by Sundar Srinivasan, one of the software engineers on the Work Folders product team.


Overview

One of the exciting new features introduced in Windows Server 2012 R2 is Work Folders. Please refer to this Technet article and this blog post for broader details about the Work Folders feature and its deployment. During the development of this feature, I worked on testing the performance of Work Folders on a typical enterprise-scale deployment. In this blog post, we are going to look at the performance and scalability aspects of Work Folders.

There are three sync scenarios that we modeled for this experiment. Once a user configures Work Folders on her device, she tends to move all her work-related files into Work Folders, triggering a sync that populates the data in the sync share set up on her organization's Windows Server 2012 R2 file server with the Work Folders feature enabled. This scenario is termed "first-time sync".

The second scenario is the user adding new devices. As the user configures Work Folders on her personal devices, like laptops, a Surface Pro, or a Surface RT, a sync will be triggered from each of these devices to sync her work-related files to them.

Beyond this point, the user changes her Work Folders data on a daily basis, on one or more devices, for example by editing Word documents and creating new PowerPoint files. Any such change triggers what we call an "ongoing sync" to the server. For the purpose of measuring the scalability of the file server enabled for synchronizing Work Folders data, which we will refer to as the sync server, we are very much interested in studying how many concurrent ongoing sync sessions the server can handle without affecting the experience of an individual user.

The first section of this blog post explains the topology that we used for simulating 5,000 users, along with a heuristic model for deriving the number of concurrent sync sessions the sync server serves on average at any given time. Then we look at the results of our experiments and which resources become bottlenecks. This post also points to some Windows performance counters that can help in analyzing the performance of the sync server in production, and gives some configuration guidance.

Hardware Topology

We used a mid-level 2-node failover cluster server. The storage for the server is provided by external JBODs of SAS drives connected through an LSI controller. Our server hardware configuration looks like this:

 

We set up the machine with the data of 5,000 users on the sync server, distributed across 10 Sync Shares. To simulate multiple devices with Work Folders configured, we used a small set of entry-level servers, each simulating multiple sync sessions as if they originated from multiple devices. The server has a 10Gbps connection to the hub, while all the clients have 1Gbps connections.

The hardware topology that we used in this test is as shown in the below diagram:

 

The list of clients on the left-hand side of the diagram represents the entry-level servers that each simulate multiple sync sessions, as if they originated from multiple devices. We want to measure the experience of users when the server and the infrastructure are loaded with multiple sync sessions, so we use two desktop-class devices and measure the time taken for changes on device 1 to reach device 2. The time taken for the data to sync to the second device should not differ significantly from the time taken when the server is idle.

The dataset that we used for each user is about 2 GB in size, with 2,500 files and 500 folders. Based on our background knowledge of the Offline Files feature, the distribution of file sizes in a typical dataset looks like this:

 

Although 99% of the sync users fall under this category, we also included some users to test our dataset scale limits. About 1% of the users have a 100GB dataset, with some users having 250,000 files and 500,000 folders and 10,000 files in a single folder, and with some users with files being as large as 10GB.
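For a rough sense of scale, the user distribution above implies roughly 15 TB of user data on the server. The calculation below is our back-of-the-envelope arithmetic, not a figure from the original test report:

```python
# Back-of-the-envelope total for the dataset described above: 99% of the
# 5,000 users with ~2 GB each, plus the 1% with ~100 GB each. This is our
# arithmetic, not a figure from the original test report.

TOTAL_USERS = 5000

typical_gb = (TOTAL_USERS * 99 // 100) * 2    # 4,950 users x 2 GB
large_gb = (TOTAL_USERS // 100) * 100         # 50 users x 100 GB

total_gb = typical_gb + large_gb
print(total_gb)  # 14900 GB, i.e. roughly 15 TB of user data on the server
```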

We tested with 5000 users in this setup across ten Sync Shares. In order to test the scale limit on number of users supported per Sync Share, we created a single Sync Share with 1000 users and distributed the other 4000 users across nine Sync Shares: five Sync Shares with 400 users each and four Sync Shares with 500 users each.

Modeling the sync sessions on server

We have developed a heuristic model to calculate the number of simultaneous sync sessions on the sync server by looking at the distribution of user profiles. In Work Folders, any file or folder change made locally triggers a sync. Even if no change is made on a particular device with Work Folders configured, the device still polls the server once every 10 minutes to determine whether any new files or folders, or new versions of existing files, are available on the server for download. So we only need to model user activity within this 10-minute polling interval.

In this model, we assume that each user has 3 devices with Work Folders configured; of these, on average 1.5 devices per user are active, and we assume that on average 20% of users are offline. The remaining users are classified into 5 profiles based on their usage of Work Folders, ranging from inactive users whose devices are online but passively receiving changes, to hyperactive users who make bulk file copies into Work Folders:

 

Our model is based on educated assumptions about the distribution of users, drawn from our background knowledge of the user data synchronization scenario. We also derive the number of simultaneous sessions on the server from our empirical knowledge of the duration of a sync session and a polling session on the server. We applied this model to test whether the mid-level server described above can support 5,000 users without noticeably degrading the sync experience of individual users. The results of our experiment are described in the section below.
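The heuristic model above can be sketched in code. Note that the profile fractions, sync counts per interval, and session durations below are illustrative placeholders (the actual distribution table is not reproduced here); only the 5,000-user count, the 1.5 active devices per user, the 20% offline assumption, and the 10-minute polling interval come from the text:

```python
# A sketch of the heuristic model described above. The PROFILES table and
# the poll duration are illustrative placeholders, not the actual figures;
# the user count, devices per user, offline fraction, and polling interval
# come from the text.

TOTAL_USERS = 5000
ACTIVE_DEVICES_PER_USER = 1.5   # average active devices per user
OFFLINE_FRACTION = 0.20         # fraction of users assumed offline
POLL_INTERVAL_S = 600           # every device polls once per 10 minutes
POLL_DURATION_S = 1             # assumed server-side cost of one poll

# profile -> (fraction of online devices, sync sessions per poll interval,
#             assumed average sync session duration in seconds)
PROFILES = {
    "inactive":    (0.40, 0, 0),
    "light":       (0.30, 1, 5),
    "moderate":    (0.20, 2, 10),
    "active":      (0.08, 4, 20),
    "hyperactive": (0.02, 6, 60),
}

def concurrent_sessions():
    """Average number of simultaneous sessions the sync server handles."""
    online = TOTAL_USERS * (1 - OFFLINE_FRACTION) * ACTIVE_DEVICES_PER_USER
    # Polling sessions: every online device polls once per interval.
    total = online * POLL_DURATION_S / POLL_INTERVAL_S
    # Sync sessions, weighted by how busy each profile is.
    for fraction, syncs, duration_s in PROFILES.values():
        total += online * fraction * syncs * duration_s / POLL_INTERVAL_S
    return total
```

Plugging in a real distribution simply means replacing the PROFILES table; the structure of the calculation stays the same.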

Results & Analysis

In this section, we will discuss the results and compare them with the results when the server is idle. We will also show how to identify a network bottleneck during synchronization.

To measure the sync time, our reference user falls in the class of a very active user who has made 10 independent file changes and created two new files in a new folder. The sync times for the reference user are shown below:

 

The chart above shows the time taken for the local changes to be uploaded to the server and for the remote changes to be downloaded to a different device. To put this in the context of user experience, a common concern in any sync system is a simple question posed by the user: “Will my files be available on all my devices immediately?” From the Work Folders point of view, the answer depends on several factors, such as the frequency of synchronization, the amount of data transferred to effect that sync, and the speed of the network connection from the device to the server. So the end-user experience of Work Folders is defined in terms of the perceived delay after which a user can expect the data to be available across multiple devices. Even as the number of concurrent sync sessions to the server increases, the user should not experience a significant delay in changes from one device synchronizing to her other devices.

The following graph shows the time in seconds the reference user has to wait for their changes (10 independent file changes and two new files created in a new folder) to be available across multiple devices.

 

As we can see from the graph above, there is less than a 5% impact on the user experience.

However, the previous chart shows that the upload and download times increase from 2 seconds to 12 seconds when the sync server is loaded with many sync and polling requests. We went on to study which resources become a bottleneck on the server, causing this increase in sync time.
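Before digging into server resources, it is worth reconciling the two observations: a 2-to-12-second transfer slowdown still registers as only a small end-to-end impact because the perceived delay is dominated by the remote device waiting for its next 10-minute poll. The averaging model below is our simplifying interpretation, not a measured breakdown:

```python
# Reconciling the two observations above (our simplified interpretation,
# not a measured breakdown): the end-to-end delay a user perceives is
# dominated by the remote device waiting for its next 10-minute poll, so
# a 2 s -> 12 s transfer slowdown moves the total only by a few percent.

POLL_INTERVAL_S = 600  # remote devices poll the server every 10 minutes

def end_to_end_delay(transfer_s):
    # upload to the server, plus the average wait for the remote device's
    # next poll, plus the download to the remote device
    return transfer_s + POLL_INTERVAL_S / 2 + transfer_s

idle_s = end_to_end_delay(2)     # idle server: 2-second transfers
loaded_s = end_to_end_delay(12)  # loaded server: 12-second transfers

impact = (loaded_s - idle_s) / idle_s
print(f"{impact:.1%}")  # a single-digit-percent increase under this model
```

Under these simplified assumptions the increase stays in the single digits, broadly consistent with the measured figure.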

The following chart gives the CPU utilization on the server during this process.

 

The CPU utilization remains below 36% with a median value of 28%, so we do not think the CPU was a bottleneck at this stage. Memory used by the process was 12 GB, but this was primarily due to the database residing in memory for faster access. The server has 64 GB of RAM, so memory was not a bottleneck either. In other testing, we saw that if the operating system did come under memory pressure, the memory used by the database for caching would decrease.

The network and disk utilization is shown in the table below.

 

From the table above, it is evident that the network becomes a bottleneck as the number of concurrent sync sessions and polling sessions increases. As already explained in the hardware topology, the client machines have 1Gbps NICs, whereas the sync server has a 10Gbps NIC hooked up to a 10Gbps port of the same network hub. As network usage increases, the clients experience delays in communication. However, we noticed that none of the HTTP requests were rejected.

From this experiment, we found that the performance of sync operations at high scale is IO-bound. It is possible to achieve this level of performance even on an entry-level server with 4 GB of memory and a quad-core, single-socket CPU, as long as the network has sufficient bandwidth and the storage subsystem has sufficient throughput.

Runtime Performance Monitoring and Best Practices

Although Work Folders does not introduce any new performance counters, the existing IIS, network, and disk performance counters in Windows Server 2012 R2 can be used to understand which resource is becoming overused or a bottleneck. We have listed some important performance counters in the table below.

 

Work Folders on the server is highly network and IO intensive. On the sync server, Work Folders runs as a separate request queue named SyncSharePool. The table above contains the counters specific to the SyncSharePool that are most useful. If the rejection rate goes above 0%, or if the number of rejected requests shoots up, the network is clearly bottlenecked. Apart from these counters, other generic IIS counters and network counters can provide information about network utilization.

The \HTTP Service Url Groups(*)\CurrentConnections counter gives the current connections to the server, which is the combined count of both sync sessions and polling sessions. If network utilization reaches 100%, you will notice the output queue length increasing as packets get queued up. If the number of dropped packets is greater than zero, the network is getting choked.

In general, if there are multiple Sync Shares on a single server, it is advisable to configure the Sync Shares on different volumes on different virtual disks if possible. Configuring Sync Shares on different volumes translates the incoming requests to different Sync Shares into file IO on multiple virtual disks. We have also listed some counters specific to logical disks that can be used to determine whether the incoming requests are targeted at them. Whenever the average disk queue length goes above 10, disk IO latency will increase.

Conclusion

When deploying Work Folders across the enterprise, it is a good idea to plan the rollout in phases. This ensures that users do not all try to enroll and upload their data at once, causing heavy network traffic.

During this test, we wanted to explore whether a mid-level server can support 5,000 users. Based on the heuristic model, we classified the users under different profiles and simulated sync sessions accordingly. We monitored the server’s vital statistics to ensure its continuous stability, and we measured the sync time from desktop-class hardware to understand the user experience when the server is supporting 5,000 users. With this experiment, we found that a mid-level server with the same or similar configuration as described above should be able to support 5,000 users without affecting the user experience. We found that network utilization on the server averaged 60% and that sync slowed down by about 10 seconds due to the load. The total time taken for a typical user’s change to appear across all their devices is not affected by more than 5%, even with a busy server handling the sync data of 5,000 users.

Our study of performance counters shows that the performance of sync operations at high scale is IO-bound. It is possible to achieve this level of performance even on an entry-level server with 4 GB of memory and a quad-core, single-socket CPU, as long as the network has sufficient bandwidth and the storage subsystem has sufficient IO throughput.

 

We hope you find this information helpful when you plan and deploy Work Folders in your organization.

Sundar Srinivasan

Work Folders on Clusters


In the previous blog post, I provided a step-by-step guide to setting up Work Folders in a lab environment using the Preview release of Windows Server 2012 R2. With the GA release, Work Folders leverages Windows failover clustering to increase availability. For general details about failover clusters, see Failover Clustering.

In this blog post, I’ll show you how to configure the Work Folders on failover clusters using the latest Windows Server 2012 R2 release.

Overview

Work Folders is a new technology in the File and Storage Services role. Work Folders enables users to sync their work data across their devices using a backend file server so that their data is available to them wherever they are. In the Windows Server 2012 R2 release, Work Folders is supported on traditional failover file server clusters.

Pre-requisite

In this blog post, I’ll provide you with the step by step guide to set up Work Folders in a highly available failover configuration, and discuss the differences in managing a clustered Work Folders server vs. a standalone Work Folders server. This post assumes that you have a good understanding of Work Folders, and know how to configure Work Folders on a standalone server. If not, refer to the Work Folders Overview as well as Designing a Work Folders Implementation and Deploying Work Folders.

To follow this step by step guide, you will need to have the following computers ready:

  • Domain controller: a server with the Active directory Domain Services role enabled, and configured with a domain (for example: Contoso.org)
  • A two node server cluster, joined to the domain (Contoso.org). Each node runs Windows Server 2012 R2 build. (for example: HAWorkFolders)
  • One client computer running Windows 8.1 or Windows RT 8.1

The computers can be physical or VMs. This blog post will not cover the steps to build the two node clusters; to set up a failover cluster, see Create a Failover Cluster.

Express lane

This section provides a checklist for setting up sync shares on clusters; detailed procedures are covered in the later sections.

  1. Enable the Work Folders role on both cluster nodes
  2. Create a file server role in the cluster
  3. Configure certificates
  4. Create sync share
  5. Client setup

Enable Work Folders role service

The Work Folders role service needs to be enabled on all the cluster nodes. In this blog post, since I have a two-node cluster, I will enable the role service on both nodes by running the following Windows PowerShell cmdlet on each node:

PS C:\> Add-WindowsFeature FS-SyncShareService

To check if the computer has Work Folders enabled, run

PS C:\> Get-WindowsFeature FS-SyncShareService

Note: If Work Folders is installed, FS-SyncShareService will be marked with an [X].

Creating a clustered file server

Highly available services are created by using the High Availability Wizard in Failover Cluster Manager. Since Work Folders is part of the File and Storage Services server role and you have already enabled Work Folders on each node, you can simply create the file server in the cluster and configure sync shares on the highly available file server. The clustered file server is also referred to as a clustered file server instance, which is often called a cluster name account or virtual computer object (VCO).

Note: Work Folders isn’t supported on Scale-Out File Servers.

To create a clustered file server:

1. Open Failover Cluster Manager

2. Right click Roles, and then click Configure Roles…


3. Click File Server.


4. Choose the “File Server for general use” option:


5. Enter a name for the clustered file server instance (VCO). This will be the sync server name used by Work Folders client computers and devices during sync.


6. Select the disk on which users’ data will be stored.


7. Confirm your selections to proceed with the role creation.


8. After you’re finished, the sync server appears in Failover Cluster Manager.


Configure certificates

For general certificate management, see this post. This section focuses on certificate configuration in a cluster using a self-signed certificate. Note: the certificate must be configured on each of the nodes.

1. Create a self-signed certificate on one of the cluster nodes by using Windows PowerShell:

PS C:\> New-SelfSignedCertificate –DnsName “SyncServer.Contoso.org”,”WorkFolders.Contoso.org” –CertStoreLocation cert:\Localmachine\My

Note: the DNS name should be the name you used for the clustered file server instance (VCO) for the sync server. For multiple VCOs, the certificate must include all of the VCO names.

2. Export the certificate to a file

To export the certificate with a password, you can use either certmgr.msc or Windows PowerShell. I’ll show you the cmdlet version referenced on this page:

$cert = Get-ChildItem –Path Cert:\LocalMachine\My\<thumbprint>

$type = [System.Security.Cryptography.X509Certificates.X509ContentType]::pfx

$pass = read-host "Password:" -assecurestring

$bytes = $cert.export($type, $pass)

[System.IO.File]::WriteAllBytes("ServerCert.pfx", $bytes)

3. Copy, then import the certificate on the other node

$pass = read-host “Password:” -AsSecureString

Import-pfxCertificate –FilePath servercert.pfx –CertStoreLocation cert:\LocalMachine\My –Password $pass

4. Configure SSL binding on both cluster nodes

netsh http add sslcert ipport=0.0.0.0:443 certhash=<Cert thumbprint> appid={CE66697B-3AA0-49D1-BDBD-A25C8359FD5D} certstorename=MY

Note: you can get the certificate thumbprint by running:

PS C:\>Get-ChildItem –Path cert:\LocalMachine\My

5. Copy, then import the certificate on the client computer. Note: this step is only necessary for a self-signed certificate, which client computers don’t trust by default. Skip this step if you are using a trusted certificate.

$pass = read-host “Password:” -AsSecureString

Import-pfxCertificate –FilePath servercert.pfx –CertStoreLocation cert:\LocalMachine\Root –Password $pass

Create a sync share

In a cluster, the sync share is created on the node that owns the disk resource. In other words:

1. You cannot create sync shares on disks that are not shared across all the nodes.

2. You cannot create sync shares on the non-owner node of the clustered file server instance (VCO).

For example, to set up a sync share on disk E, which is owned by SyncNode1, I run the following cmdlet on SyncNode1:

New-SyncShare HAShare -path E:\hashare -user Contoso\fin

Note: For more information about creating sync shares, see Deploying Work Folders.

Client setup

  • Make sure that the client can trust the certificate used on the server. If you are using the self-signed certificate, you need to import it on the client. (see step 5 in Configure certificates)
  • When the client connects to the sync servers, it connects to the clustered file server instance (e.g. https://SyncServer.Contoso.org). The client should not connect directly to any physical nodes.
  • The Work Folders setup experience for users is the same as if Work Folders was hosted on a single server, as long as the administrator has set up Work Folders properly with a publicly accessible URL pointing to the VCO name.
  • Sync happens in the background, and there is no difference between standalone and clustered Work Folders servers for ongoing sync, although users can’t sync while the server is in the process of failing over; sync continues after the failover has completed.

Managing clustered sync servers

Most of the management experiences are the same between standalone and clustered sync servers. In this section, I’d like to list out a few differences:

  • All the nodes in the cluster need to have “Work Folders” enabled.
  • When more than one clustered sync server is hosted by a single cluster, you can manage sync shares with the –Scope parameter in the cmdlet. For example, Get-SyncShare –Scope <sync server VCO> limits the results to the sync shares configured on a specific clustered sync server (VCO). You need to run the cmdlet on the cluster node that is hosting the VCO.
  • Using Set-SyncServerSetting on a standalone server applies the settings to a single sync server; using the cmdlet on a clustered sync server applies the settings to all the clustered sync servers (VCOs) in the cluster.
  • When managing sync shares (such as new, set, remove), you must run the cmdlet or wizard on the node that owns the disk resources that are hosting the sync share data.
  • The SSL certificate is managed on each node in the cluster. The certificate must be installed and configured with SSL binding on each node; certificate renewal is also per node.
  • The certificate CN must contain the sync server (VCO) name, not the cluster node names.
  • When the client connects to the server, it connects using the sync server (VCO) name.

Conclusion

Work Folders supports failover clustering configuration in Windows Server 2012 R2. Most of the management experience is similar to the standalone servers, and the management should be scoped to the clustered sync server (VCO). Certificate management however is per-node.

I hope this blog post helps you understand how to set up a highly available Work Folders server and gives you insight into the differences in management between a standalone and a highly available configuration. If you have questions not covered here, please raise them in the comments so that I can address them in upcoming posts.

Using DFS Replication Clone Feature to prepare 100TB of data in 3 days (A Test Perspective)


I’m Tsan Zheng, a Senior Test Lead on the DFS team. If you’ve used DFSR (DFS Replication), you’re probably aware that until recently the largest amount of data we had tested replication with was 10 TB. A few years ago, that was a lot of data, but now, not so much.

In this post, I’m going to talk about how we verified preparing 100 TB of data for replication in 3 days. With Windows Server 2012 R2, we introduced the ability to export a clone of the DFSR database, which dramatically reduces the amount of time needed to get preseeded data ready for replication. Now it takes only roughly 3 days to get 100 TB of data ready for replication with Windows Server 2012 R2. On Windows Server 2012, we think this would have taken more than 300 days, based on our testing of 100 GB of data, which took 8 hours to prepare on Windows Server 2012 (we decided not to wait around for 300 days). In this blog post, we’ll show you how we tested the replication of 100 TB of data on Windows Server 2012 R2.
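The “more than 300 days” estimate is a straightforward linear extrapolation from the 100 GB measurement; a quick check of the arithmetic, assuming prep time scales linearly with data size:

```python
# Checking the "more than 300 days" extrapolation: preparing 100 GB took
# about 8 hours on Windows Server 2012, and we assume prep time scales
# linearly with data size (our arithmetic).

HOURS_PER_100_GB = 8
SCALE = (100 * 1024) // 100  # 100 TB is 1024 times the 100 GB sample

days = HOURS_PER_100_GB * SCALE / 24
print(round(days))  # ~341 days, hence "more than 300 days"
```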

First of all, let’s look at what 100 TB of data could mean: it could be around 340,000 8-megapixel pictures (that’s 10 years of pictures if you take 100 pictures every day), or 3,400 Blu-ray-quality full-length movies, or billions of office documents, or 5,000 decent-sized Exchange mailbox files, or 2,000 decent-sized virtual machine files. That’s a lot of data, even in 2013. If you’re using 2 TB hard drives, you need at least 120 of them just to set up two servers to handle this amount of data. We should clarify here that the absolute performance of cloning a DFSR dataset is largely dependent on the number of files and directories, not the actual size of the files (if we use verification level 0 or 1, which don’t involve verifying full file hashes).

In designing the test, we not only needed to make sure we set up things correctly, but we also needed to make sure that replication happens as expected after the initial preparation of the dataset - you don’t want data corruption when replication is being set up! Preparing the data for replication also must go fast if we’re going to prep 100 TB of data in a reasonable amount of time.

Now let’s look at our test setup. As mentioned earlier, you need some storage. We deployed two virtual machines, each with 8 GB of RAM and data volumes on a Storage Spaces simple space (in a production environment you’d probably want to use a mirror space for resiliency). The data volumes were served by a single-node Scale-Out File Server, which provided continuous availability. The Hyper-V host (Fujitsu PRIMERGY CX250, 2.5 GHz, 6 cores, 128 GB RAM) and the file server (HP Mach1 Server, 24 GB RAM, Xeon 2.27 GHz, 8 cores) were connected using a dual-10GbE network to ensure near-local IO performance. We used 120 drives (2 TB each) in 2 Raid Inc. JBODs for the file server.

In order to get several performance data points from a DFSR perspective (DFSR uses one database per volume), we used the following volume sizes, totaling 100 TB on both ends. We used a synthetic file generator to create ~92 TB of unique data; the remaining 8 TB was human-generated data harvested from internal file sets. It’s difficult to have that much real data...not counting VHDx files and peeking into personal archives, of course! We used the robocopy commands provided by DFSR cloning to pre-seed the second member.
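To give a feel for how such a dataset can be produced, here is a minimal sketch of a synthetic file generator. Everything in it (directory layout, size distribution, naming) is an illustrative assumption, not the actual tool the DFSR team used:

```python
# A minimal sketch of a synthetic file generator of the kind used to create
# unique test data. The directory layout, size distribution, and naming are
# illustrative assumptions, not the DFSR team's actual tool (requires
# Python 3.9+ for Random.randbytes).

import os
import random

def generate_dataset(root, n_files, n_dirs, seed=42):
    """Create n_files of pseudo-random data spread across n_dirs folders."""
    rng = random.Random(seed)
    dirs = [os.path.join(root, f"dir{i:05d}") for i in range(n_dirs)]
    for d in dirs:
        os.makedirs(d, exist_ok=True)
    for i in range(n_files):
        # Skew toward small files, with the occasional larger one.
        size_bytes = rng.choice([1, 4, 16, 64, 1024]) * 1024
        path = os.path.join(rng.choice(dirs), f"file{i:07d}.dat")
        with open(path, "wb") as f:
            f.write(rng.randbytes(size_bytes))  # unique, incompressible data
    return n_dirs
```

Because the content is pseudo-random, it neither compresses nor dedupes, which keeps the cloning and pre-seeding workload honest.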

Volume | Size   | Number of files | Number of folders | Number of Replicated Folders
F      | 64 TB  | 68,296,288      | 2,686,455         | 1
G      | 18 TB  | 21,467,280      | 70,400            | 18
H      | 10 TB  | 14,510,974      | 39,122            | 10
I      | 7 TB   | 1,141,246       | 31,134            | 7
J      | 1 TB   | 1,877,651       | 7,448             | 1
TOTAL  | 100 TB | 107,293,439     | 2,834,559         |

In a nutshell, the following diagram shows the test topology we used.


Now that the storage and file sets are ready, let’s look at the verification we did during the Export -> Pre-seed -> Import sequence.

  • No errors in the DFSR event log (checked in Event Viewer).
  • No skipped or invalid records in the DFSR debug log (checked by searching for “[ERROR]”).
  • Replication works fine after cloning, verified by probing each replicated folder with canary files to check convergence.
  • No mismatched records after cloning, verified by checking the DFSR debug log and the DFSR event log.
  • Time taken for cloning, measured using the Windows PowerShell cmdlet Measure-Command:
    • Measure-Command { Export-DfsrClone…}
    • Measure-Command { Import-DfsrClone…}

The following table and graphs summarize the results that one of our testers, Jialin Le, collected on a build that was very close to the RTM build of Windows Server 2012 R2. Given the nature of DFSR clone verification levels, we don’t recommend using validation level 2, which involves full file hashing and is too time-consuming for a large dataset like this one!

Note: the performance of level 0 and level 1 validation depends largely on the count of files and directories rather than the absolute file size. This explains why the 64 TB volume takes proportionally more time to export than the 18 TB volume: the former has proportionally more files and folders.
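This proportionality is easy to see by reducing the level-0 export measurements to a per-item rate (our derivation from the reported numbers):

```python
# Reducing the level-0 export measurements to a per-item rate (our
# derivation from the reported numbers) shows that export time tracks the
# count of files and folders rather than the volume size.

measurements = {
    # volume: (files, folders, level-0 export time in minutes)
    "64 TB": (68_296_288, 2_686_455, 394),
    "18 TB": (21_467_280, 70_400, 111),
    "10 TB": (14_510_974, 39_122, 73),
}

for volume, (files, folders, minutes) in measurements.items():
    rate = (files + folders) / minutes  # items exported per minute
    print(f"{volume}: {rate / 1000:.0f}k items/min")
# All three volumes land in the same ballpark (roughly 180-200k items/min),
# even though their sizes differ by more than a factor of six.
```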


Validation Level | Volume Size  | Time to Export (minutes) | Time to Import (minutes)
0 – None         | 64 TB        | 394                      | 2129
                 | 18 TB        | 111                      | 1229
                 | 10 TB        | 73                       | 368
                 | 7 TB         | 70                       | 253
                 | 1 TB         | 11                       | 17
                 | Sum (100 TB) | 659 (0.4 days)           | 3996 (2.8 days)
1 – Basic        | 64 TB        | 1043                     | 2701
                 | 18 TB        | 211                      | 1840
                 | 10 TB        | 168                      | 577
                 | 7 TB         | 203                      | 442
                 | 1 TB         | 17                       | 37
                 | Sum (100 TB) | 1642 (1.1 days)          | 5597 (3.8 days)

From the results above, you can see that getting DFSR ready to replicate a large dataset (totaling 100 TB) has become much more practical!

I hope you have enjoyed learning more about how we test DFSR features here at Microsoft.

- Tsan Zheng

SMI-S Requirements are live on MSDN


Whew! It took a while but the SMI-S Requirements for Windows Server 2012 and SC VMM are live on MSDN. Just click here. These will carry forward to the next releases.

Introducing Work Folders on Windows Server 2012 R2


This post is a part of the nine-part “What’s New in Windows Server & System Center 2012 R2” series that is featured on Brad Anderson’s In the Cloud blog.  Today’s blog post covers Windows Server 2012 R2 Work Folders and how it applies to Brad’s larger topic of “People-centric IT.”  To read that post and see the other technologies discussed, read today’s post:  “Making Device Users Productive and Protecting Corporate Information.” 

Hello, my name is Nir Ben-Zvi and I work in the Windows Server team. I’m very excited to introduce to you Windows Server Work Folders, which is a new file server based sync solution in Windows Server 2012 R2 and Windows 8.1.

During Windows Server 2012 R2 planning, we noticed two converging trends around managing and protecting corporate data:

· Users: “I need to work from anywhere on my different devices”

· IT: “I’d like to empower my Information Workers (users) while reducing information leakage and keeping control of the corporate data that is sprawled across devices”

Work Folders enables IT administrators to provide Information Workers the ability to sync their work data on all their devices wherever they are while remaining in compliance with company policies. This is done by syncing user data from devices to on-premise file servers, which are now extended to include a new sync protocol.

Work Folders as Experienced by an Information Worker

To show how this works, here’s an example of how an information worker, Joe, might use Work Folders to separate his work data from his personal data while having the ability to work from any device: When Joe saves a document on his work computer in the Work Folders directory, the document is synced to an IT-controlled file server. When Joe returns home, he can pick up his Surface RT (where the document is already synced) and head to the beach. He can work on the document offline, and when he returns home the document is synced back with the file server and all of the changes are available to him the next day when he returns to the office.

Look familiar? Indeed, this is how consumer storage services such as SkyDrive and business collaboration services such as SkyDrive Pro work. We kept the user interaction simple and familiar so that little user education is required. The biggest difference from SkyDrive or SkyDrive Pro is that the centralized storage for Work Folders is an on-premise file server running Windows Server 2012 R2, but we’ll get to that a little later in this post.

Work Folders as Experienced by an IT Admin

IT administrators can use Work Folders to gain more control over corporate data and user devices, and to centralize user work data so that they can apply the appropriate processes and tools to keep their company in compliance. This can range from simply keeping a copy of the data if the user leaves the company, to a wide range of capabilities such as backup, retention, classification, and automated encryption.

For example, when a user authors a sensitive document in Work Folders on their work PC, it gets synced to the file server. The file server can then automatically classify the document based on content (if configured using File Server Resource Manager) and encrypt the document using Windows Rights Management Services before syncing it back to all the user’s devices. This allows a seamless experience for the user while keeping the organization in compliance and preventing leakage of sensitive information.

For more details about Work Folders deployment, see the blog post Deploying Work Folders in your lab and the Channel 9 video.

Work Folders Capabilities

Work Folders is part of the People-Centric IT pillar in Windows Server 2012 R2. This pillar includes other important capabilities such as Workplace Join, Web Application Proxy and Device management. While these capabilities are integrated, they are also independent so that you can use them as standalone solutions to get immediate value as you deploy each capability.

Our main design focus around Work Folders was to keep it simple for the Information Workers while allowing IT administrators to use the familiar low cost, high scale Windows file server with all the rich functionality available on the backend from high availability to comprehensive data management.

Here is some of the functionality that Work Folders includes:

  • Provide a single point of access to work files on a user’s work and personal PCs and devices (Windows 8.1 and Windows RT 8.1, with immediate plans to follow up with Windows 7 and iPad support and other devices likely in the future)
  • Access work files while offline and sync with the central file server when the PC or device next has Internet or network connectivity
  • Maintain data encryption in transit as well as at rest on devices and allow corporate data wipe through device management services such as Windows Intune
  • Use existing file server management technologies such as file classification and folder quotas to manage user data
  • Specify security policies to instruct user PCs and devices to encrypt Work Folders and use a lock screen password, for example
  • Use Failover Clustering with Work Folders to provide a high-availability solution

I should mention a few scoping decisions that we made in this release so that we could complete Work Folders in the short release cycle for Windows Server 2012 R2:

  • Backend storage is provided by on-premises file servers, and Work Folders data must be stored on local storage on the file server (e.g.: data can be on local shares on a Windows Server 2012 R2)
  • Users sync to their own folder on the file server - there is no support for syncing arbitrary file shares (e.g.: sync the sales demos share to my device)
  • Work Folders doesn’t provide collaboration functionality such as sharing sync files or folders with other users (we recommend using SkyDrive Pro if you need document collaboration features)

How Work Folders Compares to Other Microsoft Sync Technologies

Finally, I’d like to discuss how Work Folders fits in with other sync solutions that Microsoft provides, mainly SkyDrive and SkyDrive Pro. As described above, Work Folders provides a solution for customers that prefer to use traditional Windows file servers as the backend storage for the corporate data synced from users’ devices. This works well for organizations that are already using a home folders or folder redirection solution, or customers that already have an established practice for managing and storing user data on file servers.

For customers that use SharePoint, SkyDrive Pro is a great solution that provides additional functionality ranging from rich collaboration features to cloud service availability using Office 365.

The table below shows the different options:

|  | Consumer / personal data | Individual work data | Team / group work data | Personal devices | Access protocol | Data location |
| --- | --- | --- | --- | --- | --- | --- |
| SkyDrive | X |  |  | X | HTTPS | Public cloud |
| SkyDrive Pro |  | X | X | X | HTTPS | SharePoint / Office 365 |
| Work Folders |  | X |  | X | HTTPS | File server |
| Folder Redirection / Client-Side Caching |  | X |  |  | SMB (only from on-prem or using VPN) | File server |

More Information

For more details about Work Folders, you can view the following presentations that are available online:

· Work Folders Overview on TechNet

· Deploying Work Folders in your lab

· Work Folders overview in TechEd 2013

· Work Folders deep dive in TechEd 2013

 

To see all of the posts in this series, check out the What’s New in Windows Server & System Center 2012 R2 archive.


Work Folders Test Lab Deployment


Hi, everyone. I’m Jane Yan, a PM on the Work Folders team. I presented a session on Work Folders with Adam Skewgar, and demoed how it works both on the client and server (A Deep Dive into the New Windows Server Data Sync Solution). This blog post will show you, step by step, how to build the demo environment shown in the sessions. Please note that this guide uses the Preview release build; the experience will differ slightly in the final RTM release.

Overview

Work Folders is a new feature introduced in Windows Server 2012 R2 that enables users to access their work-related files on any device configured with Work Folders, regardless of whether the device is joined to a domain and whether it is connected directly to the corpnet or over the internet. Work Folders is available in Windows Server 2012 R2 Preview and Windows 8.1 Preview. This step-by-step guide uses the Preview release for both the server and the client.

Topology

The simplest setup for lab test of Work Folders requires the following computers or VMs:

  1. Active Directory Domain Services domain controller (DC)
  2. File server running Windows Server 2012 R2
  3. 2 client PCs running Windows 8.1 or Windows RT 8.1 (to observe documents sync between 2 devices)

Because VMs are more convenient for lab testing, I’ll provide the end-to-end setup using VMs. This test environment does not require you to publish any URLs for Work Folders.

Express lane

This section provides a checklist for setting up the lab environment; detailed procedures are covered in the later sections.

VM setup

This section assumes you have knowledge of setting up VMs, a domain controller, and a virtual network. By the end of this section, you will have a domain set up with the server and one client machine joined to the domain.

Configure Network

In the Hyper-V Manager console, create a Virtual Switch marked as Private.

Configure the VMs to use the Private network.

DC setup

  1. Create a VM using Windows Server 2012 R2.
  2. Rename the VM to DC.
  3. Configure the IP of the server as 10.10.1.10.
  4. After the VM setup, open Server Manager, and then add the following roles:
  • Active Directory Domain Services
  • DHCP Server (Note: this role is optional. You can also configure a static IP for each VM without enabling DHCP)
  • DNS Server
  5. Complete the wizard, then click the “Promote this server to a domain controller” link.

  clip_image002

  6. Use the wizard to create a new forest named “Contoso.com”, and configure the DC appropriately.
  7. Add a new scope in DHCP, so that other machines on the network can get an IP address automatically. Make sure all the machines are on the same subnet and point to 10.10.1.10 as the DNS server. Note: this is optional; you can also manually configure the other machines with static IPs.

    Server setup

    1. Create a VM using Windows Server 2012 R2.
    2. Rename the VM to SyncSvr.
    3. Join the SyncSvr machine to the domain Contoso.com.
    4. Optionally, if you use a static IP, configure the IP on this server as 10.10.1.12.

    Client setup

    1. Create 2 VMs using Windows 8.1.
    2. Rename VM1 to OfficePC.
    3. Optionally, if you use a static IP, configure the IP on this client as 10.10.1.15.
    4. Rename VM2 to HomePC.
    5. Optionally, if you use a static IP, configure the IP on this client as 10.10.1.16.
    6. Join OfficePC to the contoso.com domain.

    User and Security group creation

    Work Folders is configured for domain users, so you need to create a few test users in AD. For testing purposes, let’s create 10 domain users (U1 to U10).

    We recommend controlling access to Work Folders through security groups. Let’s create one group named “Sales”, with scope “Global” and type “Security”, and add the 10 domain users (U1 to U10) to the Sales security group.
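    If you prefer to script this step, the users and the group can be created with the Active Directory module for Windows PowerShell. This is a minimal lab-only sketch; the password and UPN suffix below are assumptions for this test environment:

    ```powershell
    # Lab-only sketch: create test users U1..U10 and a Sales security group.
    # The password and UPN suffix are assumptions for this lab.
    Import-Module ActiveDirectory

    $password = ConvertTo-SecureString "P@ssw0rd!" -AsPlainText -Force

    1..10 | ForEach-Object {
        New-ADUser -Name "U$_" -SamAccountName "U$_" `
            -UserPrincipalName "U$_@contoso.com" `
            -AccountPassword $password -Enabled $true
    }

    # Global security group, then add U1..U10 as members
    New-ADGroup -Name "Sales" -GroupScope Global -GroupCategory Security
    Add-ADGroupMember -Identity "Sales" -Members (1..10 | ForEach-Object { "U$_" })
    ```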

    Sync Server configuration

    Now the fun starts. For all the operations performed on the server, I’ll show the steps in the Server Manager UI, followed by the equivalent Windows PowerShell cmdlet.

    Enabling the Work Folders role

    Using Server Manager UI

      1. Launch the Server Manager on SyncSvr.
      2. On the dashboard, click “Add roles and features”.
      3. Follow the wizard; on the Server Roles selection page, choose Work Folders under File and Storage Services:

    clip_image003

    4. Complete the wizard.

    Using PowerShell cmdlet

    PS C:\> Add-WindowsFeature FS-SyncShareService

    Create Sync Share

    Using Server Manager UI

    A sync share is the unit of management on a sync server. It maps a local path, under which all the user folders are hosted, to a group of users who can access the sync share.

    Steps

    Screenshots

    Description

    Launch New Sync Share Wizard from Server Manager

    clip_image002[5]

     

    Provide the local path where user folders will be created under, type C:\SalesShare, and then click Next.

    clip_image004

    There are 2 options to specify the local path:

    If you have a local path that is already configured as an SMB share, such as a folder redirection share, you can simply select the first option, “Select by file share”. For example, in the screenshot shown above, I had one SMB share created on this server pointing to the C:\finshare location; I can enable the path C:\finshare for sync simply by selecting the first radio button.

    If you are creating the sync share first (without an SMB share configured), you can provide the local path directly with the second option, which is what I’m using in the demo.

    Select the user folder format, choose the default user alias, and click Next.

    clip_image006[5]

    There are 2 options you can select from the UI:

    • Using user alias. This is selected by default, and it is compatible with other technologies such as folder redirection or home folders.

    clip_image007

    • Using alias@domain. This option ensures the uniqueness of the folder name for users across domains.

    clip_image008

     

    The admin can choose a subfolder such as “Documents” as the only folder to be synced to devices, leaving the other folders still functioning with Folder Redirection. To do so, check “Sync only the following subfolder”.

    clip_image009[4]

    Sync only the following subfolder: By default, all the folders/files under the user folder will be synced to the devices. This checkbox allows the admin to specify a single subfolder to be synced to the devices. For example, the user folder might contain the following folders as part of a Folder Redirection deployment:

    clip_image010

    Provide the sync share name and description (optional), and click Next

    clip_image012

     

    Assign security groups for sync share access by clicking the Add button and entering the Sales security group (created in section User and Security group creation). Then click Next

    clip_image014

    By default, the admin will not be able to access the user data on the server. If you want to have admin access to user data, uncheck the “Disable inherited permissions and grant users exclusive access to their files” checkbox.

    As part of this assignment, the share creation will modify the NTFS folder permissions on the sync root, to ensure that users in the security group can create their own folders and access documents only in their own folder.

    The table below shows the permissions which will be configured as part of the sync share creation:

    | User account | Minimum permissions required (configured by Sync Share setup) |
    | --- | --- |
    | Creator/Owner | Full control, subfolders and files only |
    | Security group of users needing sync to the share | List Folder/Read data, Create Folders/Append data, Traverse folder/execute file, Read/Write attributes – this folder only |
    | Local system | Full control, this folder, subfolders and files |
    | Administrator | Read, this folder only |

     

    Define device policies, and then click Next.

    clip_image016[5]

    Encryption policies request that the documents in Work Folders on client devices be encrypted with the Enterprise ID. By default, the Enterprise ID is the user’s primary SMTP email address (that is, the proxyAddresses attribute of the user object in AD). Using a separate key to encrypt Work Folders ensures that personal documents on the same device are preserved if an admin wipes Work Folders from the device (for example, if the device is stolen).

    The password policy enforces the following configuration on user PCs and devices:

    • Minimum password length of 6
    • Autolock screen set to be 15 minutes or less
    • Maximum password retry of 10 or less

    If the device doesn’t meet the policy, the user will not be able to configure Work Folders.

    The policy enforcement on the client devices is not in the Preview release. It will be in the RTM release.

    Check the sync share settings, and click Create.

    clip_image018[5]

    Using PowerShell cmdlet

    PS C:\>New-SyncShare SalesShare –path C:\SalesShare –User Contoso\Sales -RequireEncryption $true –RequirePasswordAutoLock $true
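    To confirm the result, you can list the sync shares on the server with the Get-SyncShare cmdlet, which shows each share’s path, user groups, and policy settings:

    ```powershell
    # Verify the sync share created above
    PS C:\> Get-SyncShare SalesShare
    ```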

    Enable SMB access

    If you want to enable the sync share for SMB access, open File Explorer and navigate to “This PC”. Right-click the “SalesShare” folder, and select “Share with” -> “Specific people”. Add Contoso\Sales and change the permission level to “Read/Write”, as shown below:

    clip_image020

    Complete the UI by clicking the “Share” button.

    Now users can also access the data through the UNC path.

    Once the server is enabled for SMB access, it will check for data changes every 5 minutes by default. You can change the enumeration interval by running the following cmdlet on the server:

    PS C:\> Set-SyncServerSetting -MinimumChangeDetectionMins <NumberInMinutes>

    Each enumeration increases the server load, but changes made locally on the server or through SMB can only be detected at enumeration time, so this is a balancing act between how much change-detection delay you can tolerate and how much load the server can handle. Enumeration also gets more expensive as the number of files under the user folders increases. If you want to decrease the setting, make sure you test it on a server in your environment first. We are currently evaluating enumeration performance, and will post guidance in this area later. If you don’t want users to change files directly on the server or through SMB or NFS, you should consider disabling change detection on the server.

    Client setup

    Since we prepared 2 VMs as the client machines, you will need to repeat the following setup on both client machines.

    Lab testing specific settings

    Caution: The following regkey settings are only for lab testing, and should not be configured in production environment.

      1. Allow unsecure connection

    By default, the client always connects to the server using SSL, which requires the server to have an SSL certificate installed and configured. For lab testing, you can configure the client to use HTTP by running the following command on the client:

    Reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\WorkFolders /v AllowUnsecureConnection /t REG_DWORD /d 1

    Running over an unsecure connection is not recommended; a follow-up post will illustrate how to configure a certificate on the server.

    2. Converting from email address to server URL

    When the user enters an email address, such as Jane@contoso.com, the client constructs the URL https://WorkFolders.contoso.com and uses that URL to communicate with the server. In a production environment, you need to publish the URL through a reverse proxy so that clients can reach the server. In testing, we’ll bypass URL publication by configuring the following regkey:

    Reg add HKCU\Software\Microsoft\Windows\CurrentVersion\WorkFolders /v ServerUrl /t REG_SZ /d http://syncSvr.contoso.com

    With this key set, the client will ignore the email address the user entered, and use the URL in the regkey to establish the sync partnership.

    Also note that this key will not be present in the RTM release.
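    The auto-discovery URL construction itself is simple: take the domain part of the email address and prefix it with “workfolders.”. The function below only illustrates that convention; it is not the actual client code, and the function name is made up:

    ```powershell
    # Illustration only: how a Work Folders client derives the discovery URL
    # from a user's email address (e.g. Jane@contoso.com).
    function Get-WorkFoldersUrl([string]$EmailAddress) {
        $domain = $EmailAddress.Split('@')[1]
        return "https://workfolders.$domain"
    }

    Get-WorkFoldersUrl "Jane@contoso.com"   # https://workfolders.contoso.com
    ```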

    WorkFolders setup

    Steps

    Screenshots

    Description

    Users can find the setup link under Control Panel -> System and Security -> Work Folders

     

    clip_image021[1]

     

    Provide the user email address, and then click Next.

    clip_image022

    If the client machine is domain joined, the user will not be prompted for credentials.

    Specify where to store Work Folders on the device

    clip_image023[1]

    Users cannot change the Work Folder location in the preview release of Windows 8.1. This will be changed in the final RTM release.

    Consent to the device policy, and then click Setup Work Folders.

    clip_image024

     

    Work Folders is now configured on the device. You can open File Explorer to see Work Folders.

     

    clip_image026

     

    Once you have configured both client machines, users can access the documents under the Work Folders location from either device, and the documents will be kept in sync by Work Folders.

    Sync in action

    To test Work Folders, create a document (using Notepad or any other app) on one of the client machines and save it under the Work Folders location; then create another document on the other client machine and save it under Work Folders as well. In a few moments, you should see both documents synced to both client machines.

    clip_image027[1]

    In the Preview build, the client syncs with the server whenever there are local changes under Work Folders, and while the client is connected, the server also notifies it of any changes on the server. If the client has no local changes, it connects to the server every 10 minutes to ask for changes on the server. You can trigger a sync by creating or modifying a file under the Work Folders location on the device.

    Since the sync location was also enabled for SMB access, users can also view the data on computers without Work Folders by typing the UNC path in File Explorer:

    clip_image028

    Conclusion

    I hope this blog post helps you get started with Work Folders in your test labs. If you have questions not covered here, please raise them in the comments so that I can address them in upcoming posts. Also, here are some resources on this topic you will find helpful:

    PowerShell cmdlet reference: http://technet.microsoft.com/en-us/library/dn296644(v=wps.630).aspx

    - Jane

    Storage and File Services Powershell Cmdlets Quick Reference Card For Windows Server 2012 R2 [Preview Edition]


    Hi, my name is Roiy Zysman and I’m a senior Program Manager in the Hybrid Storage Services team.

    Last year, with the introduction of Windows Server 2012, we published a File and Storage Services PowerShell cmdlets quick reference sheet.
    The motivation for this reference card was to provide a set of common Windows PowerShell cmdlet examples that span across different File and Storage Services modules to help simplify performing common tasks.

    To further explain how this reference card can be used in real life, consider the following scenario:
    Amy, the file server administrator, is asked to create a new SMB share for the HR team. While she knows she can accomplish this task using Server Manager’s wizards and dialogs, she prefers to perform this task by running a set of commands on her Windows PowerShell console. From her previous experiences, she knows that this task would require her to execute cmdlets from several different File Services related modules. She’ll probably have to start with Storage cmdlets to provision pools, disks and a volume. Then she’ll use the SMB cmdlets that create an SMB share, followed by the File Server Resource Manager cmdlets to apply quotas, and Automatic Classification and Access-Denied Assistance settings. She then might use the DFS Namespaces cmdlets to add the new share to the org’s shares namespaces. Lastly, she’d probably optimize the storage capacity by using the Data Deduplication cmdlets to turn on data deduplication on the newly created volume. There are even more File and Storage Services modules that might be included in this scenario such as iSCSI Target Server, iSCSI Initiator as well as Failover Clustering if the request demands a robust and highly available file access solution.
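    As a rough illustration of Amy’s workflow, the cmdlet chain might look like the sketch below. The pool, disk, share, and quota-template names and sizes are all assumptions for illustration; consult each module’s documentation before running anything like this in production:

    ```powershell
    # Sketch of Amy's end-to-end task, one module at a time (names are made up).
    New-StoragePool -FriendlyName "HRPool" -StorageSubSystemFriendlyName "*Storage Spaces*" `
        -PhysicalDisks (Get-PhysicalDisk -CanPool $true)              # Storage module
    New-VirtualDisk -StoragePoolFriendlyName "HRPool" -FriendlyName "HRDisk" `
        -Size 500GB -ProvisioningType Thin -ResiliencySettingName Mirror
    # ...initialize, partition, and format the disk to get volume H:...
    New-SmbShare -Name "HR" -Path "H:\HR" -FullAccess "Contoso\HR"    # SMB module
    New-FsrmQuota -Path "H:\HR" -Template "5 GB Limit"                # FSRM module
    New-DfsnFolder -Path "\\contoso.com\shares\HR" -TargetPath "\\FileSvr\HR"  # DFSN module
    Enable-DedupVolume H: -UsageType Default                          # Dedup module
    ```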

    Remembering every cmdlet is not trivial, so to make life easier for Amy and other file server admins out there, we’ve collected and organized a set of File and Storage Services-related cmdlets into a printable quick reference guide that you can hang on your wall, place on your desk, or fold up and store in your coat pocket for impressing people at parties.

    We had a lot of feedback from customers that liked the previous version of our reference sheet, so we went ahead and produced a new reference card for the preview version of Windows Server 2012 R2.
    The new reference card has two new sections: one for DFS Replication and one for Work Folders, a new sync technology in Windows Server 2012 R2. It is also worth mentioning that there are new storage cmdlets to manage storage tiers.

    We hope you’ll find the updated version useful and we encourage you to download and print or save a copy to use while you're exploring the File and Storage Services world.

    [Download here]

     

     

    Thanks
    Roiy Zysman

     

     

    Extending Data Deduplication to new workloads in Windows Server 2012 R2


    This post is a part of the nine-part “What’s New in Windows Server & System Center 2012 R2” series that is featured on Brad Anderson’s In the Cloud blog.  Today’s blog post covers Data Deduplication and how it applies to the larger topic of “Transform the Datacenter.”  To read that post and see the other technologies discussed, read today’s post: “Delivering Infrastructure as a Service (IAAS).”  

    In Windows Server 2012 we introduced the new Data Deduplication feature, which quickly became one of the standard things to consider when deploying file servers. More space on existing hardware at no cost other than running Windows Server 2012? Seems like a pretty good deal.

    Not to mention we saw great space savings on various types of real-world data at rest. Some of the most common types of data include:

     

    These numbers are based on measuring the savings rates on various customer deployments of Data Deduplication on Windows Server 2012. However, we saw some interesting trends:

    • Customers were adjusting the default policies as to which files to optimize to include more data. By default, Data Deduplication only optimizes files that have not been modified in 5 days. Customers were setting it to optimize files older than 3 days and in many cases to optimize all files regardless of age.
    • Customers were attempting to optimize their running VHD libraries… which of course doesn’t quite work correctly

    In both cases we saw people trying to put more data under Data Deduplication to take better advantage of the huge savings seen on static VHD libraries. However, Data Deduplication in Windows Server 2012 was not really designed to deal with data that changes frequently or is in active use.

    The road to new workloads for Data Deduplication

    The customer feedback we were getting showed a clear need to reduce storage costs in private clouds (see http://blogs.technet.com/b/in_the_cloud/archive/2013/07/31/what-s-new-in-2012-r2-delivering-infrastructure-as-a-service.aspx for an overview of all the other new things around storage) and specifically to extend Data Deduplication for new workloads.

    Specifically we needed to start supporting storage of live VHDs for some scenarios.

    It turns out that there were a few key changes that had to be made to even consider using Data Deduplication for open files:

    • The read performance was pretty good already, but the write performance needed to be improved.
    • The speed at which Data Deduplication optimizes files needed to become faster to keep up with changes (churn) in files.
    • We had to allow open files (files actively being modified) to be optimized by Data Deduplication

    We also realized that all of this would take up resources on the server running Data Deduplication. If we were to run this on the same server as the VMs, then we’d be competing with them for resources. Especially memory. So we quickly came to the conclusion that we needed to separate out storage and computation nodes when Data Deduplication was involved with virtualization.

    Of course that meant we had to use a scale-out file share and therefore needed to support CSV volumes for deduplication.

    Then we came to the question of how fast do we have to get all of these things working to be successful? Well… as fast as possible. However, we know that Data Deduplication has to incur some costs. So we needed real goals. It turns out that deciding that you are fast enough for all virtualization scenarios is very difficult. So we decided to take a first step with a virtualization workload that was well understood:

    Data Deduplication in Windows Server 2012 R2 would support optimization of storage for Virtual Desktop Infrastructure (VDI) deployments as long as the storage and compute nodes were connected remotely.

    What’s new in Data Deduplication in Windows Server 2012 R2 Preview

    With the Windows Server 2012 R2 Preview, Data Deduplication is extended to the remote storage of the VDI workload:

     

    • CSV volume support
    • Faster deduplication of data
    • Deduplication of open (in use) files
    • Faster read/write performance of deduplicated files

     

    Is Hyper-V in general supported with a Deduplicated volume?

    We spent a lot of time to ensure that Data Deduplication performs correctly on general virtualization workloads. However, we focused our efforts to ensure that the performance of optimized files is adequate for VDI scenarios. For non-VDI scenarios (general Hyper-V VMs), we cannot provide the same performance guarantees.

    As a result, we do not support deduplication of arbitrary in use VHDs in Windows Server 2012 R2. However, since Data Deduplication is a core part of the storage stack, there is no explicit block in place that prevents it from being enabled on arbitrary workloads.

    What benefits do we get from using Data Deduplication with VDI?

    We will start with the easy one: You will save space! And of course, saving space translates into saving money. Deduplication rates for VDI deployments can range as high as 95% savings. This allows for deployments of SSD based volumes for VDI, leveraging all the improved IO characteristics while mitigating their low capacity.

    This also allows for simplification of the surrounding infrastructure such as JBODs, cooling, power, etc.

    Also, because Data Deduplication consolidates files, more efficient caching mechanisms are possible. This improves the IO characteristics of the storage subsystem for some types of operations. So not only does deduplication save money, it can make things go faster.

    As a result, we can often stretch the VM capacity of the storage subsystem without buying additional hardware or infrastructure.

    Wrap-up

    Data Deduplication in Windows Server 2012 R2 enables optimization of live VHDs for the VDI workloads and allows for deduplicated CSV volumes. It also significantly improves the performance of optimization as well as IO on optimized files. This will allow better utilization of existing storage subsystems for general file servers as well as for VDI storage and simplify future infrastructure investments.

    We hope you find these new capabilities as exciting as we find them and look forward to hearing from you.

    To see all of the posts in this series, check out the What’s New in Windows Server & System Center 2012 R2 archive.

    Deploying Data Deduplication for VDI storage in Windows Server 2012 R2


    This post is a part of the nine-part “What’s New in Windows Server & System Center 2012 R2” series that is featured on Brad Anderson’s In the Cloud blog.  Today’s blog post covers Data Deduplication and how it applies to the larger topic of “Transform the Datacenter.”  To read that post and see the other technologies discussed, read today’s post: “Delivering Infrastructure as a Service (IAAS).”

    With the Windows Server 2012 R2 Preview, Data Deduplication is extended to the remote storage of the VDI workload:

     

    • CSV volume support
    • Faster deduplication of data
    • Deduplication of open (in use) files
    • Faster read/write performance of deduplicated files

     

    See http://blogs.technet.com/b/filecab/archive/2013/07/31/extending-data-deduplication-to-new-workloads-in-windows-server-2012-r2.aspx for more details.

    Why do I want to use Data Deduplication with VDI?

    To start with: you will save space! Deduplication rates for VDI deployments can range as high as 95% savings. Of course that number will vary depending on the amount of user data, etc., and it will also change over the course of any one day.

    Data Deduplication optimizes files as a post-processing operation. That means that as data is added over the course of a day, it is not optimized immediately and takes up extra space on disk. Instead, the new data is processed by a background deduplication job. As a result, the optimization ratio of a VDI deployment will fluctuate a bit over the course of a day, depending on how much new data is added. By the time the next optimization completes, savings will be high again.

    Saving space is great on its own, but it has an interesting side effect. Volumes that were always too small but had other advantages suddenly become viable. One such example is SSD volumes. Traditionally, you had to deploy a great many of these drives to reach volume sizes viable for a VDI deployment, which was expensive not only for the disks but also for the additional JBODs, power, cooling, and so on. With Data Deduplication in the picture, SSD-based volumes can suddenly hold vastly more data, and we can finally utilize more of their IO capabilities without incurring additional infrastructure costs.

    Also, because Data Deduplication consolidates files, more efficient caching mechanisms are possible. This improves the IO characteristics of the storage subsystem for some types of operations.

    As a result, we can often stretch the VM capacity of the storage subsystem without buying additional hardware or infrastructure.

    How do I deploy VDI with Data Deduplication in Windows Server 2012 R2 Preview then?

    This turns out to be relatively straightforward, assuming you know how to set up VDI, of course. The generic VDI setup will not be covered here; rather, we will cover how Data Deduplication changes things. Let’s go through the steps:

    1. Machine deployment

    First and foremost, to deploy Data Deduplication with VDI, the storage and compute responsibilities must be provided by separate machines.

     

    The good news is that the Hyper-V and VDI infrastructure can remain as it is today. The setup and configuration of both is pretty much unaltered. The exception is that all VHD files for the VMs must be stored on a file server running Windows Server 2012 R2 Preview. The storage on that file server may be directly attached disks or provided by a SAN/iSCSI.

    In the interest of ensuring that storage stays available, the file server should ideally be clustered with CSV volumes providing the storage locations for the VHD files.
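    For the clustered configuration, an available cluster disk can be converted to a CSV volume directly from PowerShell. A minimal sketch, where the disk name is an assumption for your environment:

    ```powershell
    # Sketch: take an available cluster disk and turn it into a CSV volume.
    Add-ClusterSharedVolume -Name "Cluster Disk 2"
    # The volume then appears under C:\ClusterStorage\VolumeN on every node.
    ```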

    2. Configuring the File Server

    Create a new CSV volume on the File Server Cluster using your favorite tool (we would suggest System Center Virtual Machine Manager). Then enable Data Deduplication on that volume. This is very easy to do in PowerShell:

    Enable-DedupVolume C:\ClusterStorage\Volume1 –UsageType HyperV

    This is basically the same way Data Deduplication is enabled for a general file share, however it ensures that various advanced settings (such as whether open files should be optimized) are configured for the VDI workload.

    In the Windows Server 2012 R2 Preview, one additional step has to be done that will not be required in the future. The default policy for Data Deduplication is to optimize only files that are older than 3 days. This of course does not work for open VHD files, since they are constantly being updated. In the future, Data Deduplication will address this by enabling “Partial File Optimization” mode, in which it optimizes the parts of a file that are older than 3 days. To enable this mode in the Preview, run the following command:

    Set-DedupVolume C:\ClusterStorage\Volume1 –OptimizePartialFiles

    3. VDI deployment

    Deploy VDI VMs as normal using the new share as the storage location for VHDs.

    With one caveat.

    If you made a volume smaller than the amount of data you are about to deploy on it, you need some special handling. Data Deduplication runs as a post-processing operation.

    Let us say we want to deploy 120GB of VHD files (6 VHD files of 20GB each) onto a 60 GB volume with Data Deduplication enabled.

    To do this, deploy VMs onto the volume as they will fit leaving at least 10GB of space available. In this case, we would deploy 2 VMs (20GB + 20GB + 10GB < 60GB). Then run a manual deduplication optimization job:

    Start-DedupJob C:\ClusterStorage\Volume1 –Type Optimization

    Once this completes, deploy more VMs. Most likely, after the first optimization, there will be around 10GB of space used. That leaves room for another 2 VMs. Deploy these 2 VMs and repeat the optimization run.

    Repeat this procedure until all VMs are deployed. After this the default background deduplication job will handle future changes.
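
    If you are deploying many VMs, the batch-and-optimize procedure above can be scripted. Here is a minimal sketch in PowerShell; the VM names, VHD paths, and the New-VM parameters are illustrative assumptions, so adapt them to however your VDI tooling actually creates VMs:

    # Deploy VMs in batches that fit in the free space, optimizing between batches.
    # Names and paths below are hypothetical.
    $volume  = "C:\ClusterStorage\Volume1"
    $batches = @( @("VDI-01","VDI-02"), @("VDI-03","VDI-04"), @("VDI-05","VDI-06") )

    foreach ($batch in $batches) {
        foreach ($vm in $batch) {
            # Create each VM with its VHDX on the deduplicated volume
            New-VM -Name $vm -VHDPath (Join-Path $volume "$vm.vhdx")
        }
        # Run a manual optimization job and wait for it to finish before continuing
        Start-DedupJob $volume -Type Optimization -Wait
    }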

    4. Ongoing management of Data Deduplication

    Once everything is deployed, managing Data Deduplication for VDI is no different than managing it for a general file server. For example, to get optimization savings and status:

    Get-DedupVolume | fl
    Get-DedupStatus | fl
    Get-DedupJob

    It may at times occur that a lot of new data is added to the volume and the standard background task is not able to keep up (since it stops when the server gets busy). In that case you can start a “throughput” optimization job that will simply keep going until the work is done:

    Start-DedupJob D: –Type Optimization

    Wrap-up

    Overall, deploying Data Deduplication for VDI is a relatively simple operation, though it may require some additional planning along the way.

    To see all of the posts in this series, check out the What’s New in Windows Server & System Center 2012 R2 archive.


    What’s new for SMI-S in Windows Server 2012 R2


    This post is a part of the nine-part “What’s New in Windows Server & System Center 2012 R2” series that is featured on Brad Anderson’s In the Cloud blog. Today’s blog post covers storage management through SMI-S and how it applies to the larger topic of “Transform the Datacenter.” To read that post and see the other technologies discussed, read today’s post: “What’s New in 2012 R2: IaaS Innovations.”

     

     

    Although my day job is no longer working on SMI-S, I did want to let folks know that a lot of work went into the Standards-Based Storage Management Service (aka Storage Service) in the upcoming Windows Server release. Here are the highlights:

    Discovery - Discovery is what happens when you register (Register-SmisProvider) or update (Update-StorageProviderCache): the storage service tries to find out as much information as you wanted (there are four levels of discovery) and that information resides in the service’s cache so you don’t need to constantly go out to the provider to get the data. In Windows Server 2012, the way the information was discovered was through “walking the model”, which is to say, starting with an object and then following associations to get additional information. Unfortunately, this can take a long time on all but the smallest storage configurations.

    For Windows Server 2012 R2, the mechanisms have been changed. Instead of model-walking, the service will do enumerations of objects and then figure out (in memory) how they inter-relate. This turns out to decrease discovery times by up to 90%! We worked with vendors to make sure providers will work well with this change, and for some of those vendors, you will need to get updated providers.

    Updates (through Indications) - I recently blogged about the indication support for SMI-S. The internals of this are much improved in Windows Server 2012 R2 and more provider changes will be caught through indications so that rediscovery won’t be needed as often. The information in the older blog still applies, except that the firewall rule will already be in place (the provided script will still run unchanged).

    Secure connections (using HTTPS) - In my first posting, I advised against using Mutual Authentication with the storage service. For Windows Server 2012 R2, this has been improved. This applies to indications as well as normal SMI-S traffic. Follow the Indications blog for configuration information (you need to have the certificates in place). Not all providers will work well with mutual auth. I will post more about security in the near future.

    Resiliency Settings - When creating pools and volumes using the Storage Management cmdlets, you could easily specify various parameters that were just never going to work. SMI-S is too generic here. This has been simplified - stick with vendor defined settings and you should be fine. For Windows Server 2012 R2, we’ll give you an error if you try to override any of the parameters.

    Pull Operations - One of the efficiency improvements is to change how enumerations are done by using a newer mechanism called “pull” operations. (You can read more about this here.) This allows chunking of the data coming back from the provider, and generally will lower the memory required on the provider side for large configurations. This works now with EMC providers; others will update in the future. To enable pull operations, you will need to set the following registry value:

    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Storage Management\PullOperationCount

    Set it to something like 100 which tells the provider to send 100 instances of a particular class at a time. Through PowerShell, this would look like:

    Set-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Storage Management" -Name PullOperationCount -Value 100

    To turn it off, just set the value back to 0. (Leaving it enabled for a provider that does not support pull operations will only cause a small performance degradation, so you might want to leave this set if you have more than one provider and any of them supports pull operations.)

    Registering a provider - You can now register a provider that does not have any arrays to manage (yet); running the Update-StorageProviderCache cmdlet will find any new devices at a later time. More significantly, in Windows Server 2012, if a provider lost contact with an array, the result was errors that could be difficult to recover from.

    Node/Port address confusion - In Windows Server 2012, we had these defined backwards from what is mandated by standards or used by things like the iSCSI initiator. This has been corrected, but it may require provider updates because we found some bugs in existing implementations.

    Snapshot/Volume deletion - Under some conditions, it would not be possible to delete a volume or a snapshot. For example, we sometimes thought a volume was exposed to a host when it wasn’t, or a snapshot was in the wrong state for deletion. This has been improved.

    Snapshot target pool - Specifying a -TargetStoragePoolName for snapshots is now supported, that is, if the provider/array allows it. However, be careful when you have more than one array with pools of the same name (which might be common).

    Masking operations - There have been cases where masking/unmasking operations can take a long time to complete, particularly when SC VMM issues multiple requests in parallel; these could result in timeouts. Also, with the current SMI-S model, masking operations performed by Windows might require multiple method calls to providers. Windows Server 2012 R2 now uses jobs for such operations instead of making all masking operations synchronous.

     

    To see all of the posts in this series, check out the What’s New in Windows Server & System Center 2012 R2 archive.

    DFS Replication in Windows Server 2012 R2: Revenge of the Sync


    This post is a part of the nine-part “What’s New in Windows Server & System Center 2012 R2” series that is featured on Brad Anderson’s In the Cloud blog.  Today’s blog post covers DFSR and how it applies to the larger topic of “Transform the Datacenter.”  To read that post and see the other technologies discussed, read today’s post: “What’s New in 2012 R2: IaaS Innovations.”

    Hi folks, Ned here again. You might have read my post on DFSR changes in Windows Server 2012 back in November and thought to yourself, “This is ok, but come on… this took three years? I expected more.”

    We agreed.

    Background

    Windows Server 2012 R2 adds substantial features to DFSR in order to bring it in line with modern file replication scenarios for IT pros and information workers on enterprise networks. These include database cloning in lieu of initial sync, management through Windows PowerShell, file and folder restoration from conflicts and preexisting stores, substantial performance tuning options, database recovery content merging, and huge scalability limit changes.

    Today I’ll talk at a high level about how your business can benefit from this improved architecture. This post assumes a previous working knowledge of DFSR, including basic replication concepts and administration using the previous tools DfsMgmt.msc, DfsrAdmin.exe, and Dfsrdiag.exe. Everything I discuss below you can do right now with the Windows Server 2012 R2 Preview.

    I have a series of deeper articles as well to get you rolling with more walkthroughs and architecture, as well as plenty of TechNet for the blog-a-phobic. Currently these are:

    DFS Replication Initial Sync in Windows Server 2012 R2: Attack of the Clones

    DFS Replication in Windows Server 2012 R2: If You Only Knew the Power of the Dark Shell

    DFS Replication in Windows Server 2012 R2: Restoring Conflicted, Deleted and PreExisting files with Windows PowerShell

    Database Cloning

    DFSR Database Cloning is an optional alternative to the classic initial sync process introduced in Windows Server 2003 R2. DFSR spends most of its time in initial sync—even when administrators preseed files on the peer servers—examining metadata, staging files, and exchanging version vectors. This can make setup, disaster recovery, and hardware replacement very slow. Multi-terabyte data sets are typically infeasible due to the extended setup times; the estimate for a 100TB dataset is 159 days to complete initial sync on a LAN, if performance is linear (spoiler alert: it’s not).

    DB cloning bypasses this process. At a high level, you:

    1. Build a primary server with no partners (or use an existing server with partners)

    2. Clone its database

    3. Preseed the data on N servers

    4. Build N servers using that database clone

    The existing initial sync portion of DFSR is now instantaneous if there are no differences. If there are differences, DFSR only has to catch up the real delta of changes as part of a shortened initial sync process.

    Cloning provides three levels of file validation during the export and import processing. These ensure that if you are allowing users to alter data on the upstream server while cloning is occurring, files are later reconciled on the downstream.

    • None - No validation of files on source or destination server. Fastest and most optimistic. Requires that you preseed data perfectly and do not allow any modification of data during the clone processing on either server.
    • Basic - (Default behavior). Hash of ACL stored in the database record for each file. File size and last modified date-time stored in the database record for each file. Good mix of fidelity and performance.
    • Full - Same hashing mechanism used by DFSR during normal operations. Hash stored in the database record for each file. Slowest but highest fidelity (and still faster than initial sync).

    Some early test results

    What does this mean in real terms? Let’s look at a test run with 10 terabytes of data in a single volume comprising 14,000,000 files:

    “Classic” initial sync    Time to convergence
    ----------------------    -------------------
    Preseeded                 ~24 days

    Now, with DB cloning:

    Validation Level    Time to export         Time to import         Improvement %
    ----------------    --------------         --------------         -------------
    2 – Full            9 days, 0 hours        5 days, 10 hours       40%
    1 – Basic           2 hours, 48 minutes    9 hours, 17 minutes    98%
    0 – None            1 hour, 13 minutes     6 hours, 8 minutes     99%

    With the recommended Basic validation, we’re down to 12 hours! Our 64TB tests with 70 million files only take a couple days! Our 500GB/100,000 file small-scale tests finish in 3 minutes! I like exclamation points!

    The Export-DfsrClone cmdlet provides a sample robocopy command line at export time. You are free to preseed data any way you see fit (backup and restore, robocopy, removable storage, snapshots, etc.) as long as the hashes match and the file security, data streams, and alternate data streams arrive intact between servers.

    You manage this feature using Windows PowerShell. The cmdlets are:

    Export-DfsrClone
    Import-DfsrClone
    Get-DfsrCloneState
    Reset-DfsrCloneState

    I have a separate post coming with a nice walk through on this feature.

    Wait – did I say DFSR Windows PowerShell? Oh yeah.

    Windows PowerShell and WMIv2

    With Windows Server 2012 and prior versions, file server administrators do not have modern object-oriented Windows PowerShell cmdlets to create, configure and manage DFS Replication. While many of the existing command line tools provide the ability to administer a DFS Replication server and a single replication group, building advanced scripting solutions for multiple servers often involves complex output file parsing and looping.

    Windows Server 2012 R2 adds a suite of 42 Windows PowerShell cmdlets built on a new WMIv2 provider. Businesses benefit from a complete set of DFSR Windows PowerShell cmdlets in the following ways:

    1. Allows the switch to modern Windows PowerShell cmdlets as your “common language” for managing enterprise deployments.

    2. Can develop and deploy complex automation workflows for all stages of the DFSR life cycle, including provisioning, configuring, reporting and troubleshooting.

    3. Allows creation of new graphical or script-based wrappers around Windows PowerShell to replace use of the legacy DfsMgmt snap-in, without the need for complex API manipulation.

    List all DFSR cmdlets

    To examine the 42 new cmdlets available for DFSR:

    PS C:\> Get-Command –Module DFSR

    For further output and explanation, use:

    PS C:\> Get-Command –Module DFSR | Get-Help | Select-Object Name, Synopsis | Format-Table -Auto

    We made sure to document every single DFSR Windows PowerShell cmdlet online with more than 80 sweet examples, before RTM!

    Create a new two-server replication group and infrastructure

    Just to get your juices flowing, you can use DFSR Windows PowerShell to create a simple two-server replication configuration using the F: drive, with my two sample servers SRV01 and SRV02:

    PS C:\> New-DfsReplicationGroup -GroupName "RG01" | New-DfsReplicatedFolder -FolderName "RF01" | Add-DfsrMember -GroupName "RG01" -ComputerName SRV01,SRV02

    PS C:\> Add-DfsrConnection -GroupName "RG01" -SourceComputerName SRV01 -DestinationComputerName SRV02

    PS C:\> Set-DfsrMembership -GroupName "RG01" -FolderName "RF01" -ContentPath "F:\RF01" -ComputerName SRV01 -PrimaryMember $True

    PS C:\> Set-DfsrMembership -GroupName "RG01" -FolderName "RF01" -ContentPath "F:\RF01" -ComputerName SRV02

    PS C:\> Get-DfsrMember | Update-DfsrConfigurationFromAD

    Some slick things happening here, such as creating the RG, RF, and members all in a single step, only having to run one command to create connections in both directions, and even polling AD on all computers at once! I have a lot more to talk about here – things like wildcarding, collections, mass edits, multiple file hashing; this is just a taste. I have a whole new post on this you can see here.

    Performance Tuning

    Microsoft designed DFSR initial sync and ongoing replication behaviors in Windows Server 2003 R2 for the enterprises of 2005: smaller files, slower networks, and smaller data sets. Eight years later, much more data in larger files over wider networks have become the norm.

    Windows Server 2012 R2 modifies two aspects of DFSR to allow new performance configuration options:

    • Cross-file RDC toggling
    • Staging minimum file size

    Cross-File RDC Toggling

    Remote Differential Compression (RDC) takes a staged and compressed copy of a file and creates MD-4 signatures based on “chunks” of files.

    Mark, I stole your pretty diagram and owe you one beer.

    When a user alters a file (even in the middle), DFSR can efficiently see which signatures changed and then send along the matching data blocks. E.g., a 50MB document edited to change one paragraph only replicates a few KB.

    Cross-file RDC takes this further by using special hidden sparse files (located in <drive>:\system volume information\dfsr\similaritytable_x and idrecordtable_x) to track all these signatures. With them, DFSR can use other similar files that the server already has to build a copy of a new file locally. DFSR can use up to five of these similar files. So if an upstream server decides “I have file X and here are its RDC signatures”, the downstream server can decide “I don’t have file X. But I do have files Y and Z that have some of the same signatures, so I’ll grab data from them locally and save having to request all of file X.” Since files are often just copies of other files with a little modification, DFSR gains considerable over-the-wire efficiency and minimizes bandwidth usage on slower, narrower WAN links.

    The downside to cross-file RDC is that over time, with many millions of updates and signatures, DFSR may see increased CPU and disk IO while processing similarity needs. Additionally, when replicating on low-latency, high-bandwidth networks like LANs and high-end WANs, it may be faster to disable RDC and cross-file RDC and simply replicate file changes without the chunking operations. Windows Server 2012 R2 offers this option through the Set-DfsrConnection and Add-DfsrConnection Windows PowerShell cmdlets with the -DisableCrossFileRdc parameter.
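
    For example, on a fast LAN connection between the sample servers used earlier in this post, you might disable both behaviors (a hedged sketch; verify the parameters with Get-Help Set-DfsrConnection):

    # Turn off RDC chunking and cross-file RDC for this connection
    Set-DfsrConnection -GroupName "RG01" -SourceComputerName SRV01 `
        -DestinationComputerName SRV02 -DisableRdc $True -DisableCrossFileRdc $True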

    Staging File Size Configuration

    DFSR creates a staging folder for each replicated folder. This staging folder contains the marshalled files sent between servers, and allows replication without risk of interruption from subsequent handles to the file. By default, files over 256KB stage during replication, unless RDC is enabled and using its default minimum file size, in which case files over 64KB are staged.

    When replicating on low-latency, high-bandwidth networks like LANs and high-end WANs, it may be faster to allow certain files to replicate without first staging. If users do not frequently reopen files after modification or addition to a content set – such as during batch processing that dumps files onto a DFSR server for replication out to hundreds of nodes without any later modification – skipping RDC and the staging process can lead to significant performance boosts. You configure this using Set-DfsrMembership and the -MinimumFileStagingSize parameter.
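
    For example, to stage only very large files on the sample membership from earlier (a sketch; the Size512MB enumeration value is my assumption, so check Get-Help Set-DfsrMembership for the accepted values):

    # Skip staging for files under 512 MB on SRV01's membership in RF01
    Set-DfsrMembership -GroupName "RG01" -FolderName "RF01" -ComputerName SRV01 `
        -MinimumFileStagingSize Size512MB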

    Database Recovery

    DFSR can suffer database corruption when the underlying hardware fails to write the database to disk. Hardware problems, controller issues, or write-caching preventing flushing of data to the storage medium can cause corruption. Furthermore, when the DFSR service does not stop gracefully – such as during power loss to the underlying operating system – the database becomes “dirty”.

    DB Corruption Merge Recovery

    When DFSR on Windows Server 2012 and older operating systems detects corruption, it deletes the database and recreates it without contents. DFSR then walks the file system and repopulates the database with each file fenced FRS_FENCE_INITIAL_SYNC (1). Then it triggers non-authoritative initial sync inbound from a partner server. Any file changes made on that server that had not replicated outbound prior to the corruption move to the ConflictAndDeleted or PreExisting folders, and end-users will perceive this as data loss, leading to help desk calls. If multiple servers experienced corruption – such as when they were all on the same hypervisor host or all using the same malfunctioning storage array – all servers may stop replicating, as they are all waiting on each other to return to a normal state. If the writable server with corruption was replicating with a read-only server, the writable server will not be able to return to a normal state.

    In Windows Server 2012 R2, DFSR changes its DB corruption recovery behavior. It deletes and recreates the database, then walks the file system and populates the DB with all file records. All files are fenced with the FRS_FENCE_DEFAULT (3) flag though, marking them as normal. The service then triggers initial sync. When subsequent version vector sync reconciles the new DB content with a remote partner, DFSR handles conflicts in the usual way (last writer/creator wins) – since most (if not all) records are marked normal already though, there is no need for conflict handling on matching records. If the remote partner is read-only, DFSR skips attempting to pull changes from the remote partner (since none can come), and goes back to a healthy state.

    DB Dirty Shutdown Recovery

    DFSR on Windows Server 2012 and Windows Server 2008 R2 detects dirty database shutdown and pauses replication on that volume, then writes DFSR event log warning 2213:

    Warning    5/1/2013 13:15    DFSR    2213    None

    The DFS Replication service stopped replication on volume C:. This occurs when a DFSR JET database is not shut down cleanly and Auto Recovery is disabled. To resolve this issue, back up the files in the affected replicated folders, and then use the ResumeReplication WMI method to resume replication.

     

    Additional Information:

    Volume: C:

    GUID: <some GUID> 

     

    Recovery Steps

    1. Back up the files in all replicated folders on the volume. Failure to do so may result in data loss due to unexpected conflict resolution during the recovery of the replicated folders.

    2. To resume the replication for this volume, use the WMI method ResumeReplication of the DfsrVolumeConfig class. For example, from an elevated command prompt, type the following command:

    wmic /namespace:\\root\microsoftdfs path dfsrVolumeConfig where volumeGuid="<some GUID>" call ResumeReplication

    Until you manually resume replication via WMI or disable this functionality via the registry, DFSR does not resume. When resumed, DFSR performs operations similar to DB corruption recovery, marking the files normal and synchronizing differences. The main problem with this strategy was that far too many people missed the event and did not notice that replication was no longer running. Another good reason to monitor your DFSR servers. (In this case there is no specific management pack warning for 2213 events; the other servers would raise various warnings about backlogs, failures to replicate, and so on, and you can add a custom event. Since 2213 was added out of band and is no longer on by default, it rather slipped through the MP cracks; we'll try to get it into an updated MP someday.)

    In Windows Server 2012 R2, DFSR role installation sets the following registry value by default (if the value is absent, the service treats it as 0):

    Key: HKey_Local_Machine\System\CurrentControlSet\Services\DFSR\Parameters

    Value [DWORD]: StopReplicationOnAutoRecovery

    Data: 0
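
    If you prefer the older, more conservative behavior, where DFSR pauses after a dirty shutdown until you intervene, you can flip the value back (a minimal sketch using the key documented above):

    # Re-enable stop-on-dirty-shutdown; DFSR will then wait for a manual
    # ResumeReplication call after unexpected shutdowns
    Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\DFSR\Parameters" `
        -Name StopReplicationOnAutoRecovery -Value 1
    Restart-Service DFSR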

    We performed code reviews to ensure that no issues with dirty shutdown recovery would lead to data loss; we released one hotfix for previous operating systems based on this work (see http://support.microsoft.com/kb/2780453) but found no further issues.

    Furthermore, the domain controller SYSVOL replica was special-cased so that if it is the only replica on a specific volume and that volume suffers a dirty shutdown, SYSVOL always automatically recovers regardless of the registry setting. The AD admins who don’t know or care that DFSR is replicating their SYSVOL no longer have to worry about such things.

    Preserved File Recovery (The Big Finish!)

    DFSR uses a set of conflict-handling algorithms during initial sync and ongoing replication to ensure that the appropriate files replicate between servers.

    1. During non-authoritative initial sync, cloning, or ongoing replication: files with the same name and path modified on multiple servers move to the following folder on the losing server: <rf>\Dfsrprivate\ConflictAndDeleted

    2. Initial sync or cloning: files with the same name and path that exist only on the downstream server go to <rf>\Dfsrprivate\PreExisting

    3. During ongoing replication: files deleted on a server move to the following folder on all other servers: <rf>\Dfsrprivate\ConflictAndDeleted

    The ConflictAndDeleted folder has a 4GB first in/first out quota in Windows Server 2012 R2 (660MB in older operating systems). The PreExisting folder has no quota. When content moves to these folders, DFSR tracks it in the ConflictAndDeletedManifest.xml and PreExistingManifest.xml. DFSR deliberately mangles all files and folders in the ConflictAndDeleted folder with version vector information to preserve uniqueness. DFSR deliberately mangles the top-level files and folders in the PreExisting folder with version vector information to preserve uniqueness. Previous operating systems did not provide a method to recover data from these folders, and required use of out-of-band script options like RestoreDfsr.vbs (I am rather embarrassed to admit that I wrote that script; my excuse is that it was supposed to be a quick fix for a late night critsit and was never meant to live on for years. Oh well).

    Windows Server 2012 R2 now includes Windows PowerShell cmdlets to recover this data. These cmdlets offer the option to either move or copy files, restore to original or a new location, restore all versions of a file or just the latest, as well as perform inventory operations.

    A few samples

    To see conflicted and deleted files on the H:\rf04 replicated folder:

    PS C:\> Get-DfsrPreservedFiles –Path h:\rf04\DfsrPrivate\ConflictAndDeletedManifest.xml

    Let’s get fancier. To see only the conflicted and deleted DOCX files and their preservation times:

    PS C:\> Get-DfsrPreservedFiles –Path H:\rf04\DfsrPrivate\ConflictAndDeletedManifest.xml | Where-Object path -like *.docx | Format-Table path,preservedtime -auto -wrap

    How about if we restore all files from the PreExisting folder, moving them rather than copying them, placing them back in their original location, super-fast:

    PS C:\> Restore-DfsrPreservedFiles –Path H:\rf04\DfsrPrivate\PreExistingManifest.xml -RestoreToOrigin

    Slick!

    Summary

    We did a ton of work in DFSR in the past few months in order to address many of your long running concerns and bring DFSR into the next decade of file replication; I consider the DB cloning feature to be truly state of the art for file replication or synchronization technologies. We hope you find all these interesting and useful. There are more new blog posts on cloning, Windows PowerShell, reliability, and more here:

    http://blogs.technet.com/b/filecab/archive/2013/08/20/dfs-replication-in-windows-server-2012-r2-if-you-only-knew-the-power-of-the-dark-shell.aspx

    http://blogs.technet.com/b/filecab/archive/2013/08/21/dfs-replication-initial-sync-in-windows-server-2012-r2-attack-of-the-clones.aspx

    http://blogs.technet.com/b/filecab/archive/2013/08/23/dfs-replication-in-windows-server-2012-r2-restoring-conflicted-deleted-and-preexisting-files-with-windows-powershell.aspx

     

     

    - Ned “there were no prequels” Pyle

    To see all of the posts in this series, check out the What’s New in Windows Server & System Center 2012 R2 archive.

    iSCSI Target Server in Windows Server 2012 R2


    This post is a part of the nine-part “What’s New in Windows Server & System Center 2012 R2” series that is featured on Brad Anderson’s In the Cloud blog. Today’s blog post covers iSCSI Target Server and how it applies to the larger topic of “Transform the Datacenter.” To read that post and see the other technologies discussed, read today’s post: “What’s New in 2012 R2: IaaS Innovations.”

     

    iSCSI Target Server made its first in-box appearance in Windows Server 2012 as a feature, after being a separate download in prior releases. Now in Windows Server 2012 R2, iSCSI Target Server ships with two sets of very cool feature enhancements. (Technically, note that this blog post applies specifically to the Windows Server 2012 R2 Preview and is subject to change in future releases.)

    At the high-level, the two sets of enhancements are:

    1. Virtual Disk Enhancements: Larger, resilient, dynamically-growing SCSI Logical Units (LUs) on iSCSI Target Server
    2. Manageability Enhancements: Private and Hosted Cloud management using SCVMM and iSCSI Target Server-based storage

    Let us go into more detail on each of these below. My objectives with this blog post are twofold: to help you become familiar with the new functionality, and to get you comfortable using the new iSCSI Target Server hands-on.

    Virtual Disk Enhancements

    Overview

    Let’s start with a quick recap of what a Windows Server iSCSI Target ‘Virtual Disk’ is. Most of you may already remember that the iSCSI Target Server implementation refers to its inventory of storage units as “Virtual Disks” – when these Virtual Disks are provisioned and then assigned to an iSCSI Target, the disks become accessible to iSCSI initiators as ‘SCSI Logical Units’ (LUs). The iSCSI Target Server administrator can, of course, control access to the iSCSI Target to allow only certain iSCSI initiators to access it. Application-consistent snapshots are easy to take with the VSS provider that installs on the initiator side. Finally, the administrator can also create multiple iSCSI Targets under the same iSCSI Target Server. Here then is a quick reference to all these steps:
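
    As a quick refresher, the basic provisioning flow looks roughly like this in PowerShell (the target name, virtual disk path, size, and initiator IQN below are all placeholders):

    # Create a target and restrict it to a single initiator (placeholder IQN)
    New-IscsiServerTarget -TargetName "Target1" `
        -InitiatorIds "IQN:iqn.1991-05.com.microsoft:client1.contoso.com"

    # Provision a Virtual Disk and assign it to the target; it then surfaces
    # to the initiator as a SCSI Logical Unit
    New-IscsiVirtualDisk -Path E:\LUNs\LUN1.vhdx -SizeBytes 20GB
    Add-IscsiVirtualDiskTargetMapping -TargetName "Target1" -Path E:\LUNs\LUN1.vhdx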

    The good news is that this conceptual framework remains unchanged in Windows Server 2012 R2 although it packs a powerful set of infrastructure changes under the covers. Here is a pictorial that summarizes the Virtual Disk enhancements in the core stack.

     

    WS2012R2 iSCSI Target Architecture

     

    So yes, the big news is that iSCSI Target Server switched to VHDX (VHD 2.0) format in Windows Server 2012 R2. Further, iSCSI Virtual Disks can now also be built off Dynamic VHDX virtual disks. iSCSI snapshots are also supported for Dynamic VHDX-based Virtual Disks.

    In addition to these, our File and Storage Services UI team has done an excellent job in taking advantage of these new enhancements, such that you can simply use Server Manager to start playing with these new features right away. Here is a screen shot of the ‘New iSCSI Virtual Disk’ wizard that perhaps illustrates the enhancements the best:

     

    WS2012 R2 iSCSI Virtual Disk Wizard

     

    We have also made a couple of other key enhancements:

    • Unlike in Windows Server 2012 where iSCSI Target Server always sets FUA on back-end I/Os, iSCSI Target Server now enables Force Unit Access (FUA) on its back-end virtual disk I/O only if the front-end I/O that the iSCSI Target received from the initiator required such a direct medium access. This has the potential to improve performance, assuming of course you have FUA-capable back-end disks or JBODs behind the iSCSI Target Server.
    • Local Mount functionality for iSCSI Virtual Disk snapshots – i.e. Mount-IscsiVirtualDiskSnapshot and Dismount-IscsiVirtualDiskSnapshot – is now deprecated in Windows Server 2012 R2. These cmdlets were typically used for locally mounting a snapshot for target-local usage, e.g. backup. It turns out there is a simpler approach that avoids the deprecated functionality: use the Export-IscsiVirtualDiskSnapshot cmdlet to create an associated Virtual Disk, and access it through a target-local initiator, which can then back up the disk. This simpler approach is feasible because iSCSI Target Server now supports a “loopback initiator”: basically, the initiator and target can both be on the same computer.
    • Maximum number of sessions/target is now increased to 544, and the maximum number of LUs/target has gone up to 276.
    How to get started?

    The good news is that the existing “iSCSI Target Block Storage, How To” TechNet guidance for Windows Server 2012 remains unchanged for this release. Follow the same steps documented for setting up the iSCSI Target Server feature; no additional configuration is required to exercise this scenario. iSCSI Target Server is ready to use the VHDX format!

    You can either use the Server Manager → File and Storage Services → iSCSI GUI (refer to the previous screen shot), or use the iSCSI Target PowerShell cmdlets documented on TechNet for Windows Server 2012: iSCSI Target Cmdlets in Windows PowerShell

    The most significant difference in Windows Server 2012 R2 is perhaps in New-iSCSIVirtualDisk cmdlet usage. Here is the syntax help for New-iSCSIVirtualDisk:

    New-IscsiVirtualDisk [-Path] <string> [-SizeBytes] <uint64>

    [-Description <string>] [-LogicalSectorSizeBytes <uint32>]

    [-PhysicalSectorSizeBytes <uint32>] [-BlockSizeBytes <uint32>]

    [-ComputerName <string>] [-Credential <pscredential>]

    [<CommonParameters>]

    New-IscsiVirtualDisk [-Path] <string> [[-SizeBytes] <uint64>]

    -ParentPath <string> [-Description <string>]

    [-PhysicalSectorSizeBytes <uint32>] [-BlockSizeBytes <uint32>]

    [-ComputerName <string>] [-Credential <pscredential>]

    [<CommonParameters>]

    New-IscsiVirtualDisk [-Path] <string> [-SizeBytes] <uint64>

    -UseFixed [-Description <string>] [-DoNotClearData]

    [-LogicalSectorSizeBytes <uint32>]

    [-PhysicalSectorSizeBytes <uint32>] [-BlockSizeBytes <uint32>]

    [-ComputerName <string>] [-Credential <pscredential>]

    [<CommonParameters>]

    A handful of things worth noting here to start heading in the right direction:

    • Be sure to use the “.vhdx” file extension for the file name in the “Path” parameter. VHDX is the only supported format for newly-created iSCSI Virtual Disks!
    • Note that the default iSCSI Virtual Disk persistence format is Dynamic VHDX. You will need to use the “UseFixed” parameter if you need the Fixed VHDX format.
    • By default, the fixed VHDX format zeroes out the virtual disk file on allocation. Recognizing that the previous iSCSI Target Server version provided a non-zeroing fixed virtual disk behavior, we have provided an option in this release to maintain functional parity through the ‘DoNotClearData’ parameter. However, keep in mind that by using this parameter you may accidentally expose non-zeroed data that you do not want to – so avoid using it if you can! That is why Microsoft no longer makes this the default behavior, either in the GUI or in the cmdlets.
    • You will quickly notice that the new cmdlet supports a number of new parameters – e.g. PhysicalSectorSizeBytes – that are also supported by the New-VHD cmdlet; this is a direct benefit of the redesigned back-end persistence layer based on the VHDX format.
    You should be able to:
    • Create large iSCSI Virtual disks (up to 64 TB) using Windows PowerShell and/or GUI
    • Run your desired workloads – including SQL, diskless boot, cluster shared storage – on iSCSI block storage
    • Create dynamic virtual disks and assign them to an iSCSI target
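    To make that concrete, here is a minimal sketch of carving out a dynamic VHDX-backed iSCSI Virtual Disk and assigning it to a new target. The E:\iSCSIVirtualDisks path, the AppServer target name, and the initiator IQN are hypothetical placeholders – substitute your own, and note this assumes the iSCSI Target Server role is already installed:

    ```powershell
    # Create a 2 TB dynamic VHDX-backed iSCSI Virtual Disk
    # (dynamic is the default; add -UseFixed for a fixed-format disk)
    New-IscsiVirtualDisk -Path "E:\iSCSIVirtualDisks\LUN1.vhdx" -SizeBytes 2TB

    # Create a target and allow a specific initiator to connect (hypothetical IQN)
    New-IscsiServerTarget -TargetName "AppServer" -InitiatorIds "IQN:iqn.1991-05.com.microsoft:app01.contoso.com"

    # Assign the virtual disk to the target
    Add-IscsiVirtualDiskTargetMapping -TargetName "AppServer" -Path "E:\iSCSIVirtualDisks\LUN1.vhdx"
    ```

    From there, the initiator on app01 can connect to the target and bring the new LUN online like any other disk.
    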

    Manageability Enhancements

    Overview

    Let me again start with a recap of the SMI-S provider, which is the most significant enhancement in this group. iSCSI Target Server had shipped a Windows Server 2012-compatible SMI-S provider with System Center Virtual Machine Manager (SCVMM) 2012 SP1. It required installation from the SCVMM media onto the iSCSI Target Server computer, not to mention some additional configuration steps detailed in the SCVMM storage configuration guidance on TechNet. Windows Server 2012 R2 significantly improves the SMI-S provider and brings it in-box.

    WS2012 R2 iSCSI Target with SCVMM

     

     

    As shown in the diagram above, the SMI-S provider ships and installs as part of the iSCSI Target Server. No separate configuration steps are necessary; the SMI-S provider process is auto-instantiated on demand. The SMI-S provider presents a different, standards-compliant management object model to SCVMM, but it transparently uses the same WMI provider that the Windows PowerShell cmdlets use. The SMI-S provider in Windows Server 2012 R2 is designed for dual-active iSCSI target clusters, whereas the previous version was limited to active-passive clusters. The new SMI-S provider also supports asynchronous job management for long-running Create/Expand/Restore jobs – so these are also cancelable from SCVMM. The following screen shot shows the storage manageability view from SCVMM for an iSCSI Target Server.

     

    WS2012 R2 SCVMM Storage Fabric

     

    Additionally, iSCSI Target Server made a few other key manageability enhancements in Windows Server 2012 R2.

    • Online resizing – grow or shrink – of an iSCSI Virtual Disk is now supported via the new Resize-IscsiVirtualDisk cmdlet. Previously, only online expansion was supported, via the Expand-IscsiVirtualDisk cmdlet. The syntax of Resize remains the same as that of Expand, and the Expand cmdlet continues to work as well.
    • Asynchronous long-running operations are supported via cmdlets – for example, a long-running virtual disk operation can now be canceled with the Stop-IscsiVirtualDiskOperation cmdlet.
    • You can optionally disable remote management of an iSCSI Target Server using the newly-introduced ‘DisableRemoteManagement’ parameter on the Set-IscsiTargetServerSetting cmdlet. This is valuable in scenarios where you are embedding iSCSI Target Server in an appliance with a different management wrapper, and you do not want the iSCSI manageability end point to be directly exposed externally.
    • Two new cmdlets are added to simplify the migration experience to Windows Server 2012 R2 – Export-IscsiTargetServerConfiguration and Import-IscsiTargetServerConfiguration.
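    As a quick illustration of the first and third bullets, here is a hedged sketch – the virtual disk path and the 500 GB size are arbitrary examples, and it assumes the ‘DisableRemoteManagement’ parameter takes a Boolean as described above:

    ```powershell
    # Grow (or shrink) an iSCSI Virtual Disk online to 500 GB
    Resize-IscsiVirtualDisk -Path "E:\iSCSIVirtualDisks\LUN1.vhdx" -SizeBytes 500GB

    # Disable the remote manageability end point, e.g. for an appliance-style deployment
    Set-IscsiTargetServerSetting -DisableRemoteManagement $true
    ```
    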
    How to get started?

    The good news is that the existing “iSCSI Target Block Storage, How To” TechNet guidance for Windows Server 2012 remains unchanged for this release. Follow the same steps documented for setting up the iSCSI Target Server feature; no additional configuration is required to exercise this scenario. The SMI-S provider for iSCSI Target Server is automatically installed and ready for use. Use the existing TechNet documentation for configuring SCVMM 2012 SP1 or later to work with iSCSI Target Server – see Configuring an SMI-S Provider for iSCSI Target Server. The only change from that page is that the SMI-S provider no longer needs to be installed from the SCVMM media; it is included in-box in Windows Server 2012 R2. Note that the version of SCVMM Server used for SMI-S manageability must be SCVMM 2012 SP1 or SCVMM 2012 R2 – the management server itself can run on either a Windows Server 2012 R2 or a Windows Server 2012 server.

    Here is the syntax for Stop-IscsiVirtualDiskOperation cmdlet – you can reference the iSCSI Virtual Disk either via the Path parameter, or through the Virtual Disk object retrieved from a different cmdlet such as Get-IscsiVirtualDisk:

    Stop-IscsiVirtualDiskOperation [-Path] <string> [-ComputerName <string>]

    [-Credential <pscredential>] [<CommonParameters>]

    Stop-IscsiVirtualDiskOperation -InputObject <IscsiVirtualDisk>

    [-ComputerName <string>] [-Credential <pscredential>]

    [<CommonParameters>]

    You can use Export-iSCSITargetServerConfiguration with the following syntax. When you run the export from Windows Server 2012 R2 against a computer running a down-level OS version, be sure to use the ‘ComputerName’ parameter to point to the right down-level computer.

    Export-IscsiTargetServerConfiguration [-Filename] <string>

    [[-ComputerName] <string>] [[-Credential] <string>] [-Force]

    [<CommonParameters>]

    And here is the syntax for Import-iSCSITargetServerConfiguration:

    Import-IscsiTargetServerConfiguration [-Filename] <string>

    [[-ComputerName] <string>] [[-Credential] <string>] [-Force]

    [<CommonParameters>]

    Note that, as with the previous PS script, the Export/Import operations do not migrate CHAP settings, for security reasons – we do not want to export an encrypted password in the clear. You will need to note the passwords through some other means and re-configure them on the destination computer after finishing the Import operation. Expect a complete migration guide to be available on TechNet in the near future that will go into a lot more detail on this topic.
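    Putting the two migration cmdlets together, a typical move might look like the following sketch. The file path, computer name, and target name are hypothetical placeholders:

    ```powershell
    # On the new Windows Server 2012 R2 server, pull the configuration
    # from the old (down-level) iSCSI Target Server
    Export-IscsiTargetServerConfiguration -Filename "C:\Migration\OldTarget.xml" -ComputerName "OLDTARGET01"

    # Import the configuration locally on the new server
    Import-IscsiTargetServerConfiguration -Filename "C:\Migration\OldTarget.xml"

    # CHAP secrets are not migrated - re-configure them per target, e.g.:
    # Set-IscsiServerTarget -TargetName "AppServer" -EnableChap $true -Chap (Get-Credential)
    ```
    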

    You should be able to:
    • Discover iSCSI Target in-box SMI-S provider through SCVMM’s storage automation
    • Carve out storage virtual disks on iSCSI Target, and provision them to a Hyper-V host
    • Create long-running asynchronous jobs for large (>= ~5TB) virtual disk creation, and monitor the status
    • Create multiple Continuously Available (CA) iSCSI target server instances in a single failover cluster using Failover Cluster Manager, and manage them all using a single instance of SCVMM
    • Migrate your existing Windows Server 2008 R2 or Windows Server 2012-based iSCSI Target Server to Windows Server 2012 R2

    Wrap-up

    iSCSI Target Server ships with a lot of exciting new functionality in Windows Server 2012 R2. Here are a couple of other useful pointers:

    iSCSI Target Block Storage

    What’s New for iSCSI Target Server in Windows Server 2012 R2

    iSCSI Target Cmdlets in Windows Server 2012

    To conclude, I hope this blog post gave you some insight and familiarity to start using the new iSCSI Target Server right away! Drop me an offline note using the blue “Contact” button on this page with your feedback on how it’s going.

    To see all of the posts in this series, check out the What’s New in Windows Server & System Center 2012 R2 archive.


    Work Folders Certificate Management


    In the last blog post, I provided a step-by-step guide to deploying Work Folders in a test lab environment in which the client uses unsecured connections to the server, meaning the server doesn’t require an SSL certificate. While using unsecured connections simplifies testing, it is not recommended for production deployments. In this blog post, I will provide a step-by-step guide to configuring the SSL certificate on the Work Folders server so that the client can establish a secure connection to the server using SSL.

    Prerequisites

    You will need the following setup:

    1. A Work Folders server set up and configured with at least one sync share (let’s name it SyncSvr.Contoso.com).
    2. One client device running Windows 8.1

    How does Work Folders use certificates?

    Work Folders client-server communication is built on secure HTTP 1.1. According to RFC 2818:

    “In general, HTTP/TLS requests are generated by dereferencing a URI. As a consequence, the hostname for the server is known to the client. If the hostname is available, the client MUST check it against the server's identity as presented in the server's Certificate message, in order to prevent man-in-the-middle attacks.”

    With Work Folders, when a client initiates an SSL negotiation, the server sends its certificate to the client; the client evaluates the certificate and only continues if the certificate is valid and can be trusted. You can find more details on the SSL handshake here.

    Acquiring a certificate

    In a production deployment, you will need to acquire a certificate from a well-known certificate authority (CA) such as VeriSign, or obtain one from an online CA in your intranet domain, such as your enterprise CA. There are a few things that Work Folders verifies in a server certificate, for example:

    • Expired?: The current date and time must be within the "Valid from" and "Valid to" date range on the certificate.
    • Server identity: The certificate's "Common Name" (CN) or “Subject Alternative Name” (SAN) must match the host header in the request. For example, if the client is making a request to https://syncsvr.contoso.com, then the CN (or a SAN entry) must be syncsvr.contoso.com.
    • Trusted CA?: The issuer of the certificate must be a known and trusted CA.
    • Revoked?: The certificate must not have been revoked.

    This TechNet page provides more information on certificate validation: http://technet.microsoft.com/en-us/library/cc700843.aspx

    There are also a few different types of certificates you could acquire:

    • One certificate with one hostname
    • One certificate with multiple hostnames (SAN certificate)
    • One wildcard certificate for a domain

    You can evaluate the options based on:

    1. The number of sync servers you are deploying.
    2. Whether you are using Work Folders discovery. If you are planning to use discovery based on the user email address, you will need to publish https://WorkFolders.<domainname>, such as https://WorkFolders.contoso.com. All the sync servers that act as discovery servers (by default, every sync server can perform discovery) will need a multiple-hostname (SAN) certificate that contains both a hostname matching WorkFolders.contoso.com and the name used to access the sync server.
    3. Ease of management. Certificates are generally valid for 1 or 2 years — the more certificates you have, the more you need to monitor and renew.
    4. If you only deploy one sync server, you just need one certificate with one hostname on it.

    For example, if you are planning to deploy a single sync server, you can request a certificate with just the following hostname on it: WorkFolders.contoso.com

    Getting certificate from a trusted CA

    There are many commercial CAs from which you can purchase a certificate. You can find the CAs trusted by Microsoft on this page: http://support.microsoft.com/kb/931125

    To purchase a certificate, visit the CA’s website and follow the instructions there; each CA has a different purchase procedure.

    Testing using Self-signed certificate

    For testing, you can also use a self-signed certificate. To generate a self-signed certificate, type the following in a Windows PowerShell session:

    PS C:\> New-SelfSignedCertificate -DnsName <server dns names> -CertStoreLocation cert:\LocalMachine\My

    For example:

    PS C:\> New-SelfSignedCertificate -DnsName "SyncSvr.Contoso.com","WorkFolders.Contoso.com" -CertStoreLocation cert:\LocalMachine\My

    Certificates from Enterprise CA

    Windows Server Active Directory Certificate Services (AD CS) can issue certificates in a domain environment, so you may think of getting the sync server certificate from AD CS. However, the limitation of certificates issued by AD CS is that they are not trusted by devices that are not domain joined. Work Folders is designed to support BYOD, where devices are expected not to be domain joined, so the user needs to go through some manual steps to trust the enterprise CA. The instructions are available on this TechNet page: http://technet.microsoft.com/en-us/library/cc750534.aspx.

    Installing a certificate

    Once you get the certificate, you need to install it on the Work Folders server and the reverse proxy server, since both servers will handle sync requests. Copy the certificate to the Work Folders server and the reverse proxy server, and import it:

    PS C:\> Import-PfxCertificate –FilePath <certFile.pfx> -CertStoreLocation cert:\LocalMachine\My

    Note that if you are testing with a self-signed certificate on the Work Folders server, you don’t need to import the certificate again, as the self-signed certificate is already installed on the local server.

    Configure SSL certificate binding

    To bind the SSL certificate, open an elevated command prompt (NOT a Windows PowerShell window), and run:

    netsh http add sslcert ipport=0.0.0.0:443 certhash=<Cert thumbprint> appid={CE66697B-3AA0-49D1-BDBD-A25C8359FD5D} certstorename=MY

    Note:

    • The command above binds the certificate to the root of the server (all hostnames on the server) on port 443.
    • You can get the certificate thumbprint by running the following cmdlet:

    PS C:\> Get-ChildItem –Path cert:\LocalMachine\My
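    If you’d rather do the whole binding from PowerShell, here is a sketch that looks up the thumbprint by subject name and shells out to netsh. The SyncSvr.Contoso.com subject filter is a hypothetical example, and it reuses the app ID from the netsh command above:

    ```powershell
    # Find the certificate whose subject matches the sync server name (hypothetical name)
    $cert = Get-ChildItem -Path cert:\LocalMachine\My |
        Where-Object { $_.Subject -like "*SyncSvr.Contoso.com*" } |
        Select-Object -First 1

    # Bind it to port 443; quoting the appid keeps PowerShell from eating the braces
    netsh http add sslcert ipport=0.0.0.0:443 certhash=$($cert.Thumbprint) appid="{CE66697B-3AA0-49D1-BDBD-A25C8359FD5D}" certstorename=MY
    ```
    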

    Testing using self-signed certificate

    The steps below are for testing purposes only, using the self-signed certificate, because self-signed certificates are not trusted by clients by default. These steps are not necessary if the certificate was purchased from a well-known trusted CA.

    On the server:

    1. Export the self-signed certificate from the server. Run the following cmdlets on the sync server:

    PS C:\>$cert = Get-ChildItem –Path cert:\LocalMachine\My\<thumbprint>

    PS C:\>Export-Certificate –Cert $cert –Filepath servercert.p7b –Type P7B

    Note: you can change the –FilePath parameter to specify the location you want the certificate exported to. Remember this location so that, in the next step, you can copy the file to the client devices.

    2. Copy the certificate file to the testing client device.

    On the client:

    3. On the client machine, install the certificate by right-clicking the certificate file and selecting “Install certificate”.

    4. Follow the wizard to install the certificate into the “Trusted Root Certification Authorities” store.


    5. Complete the installation

    Client setup

    1. Open Control Panel -> System and Security -> Work Folders.
    2. Provide the email address of the user (for example, U1@contoso.com), or enter the Url directly if Work Folders discovery is not configured in the deployment.
    3. Check that setup completes and that the client is able to sync files afterwards.

    Certificate monitoring

    Once the certificate is in place, you need to monitor it for events such as certificate renewal.

    This TechNet article provides a good overview and cmdlets to manage certificates:

    http://social.technet.microsoft.com/wiki/contents/articles/14250.certificate-services-lifecycle-notifications.aspx
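    As a lightweight complement to the lifecycle-notification approach in that article, here is a sketch that flags certificates in the machine store expiring within the next 30 days. The 30-day window is an arbitrary example threshold:

    ```powershell
    # List certificates in the local machine store expiring within 30 days (example threshold)
    $deadline = (Get-Date).AddDays(30)
    Get-ChildItem -Path cert:\LocalMachine\My |
        Where-Object { $_.NotAfter -le $deadline } |
        Select-Object Subject, Thumbprint, NotAfter
    ```

    Run on a schedule, this gives you a simple early warning before a sync server’s certificate lapses.
    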

    Cluster consideration

    Work Folders supports running in failover cluster mode. Below is a quick checklist for configuring certificates in clusters:

    1. Acquire the certificate, making sure the hostname matches the Work Folders VCO name, not the node name. The VCO is the virtual computer object under whose name Work Folders can be moved to different nodes in a cluster.
    2. Import the certificate on all the cluster nodes, as described in the “Installing a certificate” section.
    3. Bind the certificate on each of the cluster nodes, as described in the “Configure SSL certificate binding” section.
    4. Configure certificate notifications on each of the cluster nodes, as described in the “Certificate monitoring” section.

    If you are deploying multiple VCOs in a cluster, you need to make sure the certificate contains all the VCO hostnames as well as the discovery hostname, or use a wildcard certificate.

    For example, in the following setup:

    1. A 2-node cluster, with N1 and N2 as the machine names.
    2. 2 VCOs created (WF1.contoso.com and WF2.contoso.com) to support Work Folders.
    3. When acquiring certificates, you need to make sure the certificate contains the hostnames WF1.contoso.com and WF2.contoso.com.
    4. The certificate is installed on and bound to both nodes of the cluster (machines N1 and N2), and to the reverse proxy server.

    Note: clustering support is not in the Preview release of Windows Server 2012 R2; please make sure to test clustering with the RTM builds.

    Conclusion

    By default, the Work Folders server requires an SSL certificate for clients to connect, and this is highly recommended for production deployments. I hope this blog post provides the details you need to configure and manage SSL certificates on Work Folders servers. If you have questions not covered here, please raise them in the comments so that I can address them in upcoming postings.

    DFS Replication in Windows Server 2012 R2: If You Only Knew the Power of the Dark Shell


    Hi folks, Ned here again. By now, you know that DFS Replication has some major new features in Windows Server 2012 R2. Today we dig into the most comprehensive new feature, DFSR Windows PowerShell.

    Yes, your thoughts betray you

    “IT pros have strong feelings about Windows PowerShell, but if they can be turned, they’d be a powerful ally.”

    - Darth Ned

    It’s not surprising if you’re wary. We’ve been beating the Windows PowerShell drum for years now, but sometimes, new cmdlets don’t offer better ways to do things, only different ways. If you were already comfortable with the old command-line tools or attached to the GUI, why bother learning more of the same? I spent many years in the field before I came to Redmond and I’ve felt this pain.

    As the DFSR development team, we wanted to be part of the solution. It led to a charter for our Windows PowerShell design process:

    1. The old admin tools work against one node at a time – DFSR Windows PowerShell should scale without extensive scripting.

    2. Not everyone is a DFSR expert – DFSR Windows PowerShell should default to the recommended configuration.

    3. Parity with old tools is not enough – DFSR Windows PowerShell should bring new capabilities and solve old problems.

    We then devoted ourselves to this, sometimes arguing late into the night about a PowerShell experience that you would actually want to use.

    Today we walk through all of these new capabilities and show you how, with our combined strength, we can end this destructive conflict and bring order to the galaxy.

    Join me, and I will complete your training

    Let’s start with the simple case of creating a replication topology with two servers that will be used to synchronize a single folder. In the old DFSR tools, you would have two options here:

    1. Run DFSMGMT.MSC, browsing and clicking your way through adding the servers and their local configurations.

    2. Run the DFSRADMIN.EXE command-line tool N times, or run N arguments as part of the BULK command-line option.

    To set up only two servers with DFSMGMT, I have to go through all these dialogs:


    To set up a simple hub-and-two-spoke environment with DFSRADMIN, I need to run these 12 commands:

    dfsradmin rg new /rgname:"software"

    dfsradmin rf new /rgname:software /rfname:rf01

    dfsradmin mem new /rgname:software /memname:srv01

    dfsradmin mem new /rgname:software /memname:srv02

    dfsradmin mem new /rgname:software /memname:srv03

    dfsradmin conn new /rgname:software /sendmem:srv01 /recvmem:srv02

    dfsradmin conn new /rgname:software /sendmem:srv02 /recvmem:srv01

    dfsradmin conn new /rgname:software /sendmem:srv01 /recvmem:srv03

    dfsradmin conn new /rgname:software /sendmem:srv03 /recvmem:srv01

    dfsradmin membership set /rgname:software /rfname:rf01 /memname:srv01 /localpath:c:\rf01 /isprimary:true

    dfsradmin membership set /rgname:software /rfname:rf01 /memname:srv02 /localpath:c:\rf01

    dfsradmin membership set /rgname:software /rfname:rf01 /memname:srv03 /localpath:c:\rf01

    Facepalm. Instead of making bulk operations easier, the DFSRADMIN command-line has given me nearly as many steps as the GUI!

    Worse, I have to understand that the options presented by these old tools are not always optimal – for instance, DFS Management creates the memberships disabled by default, so that there is no replication. The DFSRADMIN tool requires remembering to create connections in both directions; if I don’t, I have created an unsupported and disconnected topology that may eventually cause data loss. These are major pitfalls for DFSR administrators, especially when first learning the product.

    Now watch this with DFSR Windows PowerShell:

    New-DfsReplicationGroup -GroupName "RG01" | New-DfsReplicatedFolder -FolderName "RF01" | Add-DfsrMember -ComputerName SRV01,SRV02,SRV03

    I just added RG, RF, and members with one pipelined command with minimal repeated parameters, instead of five individual commands with repeated parameters. If you are really new to Windows PowerShell, I suggest you start here to understand pipelining. Now:

    Add-DfsrConnection -GroupName "rg01" -SourceComputerName srv01 -DestinationComputerName srv02

    Add-DfsrConnection -GroupName "rg01" -SourceComputerName srv01 -DestinationComputerName srv03

    I just added the hub-and-spoke connections here with a pair of commands instead of four, as the cmdlet creates connections bi-directionally by default instead of one-way only. Now:

    Set-DfsrMembership -GroupName "rg01" -FolderName "rf01" -ComputerName srv01 -ContentPath c:\rf01 –PrimaryMember $true

    Set-DfsrMembership -GroupName "rg01" -FolderName "rf01" -ComputerName srv02,srv03 -ContentPath c:\rf01

    Finally, I added the memberships that enable replication and specify the content to replicate, using only two commands instead of three.

    Out of the gate, DFSR Windows PowerShell saves you a significant amount of typing and navigation. Better yet, it defaults to recommended configurations, and it supports collections of servers, not just one at a time. With tabbed autocomplete, parameters always in the same order, mandatory parameters where required, and everything else opt-in, it is very easy to pick up and start working right away. We even added multiple aliases, with shortened parameters and duplicates of the DFSRADMIN parameter names.

    Watch here as Windows PowerShell autocompletes all my typing and guides me through the minimum required commands to setup my RG:


    (If you can't see the preview, go here: http://www.youtube.com/watch?v=LJZc2idVEu4)

    Let’s scale this up – maybe I want to create a 100-server, read-only, hub-and-spoke configuration for distributing software. I can create a simple one-server-per-line text file named “spokes.txt” containing all my spoke servers – perhaps exported from AD with Get-AdComputer – then create my topology with DFSR Windows PowerShell. It’s as simple as this:

    $rg = "RG01"
    $rf = "RF01"
    $hub = "SRV01"
    $spokes = Get-content c:\temp\spokes.txt

    New-DfsReplicationGroup -RG $rg | New-DfsReplicatedFolder -RF $rf | Add-DfsrMember -MemList ($spokes + $hub)

    $spokes | % {Add-DfsrConnection -GroupName $rg -SMem $hub -RMem $_}

    Set-DfsrMembership -RG $rg -RF rf01 -ComputerName $hub -ContentPath c:\rf01 –PrimaryMember $true -force

    Set-DfsrMembership -RG $rg -RF rf01 -ComputerName $spokes -ContentPath c:\rf01 –Force -ReadOnly $true

    Done! 100 read-only servers added in a hub and spoke, using four commands, a text file, and some variables and aliases used to save my poor little nubbin fingers. Not impressed? If this were DFSRADMIN.EXE, it would take 406 commands to generate the same configuration. And if you used DFSMGMT.MSC, you’d have to navigate through this:

    That was not fun

    With the underlying DFSR Windows PowerShell, you now have very easy scripting options to tie together cmdlets into basic “do everything for me with one command” functions, if you prefer. Here’s a simple example put together by our Windows PowerShell developer, Daniel Ong, that shows this off:

    Configure DFSR using Windows PowerShell

    It’s pretty nifty, check out this short demo video.


    (If you can't see the preview, go here: http://www.youtube.com/watch?v=N1SuGREIOTE)

    The sample is useable for simpler setup cases and also demonstrates (with plenty of comments!) exactly how to write your very own DFSR scripts.

    I find your lack of faith disturbing

    Still not convinced, eh? Ok, we’ve talked topology creation – now let’s see the ongoing management story.

    Let’s say I’m the owner of an existing set of replication groups and replicated folders scattered across dozens or hundreds of DFSR nodes throughout the domain. This is old stuff, first set up years ago when bandwidth was low and latency high. Consequently, there are custom DFSR replication schedules all over the connections and RGs. Now I finally have brand new modern circuits to all my branch offices and the need for weird schedules is past. I start to poke around in DFSMGMT and see that undoing all these little nuggets is going to be a real pain in the tuchus, as there are hundreds of customizations.

    What would DFSR Windows PowerShell do?

    Get-DfsrConnection -GroupName * | Set-DfsrConnectionSchedule -ScheduleType UseGroupSchedule

    Set-DfsrGroupSchedule -GroupName * -ScheduleType Always

    With those two simple lines, I just told DFSR to:

    1. Set all connections in all replication groups to use the replication group schedule instead of their custom connection schedules

    2. Then set all the replication group schedules to full bandwidth, open 24 hours a day, 7 days a week.

    Now that I have an updated schedule, I must wait for all the DFSR servers to poll Active Directory individually and pick up these changes, right? No! That can take up to an hour, and I have things to do. I want them all to update right now:

    Get-DfsrMember -GroupName * | Update-DfsrConfigurationFromAD

    Oh baby! If I was still using DFSRDIAG.EXE POLLAD, I’d be on server 8 of 100 by the time that cmdlet returned from doing all of them.

    Since things are going so well, I think I’ll kick back and read some DFSR best practices info from Warren Williams. Hmmm. I should configure a larger staging quota in my software distribution environment, as these ISO and EXE files are huge and causing performance bottlenecks. According to the math, I need at least 32 GB of staging space on this replicated folder. Let’s make that happen:

    Get-DfsrMember -GroupName "rg01" | Set-DfsrMembership -FolderName "rf01" -StagingPathQuotaInMB (1024 * 32) -force

    That was painless – I don’t have to figure out the server names and I don’t have to whip out Calc to figure out that 32GB is 32,768 megabytes. This wildcarding and pipelining capability is powerful stuff in the right hands.

    It’s not all AD here, by the way – we greatly extended the ease of operations without the need for WMIC.EXE, DFSRDIAG.EXE, or crappy scripts made by Ned years ago. For instance, if you’re troubleshooting with Microsoft Support and they say, “I want you to turn up the DFSR debug logging verbosity and number of logs on all your servers”, you can now do this with a single easy command:

    Get-DfsrMember -GroupName * | Set-DfsrServiceConfiguration -DebugLogSeverity 5 -MaximumDebugLogFiles 1250

    Or what if I just set up replication and accidentally chose the empty folder as the primary copy, resulting in all my files moving into the hidden PreExisting folder? I can now easily move them back:

    Restore-DfsrPreservedFiles -Path "C:\RF01\DfsrPrivate\PreExistingManifest.xml" -RestoreToOrigin

    Dang, that hauls tail! Restore-DfsrPreservedFiles is so cool that it rates its own blog post (coming soon).

    I sense something; a presence I've not felt since…

    This new setup should be humming now – no schedule issues, big staging, no bottlenecks. Let’s see just how fast it is – I’ll create a series of propagation reports for all replicated folders in an RG, let it fan out overnight on all nodes, and then look at it in the morning:

    Start-DfsrPropagationTest -GroupName "rg01" -FolderName * -ReferenceComputerName srv01

    <snore, snore, snore>

    Write-DfsrPropagationReport -GroupName "rg01" -FolderName * -ReferenceComputerName srv01 -verbose

    Now I have as many propagation reports as I have RFs. I can schedule this easily too, which means I can have an ongoing, lightweight, and easily understood view of replication performance in my environment. If I changed –GroupName to “*”, and I had a reference computer that exists everywhere (probably a hub), I could easily create propagation tests for the entire environment.
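    One way to schedule that nightly test is with a PowerShell scheduled job – a sketch only, where the 2:00 AM trigger, group name, and reference computer are arbitrary examples:

    ```powershell
    # Register a nightly propagation test at 2:00 AM (example trigger)
    $trigger = New-JobTrigger -Daily -At "2:00 AM"
    Register-ScheduledJob -Name "DfsrPropTest" -Trigger $trigger -ScriptBlock {
        Start-DfsrPropagationTest -GroupName "rg01" -FolderName * -ReferenceComputerName srv01
    }
    ```

    In the morning, Write-DfsrPropagationReport turns the overnight results into the reports shown above.
    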

    While we’re on the subject of ongoing replication:

    Tell me the first 100 backlogged files and the count, for all RFs on this server, with crazy levels of detail:

    Get-DfsrBacklog -GroupName rg01 -FolderName * -SourceComputerName srv02 -DestinationComputerName srv01 -verbose

    image

    Whoa, less detail please:

    Get-DfsrBacklog -GroupName rg01 -FolderName * -SourceComputerName srv02 -DestinationComputerName srv01 -verbose | ft FullPathName

    image

    Seriously, I just want the count!

    (Get-DfsrBacklog -GroupName "RG01" -FolderName "RF01" -SourceComputerName SRV02 -DestinationComputerName SRV01 -Verbose 4>&1).Message.Split(':')[2]

    image
    Boing boing boing

    Tell me the files currently replicating or immediately queued on this server, sorted with on-the-wire files first:

    Get-DfsrState -ComputerName srv01 | Sort UpdateState -descending | ft path,inbound,UpdateState,SourceComputerName -auto -wrap

    image

    Compare a folder on two servers and tell me if all their immediate file and folder contents are identical and they are synchronized:

    net use x: \\Srv01\c$\Rf01

    Get-DfsrFileHash x:\* | Out-File c:\Srv01.txt

    net use x: /d

net use x: \\Srv02\c$\Rf01

    Get-DfsrFileHash x:\* | Out-File c:\Srv02.txt

    net use x: /d

    Compare-Object -ReferenceObject (Get-Content C:\Srv01.txt) -DifferenceObject (Get-Content C:\Srv02.txt)

    image

    Tell me all the deleted or conflicted files on this server for this RF:

    Get-DfsrPreservedFiles -Path C:\rf01\DfsrPrivate\ConflictAndDeletedManifest.xml | ft preservedreason,path,PreservedName -auto

    image

    Wait, I meant for all RFs on that computer:

    Get-DfsrMembership -GroupName * -ComputerName srv01 | sort path | % { Get-DfsrPreservedFiles -Path ($_.contentpath + "\dfsrprivate\conflictanddeletedmanifest.xml") } | ft path,PreservedReason

    image

    Tell me every replicated folder for every server in every replication group in the whole domain with all their details, and I don’t want to type more than one command or parameter or use any pipelines or input files or anything! TELL ME!!!

    image

    I guess I got a bit excited there. You know how it is.

    These are the cmdlets you’re looking for

For experienced DFSR administrators, here’s a breakout of the Dfsradmin.exe and Dfsrdiag.exe console commands and their new Windows PowerShell cmdlet equivalents. Look for the numbered notes on those that don’t have a direct line-up.

Legacy tool command → Windows PowerShell cmdlet

DFSRADMIN BULK → No direct equivalent; this is implicit to Windows PowerShell
DFSRADMIN CONN NEW → Add-DfsrConnection
DFSRADMIN CONN DELETE → Remove-DfsrConnection
DFSRADMIN CONN EXPORT → No direct equivalent; use Get-DfsrConnectionSchedule (note 1)
DFSRADMIN CONN IMPORT → No direct equivalent; use Set-DfsrConnectionSchedule (note 1)
DFSRADMIN CONN LIST → Get-DfsrConnection
DFSRADMIN CONN LIST SCHEDULE → Get-DfsrConnectionSchedule
DFSRADMIN CONN SET → Set-DfsrConnection
DFSRADMIN CONN SET SCHEDULE → Set-DfsrConnectionSchedule
DFSRADMIN HEALTH NEW → Write-DfsrHealthReport
DFSRADMIN MEM DELETE → Remove-DfsrMember
DFSRADMIN MEM LIST → Get-DfsrMember
DFSRADMIN MEM NEW → Add-DfsrMember
DFSRADMIN MEM SET → Set-DfsrMember
DFSRADMIN MEMBERSHIP DELETE → No direct equivalent (note 2)
DFSRADMIN MEMBERSHIP LIST → Get-DfsrMembership
DFSRADMIN MEMBERSHIP SET → Set-DfsrMembership
DFSRADMIN MEMBERSHIP NEW → No direct equivalent; use Set-DfsrMembership (note 3)
DFSRADMIN PROPREP NEW → Write-DfsrPropagationReport
DFSRADMIN PROPTEST CLEAN → Remove-DfsrPropagationTestFile
DFSRADMIN PROPTEST NEW → Start-DfsrPropagationTest
DFSRADMIN RF DELETE → Remove-DfsReplicatedFolder
DFSRADMIN RF LIST → Get-DfsReplicatedFolder
DFSRADMIN RF NEW → New-DfsReplicatedFolder
DFSRADMIN RF SET → Set-DfsReplicatedFolder
DFSRADMIN RG DELETE → Remove-DfsReplicationGroup
DFSRADMIN RG EXPORT → No direct equivalent; use Get-DfsrGroupSchedule (note 1)
DFSRADMIN RG IMPORT → No direct equivalent; use Set-DfsrGroupSchedule (note 1)
DFSRADMIN RG LIST → Get-DfsReplicationGroup
DFSRADMIN RG LIST SCHEDULE → Get-DfsrGroupSchedule
DFSRADMIN RG NEW → New-DfsReplicationGroup
DFSRADMIN RG SET → Set-DfsReplicationGroup
DFSRADMIN RG SET SCHEDULE → Set-DfsrGroupSchedule
DFSRADMIN RG DELEGATE → No direct equivalent; use Set-Acl (note 4)
DFSRADMIN SUB LIST → No direct equivalent (note 5)
DFSRADMIN SUB DELETE → No direct equivalent (note 5)
DFSRDIAG BACKLOG → Get-DfsrBacklog
DFSRDIAG DUMPADCFG → No direct equivalent; use Get-ADObject (note 6)
DFSRDIAG DUMPMACHINECONFIG → Get-DfsrServiceConfiguration
DFSRDIAG FILEHASH → Get-DfsrFileHash
DFSRDIAG GUID2NAME → ConvertFrom-DfsrGuid
DFSRDIAG IDRECORD → Get-DfsrIdRecord
DFSRDIAG POLLAD → Update-DfsrConfigurationFromAD
DFSRDIAG PROPAGATIONREPORT → Write-DfsrPropagationReport
DFSRDIAG PROPAGATIONTEST → Start-DfsrPropagationTest
DFSRDIAG REPLICATIONSTATE → Get-DfsrState
DFSRDIAG STATICRPC → Set-DfsrServiceConfiguration
DFSRDIAG STOPNOW → Suspend-DfsReplicationGroup
DFSRDIAG SYNCNOW → Sync-DfsReplicationGroup
No legacy equivalent (note 7) → Get-DfsrPreservedFiles
No legacy equivalent (note 7) → Restore-DfsrPreservedFiles
No legacy equivalent (note 8) → Get-DfsrCloneState
No legacy equivalent (note 8) → Import-DfsrClone
No legacy equivalent (note 8) → Reset-DfsrCloneState
No legacy equivalent (note 8) → Export-DfsrClone
No legacy equivalent (note 9) → Set-DfsrServiceConfiguration

Note 1: Mainly because they were pretty dumb and we found no one using them. However, you can export the values using Get-DfsrConnectionSchedule or Get-DfsrGroupSchedule and pipeline them to Out-File or Export-Csv. Then you can use Get-Content or Import-Csv to import them with Set-DfsrConnectionSchedule or Set-DfsrGroupSchedule.
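That export/import dance looks roughly like the sketch below. The CSV path is arbitrary, and the exact property names on the schedule object may differ in your build, so inspect the output with Get-Member before relying on this:

```powershell
# "Export" a connection schedule to CSV...
Get-DfsrConnectionSchedule -GroupName "RG01" -SourceComputerName SRV01 -DestinationComputerName SRV02 |
    Export-Csv C:\ConnSchedule.csv -NoTypeInformation

# ...and "import" it later by reapplying the saved values.
$saved = Import-Csv C:\ConnSchedule.csv
Set-DfsrConnectionSchedule -GroupName "RG01" -SourceComputerName SRV01 -DestinationComputerName SRV02 `
    -ScheduleType $saved.ScheduleType
```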

Note 2: Paradoxically, these old commands leave servers in a non-recommended state. To remove a server’s memberships from replication altogether in an RG, use Remove-DfsrMember (this is the preferred method). To remove a server from a specific membership but leave it in the RG, set its membership state to disabled using Set-DfsrMembership –DisableMembership $true.

Note 3: DFSR Windows PowerShell implements DFSRADMIN MEMBERSHIP NEW implicitly via the New-DfsReplicatedFolder cmdlet, which removes the need to create a new membership and then populate it.

Note 4: You can use the Get-Acl and Set-Acl cmdlets in tandem with the Get-ADObject Active Directory cmdlet to configure delegation on the RG objects. Or just keep using the old tool, I suppose.
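As a rough illustration of that Get-Acl/Set-Acl approach – the domain DN and the "DFSR Admins" group below are placeholders of mine, and you should double-check the rights you grant before applying anything like this in production:

```powershell
# Grant a group full control over one RG's AD object (delegation).
# DFSR topology objects live under CN=DFSR-GlobalSettings,CN=System in the domain partition.
Import-Module ActiveDirectory
$rg  = Get-ADObject -Filter 'Name -eq "RG01"' `
    -SearchBase "CN=DFSR-GlobalSettings,CN=System,DC=corp,DC=contoso,DC=com"
$acl = Get-Acl -Path "AD:\$($rg.DistinguishedName)"
$sid = (Get-ADGroup "DFSR Admins").SID   # hypothetical delegated group
$ace = New-Object System.DirectoryServices.ActiveDirectoryAccessRule($sid, "GenericAll", "Allow")
$acl.AddAccessRule($ace)
Set-Acl -Path "AD:\$($rg.DistinguishedName)" -AclObject $acl
```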

Note 5: The DFSRADMIN SUB DELETE command was only necessary because of the non-recommended DFSRADMIN MEMBERSHIP DELETE command. To remove DFSR memberships in a supported and recommended fashion, see note 2 above.

Note 6: Use the Get-ADObject Active Directory cmdlet against the DFSR objects in AD to retrieve this information (with considerably more detail).
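For example, something like this sketch dumps the DFSR topology objects from the domain; the domain DN is a placeholder for your own:

```powershell
# List every DFSR configuration object in the domain partition.
Import-Module ActiveDirectory
Get-ADObject -SearchBase "CN=DFSR-GlobalSettings,CN=System,DC=corp,DC=contoso,DC=com" `
    -Filter * -Properties * | Select-Object Name, ObjectClass, DistinguishedName
```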

Note 7: The legacy DFSR administration tools cannot list or restore preserved files from the ConflictAndDeleted and PreExisting folders. Windows Server 2012 R2 introduces these capabilities for the first time as in-box options via Windows PowerShell.

Note 8: The legacy DFSR administration tools cannot clone databases. Windows Server 2012 R2 introduces this capability for the first time as an in-box option via Windows PowerShell.

Note 9: The legacy DFSR administration tools do not have the full capabilities of Set-DfsrServiceConfiguration; administrators instead had to make direct WMI calls via WMIC or Get-WmiObject/Invoke-WmiMethod. These included the options to turn debug logging on or off and to configure the maximum number of debug log files, debug log verbosity, maximum debug log messages, dirty shutdown autorecovery behavior, staging folder high and low watermarks, and conflict folder high and low watermarks, as well as purging the ConflictAndDeleted folder. These are all now implemented directly in the new cmdlet.
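For instance, a sketch of tuning several of those formerly WMI-only settings in one shot – the specific values here are illustrative only, and you should confirm the parameter names against Get-Help Set-DfsrServiceConfiguration in your build:

```powershell
# Adjust debug logging and staging watermarks on SRV01 without touching WMI.
Set-DfsrServiceConfiguration -ComputerName SRV01 `
    -DebugLogSeverity 5 -MaximumDebugLogFiles 200 `
    -StagingHighWatermarkPercentage 90 -StagingLowWatermarkPercentage 60
```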

    Give yourself to the Dark Side

It’s the age of Windows PowerShell, folks. The old DFSR tools are relics of a bygone era, and the main limit now is your imagination. Once you look through the DFSR Windows PowerShell online or downloadable help, you’ll find that we gave you 82 examples just to get your juices flowing. From those, I hope you end up creating perfectly tailored solutions to all your day-to-day DFSR administrative needs.

    - Ned “No. I am your father!” Pyle

    DFS Replication Initial Sync in Windows Server 2012 R2: Attack of the Clones


    Hi folks, Ned here again. By now, you know that DFS Replication has some major new features in Windows Server 2012 R2. Today I talk about one of the most radical: DFSR database cloning.

    Prepare for a long post, this has a walkthrough…

    The old ways are not always the best

DFSR – or any proper file replication technology - spends a great deal of time validating that servers have the same knowledge. This is critical to safe and reliable replication; if a server doesn’t know everything about a file, it can’t tell its partner about that file. In DFSR, we often refer to this “initial build” and “initial sync” processing as “initial replication”. DFSR needs to grovel files and folders, record their information in a database on that volume, exchange that information between nodes, stage files and create hashes, and then transmit that data over the network. Even if you preseed the files on each server before configuring replication, the metadata transmissions are still necessary. Each server performs this initial build process locally, and then the non-authoritative server checks its work against an authoritative copy and reconciles the differences.

This process is necessarily very expensive. Heaps of local IO, oodles of network conversation, tons of serialized exchanges based on directory structures. As you add bigger and more complex datasets, initial replication gets slower. A replicated folder that contains tens of millions of preseeded files can take weeks to synchronize the databases, even though preseeding removes the need to send the actual file data.

    Furthermore, there are times when you need to recreate replication of previously synchronized data, such as when:

    1. Upgrading operating systems

    2. Replacing computer hardware

    3. Recovering from a disaster

    4. Redesigning the replication topology

    Any one of these requires re-running initial replication on at least one node. This has been the way of DFSR since Microsoft introduced it in Windows Server 2003 R2.

    Cutting out the middle man

    DFSR database cloning is an optional alternative to so-called classic initial replication. By providing each downstream server with an exported copy of the upstream server’s database and preseeded files, DFSR reduces or eliminates the need for over-the-wire metadata exchange. DFSR database cloning also provides multiple file validation levels to ensure reconciliation of files added, modified, or deleted after the database export but before the database import. After file validation, initial sync is now instantaneous if there are no differences. If there are differences, DFSR only has to synchronize the delta of changes as part of a shortened initial sync process.

    We are talking about fundamental, state of the art performance improvements here, folks. To steal from my previous post, let’s compare a test run with ~10 terabytes of data in a single volume comprising 14,000,000 preseeded files:

“Classic” initial sync:

Preseeded data, time to convergence: ~24 days

Now, with DB cloning:

Validation level 2 (Full): 9 days, 0 hours to export; 5 days, 10 hours to import; 40% improvement
Validation level 1 (Basic): 2 hours, 48 minutes to export; 9 hours, 37 minutes to import; 98% improvement
Validation level 0 (None): 1 hour, 13 minutes to export; 6 hours, 8 minutes to import; 99% improvement

    I think we can actually do better than this – we found out recently that we’re having some CPU underperformance in our test hardware. I may be able to re-post even better numbers someday.

    For instance, here I created exactly one million files and cloned that volume, using VMs running on a 3 year old test server.

    The export:

    image

    The import:

    image

That’s just over 12 minutes total. It’s awesome, like a grizzly bear that shoots lasers from its eyeballs. Yes, I own one of these shirts and I am not ashamed.

    At a high level

    Let’s examine the mainline case of creating a new replication topology using DB cloning:

1. You create a replication group and a replicated folder, then add a server as a member of that topology (but no partners, yet). This will be the “upstream” (source) server.

    2. You let initial build complete

    3. You export the cloned database from the upstream server

    4. You preseed the files to the downstream (destination) server and copy in the exported clone DB files

    5. You import the cloned database on the downstream server

    6. You add the downstream server to the replication group and RF membership, just like classic DFSR

    7. You let the initial sync validation complete

    If you did everything right, step 7 is done instantly, and the server is now replicating normally for all further file additions, modifications, and deletions. It’s straightforward stuff, with only a handful of steps.

    Walkthrough

    Let’s get some hands-on with DB cloning. Below is a walkthrough using the new DFSR Windows PowerShell module and the mainstream “setting up a new replication topology” scenario.

    Requirements and sample setup

    • Active Directory domain with at least one domain controller (does not need to run Windows Server 2012 R2)
    • AD schema updated to at least Windows Server 2008 R2 (there are no forest or domain functional level requirements)
    • Two file servers running Windows Server 2012 R2 and joined to the domain (Windows Server 2012 and earlier file servers cannot participate in cloning scenarios, but do support replication with Windows Server 2012 R2)

    You can use virtualized DFSR servers or physical ones; it makes no difference. This walkthrough uses the following domain environment as an example:

    • One domain controller
    • Two member servers, named SRV01 and SRV02

    Configure DFSR

    To configure the DFSR role on SRV01 and SRV02 using Windows PowerShell, run the following command on each server:

    Install-WindowsFeature –Name Fs-Dfs-Replication -IncludeManagementTools

    Alternatively, to configure the DFSR role using Server Manager:

    1. Start Server Manager.

    2. Click Manage, and then click Add Roles and Features.

    3. Proceed to the Server Roles page, then select DFS Replication, leave the default option to install the Remote Server Administration Tools selected, and continue to the end.

    Configure volumes

On SRV01 and SRV02, configure an F:, G:, and H: drive with NTFS. Each drive should be at least 2GB in size. If your test servers do not already have these drives configured or don’t have additional disks, you can shrink the existing C: volume with Resize-Partition, DiskMgmt.Msc, or Diskpart.exe, and then format the new volumes. Multiple drives allow you to test cloning multiple times without starting over too often – remember, DFSR databases are per-volume, and therefore cloning is as well.

    image

    For example, using Windows PowerShell with a virtual machine that has one 40GB disk and C: volume:

    Get-Partition | Format-Table -auto

    Resize-Partition -DiskNumber 0 -PartitionNumber 1 -Size 33GB

    New-Partition -DiskNumber 0 –Size 2GB -DriveLetter f | Format-Volume

    New-Partition -DiskNumber 0 –Size 2GB -DriveLetter g | Format-Volume

    New-Partition -DiskNumber 0 –Size 2GB -DriveLetter h | Format-Volume

     

    Clone a DFSR database

1. On the upstream server SRV01 only, create H:\RF01 and create or copy in some test files (for example, copy the 2,000 largest files from the root of the C:\Windows\SysWow64 folder).

Important: Windows Server 2012 R2 Preview contains a bug that restricts cloning to under 3,100 files and folders – if you add more files, cloning export never completes. Ehhh, sorry: we fixed this issue before Preview shipped, but the fix was too late to make it into that build. Do not attempt to clone more than 3,100 files with Basic validation while using the Preview version of Windows Server 2012 R2. If you want to use more files, use –Validation None. The RTM version of DFSR DB cloning will not have this limitation.

Use the New-DfsReplicationGroup, New-DfsReplicatedFolder, Add-DfsrMember, and Set-DfsrMembership cmdlets to create a replicated folder and membership for SRV01 only, with H:\RF01 as the replicated folder path. You must specify -PrimaryMember $True for this membership, so that the server performs initial build with no need for partners. You can run these commands on any server.

    Note: Do not add SRV02 as a member nor create a connection between the servers in this new RG. We don’t want that server starting classic replication.

    Sample:

    New-DfsReplicationGroup -GroupName "RG01"

    New-DfsReplicatedFolder -GroupName "RG01" -FolderName "RF01"

    Add-DfsrMember -GroupName "RG01" -ComputerName SRV01

    Set-DfsrMembership -GroupName "RG01" -FolderName "RF01" -ContentPath "H:\RF01" -ComputerName SRV01 -PrimaryMember $True

    Update-DfsrConfigurationFromAD –ComputerName SRV01

    Note the sample output below and how I used the built-in –Verbose parameter to see more AD polling details:

    image

    image

2. Wait for a DFS Replication Event 4112 in the DFS Replication event log, which indicates that the replicated folder initialized successfully as primary.

Note below in the sample output how I have a 6020 event; in a cloning scenario, it is expected and supported, despite what the message implies.

    image

3. Export the cloned database and volume config XML for the H: drive. Export requires that the output folder for the database and configuration XML file already exists. It also requires that no replicated folder on that volume is in an initial build or initial sync phase of processing.

    Sample:

    New-Item -Path "H:\Dfsrclone" -Type Directory

    Export-DfsrClone -Volume H: -Path "H:\Dfsrclone"

Note the use of the –Validation parameter in the sample output below. Cloning provides three levels of file validation during the export and import processing. These ensure that if you allow users to alter data on the upstream server while cloning is occurring, files are later reconciled on the downstream server.

    • None - No validation of files on source or destination server. Fastest and most optimistic. Requires that you preseed data perfectly. Any modification of data during the clone processing on the servers will not be detected or replicated until it is later modified after cloning.
• Basic - (Default behavior, Microsoft recommended). Each file’s existing database record is updated with a hash of the ACL, the file size, and the last modified time. Good mix of fidelity and performance. This is the recommended validation level, and the maximum one you should use if you are replicating more than 10TB of data. Yes, we are going to support much more than 10TB and 11M files in WS2012 R2 as long as you use cloning; we’ll give you an official number at RTM.
    • Full - Same hashing mechanism used by DFSR during normal operations. Hash stored in database record for each file. Slowest but highest fidelity. If you exceed 10TB, we do not recommend using this value due to the comparatively poor performance.

    We recommend that you do not allow users to add, modify, or delete files on the source server as this makes cloning less effective, but we realize you live in the real world. Hence, the validation code.

Important: You should not let users modify or access files on the downstream (destination) server until cloning completes end-to-end and replication is working. This is no different from our normal “classic” initial sync replication best practice for the past 8 years of DFSR, as there is a high likelihood that users will lose their changes through conflict resolution or movement to the preexisting files store. If users need to access these files and make changes, only allow them to access the original source server from which you exported.

    image

    Note the hint outputs above. The export cmdlet shows a suggested copy command for the database export folder. It also suggests preseeding hints for any replicated folders on that volume that will clone. All you have to do is fill in your destination server name and RF path.

    4. Wait for a DFS Replication Event 2402 in the DFS Replication Event Log, which indicates that the export completed successfully. As you can see from the sample outputs, there are four event IDs of note when exporting: 2406, 2410 (there may be many of these, they are progress indicators), 2402, and finally 2002 (which brings the volume back online for normal replication).

    image

    As you can see from my example, I cloned more than 3,100 files. I told you we fixed it already!

    5. Preseed the file and folder data from the source computer to the new downstream computer that will clone the DFS Replication database.

Important: There should be no existing replicated folder content (folders, files, or database) on the downstream server's volume that will perform cloning – let preseeding fill it all in for this mainstream scenario. Microsoft recommends that you do not create network shares to the data until cloning completes, and do not allow users to add, modify, or delete files on the downstream server until post-initial replication is operational.

    Sample preseeding command hint:

    Robocopy.exe "H:\RF01" "\\SRV02\H$\RF01" /E /B /COPYALL /R:6 /W:5 /MT:64 /XD DfsrPrivate /TEE /LOG+:preseed.log

Important: Do not use the robocopy /MIR option on the root of a volume, do not manually create the replicated folder on the downstream server, and do not re-run robocopy over files that you already copied (i.e. if you have to start over, delete the destination folder and file structure and really start over). Let robocopy create all folders and copy all contents to the downstream server, via the /E /B /COPYALL options, every time you run it. Otherwise, you are very likely to end up with hash mismatches.

    Robocopy can be a bit… finicky.

    clip_image001

    clip_image002

6. Copy the contents of the exported folder, both the database and the XML, to the downstream server and save them in a temporary folder on the volume that contains the populated file data.

    Sample database file copy command:

    Robocopy.exe "H:\Dfsrclone" "\\SRV02\h$\dfsrclone" /B

    image

    7. On the downstream server SRV02, ensure that you correctly performed preseeding by using the Get-DfsrFileHash cmdlet to spot-check folders and files, and then compare to the upstream copies.

This sample shows hashes for all the files beginning with “pri”:

    PS C:\> Get-DfsrFileHash \\SRV01\H$\RF01\pri*

    PS C:\> Get-DfsrFileHash "\\SRV02\H$\RF01\pri*"

    Sample output showing an easy “eyeball comparison”

    image

    I recommend you run this on multiple small file subsets and at a few subfolder levels. There are many other examples of using the new Get-DfsrFileHash cmdlet here on TechNet already, including using the compare-object cmdlet to get fancy-schmancy.
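For instance, here is a Compare-Object sketch along those lines. The property names (FileName, FileHash) are my assumptions about the Get-DfsrFileHash output shape, so check them with Get-Member before relying on this:

```powershell
# No output from Compare-Object means the two trees hash identically.
$src = Get-DfsrFileHash \\SRV01\H$\RF01\* | Select-Object FileName, FileHash
$dst = Get-DfsrFileHash \\SRV02\H$\RF01\* | Select-Object FileName, FileHash
Compare-Object -ReferenceObject $src -DifferenceObject $dst -Property FileName, FileHash
```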

    8. Ensure that the System Volume Information\DFSR folder does not exist on this downstream SRV02 server H: drive.

Important: Naturally, this server must not already be participating in replication on that volume; if it is, you cannot clone into it.

    Sample (note: you may need to stop the DFSR service, run this, and then start the DFSR service):

    RD 'H:\System Volume Information\DFSR' –Recurse -Force

Note: When re-using existing files that were previously replicated, you are likely to run into some benign errors when running this command due to the MAX_PATH limitations of RD, where some of the Staging folder contents will be too long to delete. You can ignore those warnings, or if you want to clean out the folder completely, you can use this workaround:

    A. Create an empty folder on H: called "H:\empty"

    B. Run the following command:

    robocopy h:\empty "h:\system volume information\dfsr" /MIR

    C. Delete the now empty “system volume information\DFSR” folder after the robocopy command completes.

    9. Import the cloned database on SRV02. For example:

    Import-DfsrClone -Volume H: -Path "H:\Dfsrclone"

    image

    10. Wait for a DFSR Informational Event 2404 in the DFS Replication Event Log, which indicates that the import completed successfully. As you can see from the sample outputs, there are four event IDs of note when importing: 2412, 2416 (there may be many of these, they are progress indicators), 2418, and finally 2404.

    image

    11. Add the downstream SRV02 server as a member of the replication group using Add-DfsrMember, set its membership using Set-DfsrMembership for the -ContentPath matching H:\rf01, and create bi-directional replication connections between the upstream and downstream servers using Add-DfsrConnection.

    Add-DfsrMember -GroupName "rg01" -ComputerName srv02

    Add-DfsrConnection -GroupName "rg01" -SourceComputerName srv01 -DestinationComputerName srv02

    Set-DfsrMembership -GroupName "rg01" -FolderName "rf01" -ComputerName srv02 -ContentPath "h:\rf01" 

    Update-DfsrConfigurationFromAD srv01,srv02

    Note in the sample output how I use Get-DfsrMember in a pipeline to force AD polling operations on all members in the RG01 replication group, instead of having to run for each server. Imagine how much easier this will make administering environments with dozens or hundreds of DFSR nodes.

    image

    image

12. Wait for the DFSR informational event 4104, which indicates that the server is now replicating files normally. Unlike your previous experience, there will not be a preceding 4102 event when enabling replication of a cloned volume. If any files changed on the upstream server after you performed the cloning export, those files will replicate inbound to the downstream server authoritatively, and you will see 4412 conflict events. If you allowed users to modify data on the downstream server while cloning operations were ongoing – and again, you shouldn’t – those files will conflict (and lose) or move to the preexisting folder, and any files the users had deleted will replicate back in from the upstream server. This is identical to classic initial sync behavior.

    Cheat sheet

Now that you have tried out the controlled scenario once, here is a cut-down “quick steps” version you can use for further testing with those F: and G: drives on your own. Once you use those up, you will need to remove the server from replication for those volumes in order to experiment further with things like a third server or cloning from an existing replicated folder.

    In this case, I am using the F: drive with its RF02 replicated folder in the RG02 replication group. Keep in mind – you don’t have to keep creating new RGs and we support cloning multiple custom writable RFs on a volume. These are just simplified walkthroughs, after all.

    On the upstream SRV01 server:

    New-DfsReplicationGroup "RG02" | New-DfsReplicatedFolder -FolderName "RF02" | Add-DfsrMember -ComputerName SRV01

    Set-DfsrMembership –GroupName "RG02" -ComputerName SRV01 -ContentPath F:\Rf02 -PrimaryMember $True -FolderName "RF02"

    Update-DfsrConfigurationFromAD

Get-WinEvent "DFS Replication" -MaxEvents 10 | fl

    New-Item -Path "f:\Dfsrclone" -Type Directory

    Export-DfsrClone -Volume f: -Path "f:\Dfsrclone"

    Robocopy.exe "F:\RF02" "\\SRV02\F$\RF02" /E /B /COPYALL /R:6 /W:5 /MT:64 /XD DfsrPrivate /TEE /LOG+:preseed.log

    Robocopy.exe f:\Dfsrclone \\srv02\f$\Dfsrclone

    On the downstream SRV02 server (note: you may need to stop the DFSR service to perform the first step; be sure to start it up again so that you can run the import)

RD "F:\System Volume Information\DFSR" –Force -Recurse

Import-DfsrClone -Volume F: -Path "f:\Dfsrclone"

Get-WinEvent "DFS Replication" -MaxEvents 10 | fl

    Add-DfsrMember -GroupName "RG02" -ComputerName "SRV02" | Set-DfsrMembership -FolderName "RF02" -ContentPath "f:\Rf02"

    Add-DfsrConnection -GroupName "RG02" -SourceComputerName "SRV01" -DestinationComputerName "SRV02"

    Update-DfsrConfigurationFromAD SRV01,SRV02

Get-WinEvent "DFS Replication" -MaxEvents 10 | fl

    Some simple troubleshooting

    While the future TechNet content on DB cloning contains a complete troubleshooting section, here are some common issues seen by first-time users of this new feature:

    Symptom

    Export-DfsrClone does not show RootFolderPath or PreseedingHint output for SYSVOL or read-only replicated folders. After running Import-DfsrClone, SYSVOL and read-only replicated folders are not imported.

    Cause

    DFSR cloning does not support SYSVOL or read-only replicated folders in Windows Server 2012 R2. Those folders are skipped by cloning. This behavior is by design.

    Resolution

    Configure replication of read-only replicated folders using classic initial sync. Configure SYSVOL by promoting domain controllers normally.

     

    Symptom

    Export-DfsrClone does not show RootFolderPath and PreseedingHint output for one or more custom replicated folders. After running Import-DfsrClone, not all custom replicated folders are imported.

    Cause

    DFSR cloning does not support replicated folders that are currently in initial sync or initial building. Those replicated folders are skipped by cloning.

    Resolution

    Ensure that all replicated folders on a volume are in a normal, non-initial building, non-initial synchronizing state. Any replicated folders that did not get DFSR event 4112 (primary server) after initial build started, or event 4104 (non-primary server) after initial sync completed, are not capable of cloning yet. If your event logs have wrapped, you can use WMI to determine if a replicated folder is ready to clone:

     

    PS C:\> Get-WmiObject -Namespace "root\Microsoft\Windows\DFSR" -Class msft_dfsrreplicatedfolderinfo -ComputerName <some server> | ft replicatedfoldername,state -auto –wrap

     

    Symptom

Import-DfsrClone fails with errors: “Import-DfsrClone : Could not import the database clone for the volume h: to "H:\dfsrclone". Confirm that you are running in an elevated Windows PowerShell session, the DFSR service is running, and that you are a member of the local Administrators group. Error code: 0x80131500. Details: The WS-Management service cannot process the request. The WMI service or the WMI provider returned an unknown error: HRESULT 0x80041001”

    Cause

You did not preseed the replicated folders onto the destination volume with the same names and relative paths.

    Resolution

Ensure that you preseed the source replicated folders onto the destination volume using the same folder names and relative paths (i.e. if the source replicated folder was on "d:\dfsr\rf01", the destination volume must contain "<volume>:\dfsr\rf01").

     

    Symptom

    DFSR event 2418 shows a significant mismatch count. Cloning takes as long as classic non-preseeded initial sync.

    Cause

    Files were not preseeded onto the destination server correctly or at all.

    Resolution

    Validate your preseeding technique and results. Reattempt the export and import process.

     

    Symptom

    Export-DfsrClone never completes or returns any output when using -Validation Basic or when not specifying -Validation at all.

    Cause

    Code defect in Windows Server 2012 R2 Preview build only, when cloning more than 3100 files on a volume.

    Resolution

    This is a known issue in the Preview build only; it is resolved in later builds. As a workaround, limit the number of files replicated with basic validation to fewer than 3,100 per volume. If you wish to see the cloning performance with a larger dataset, use 3,100 much larger sample files (such as ISO, CAB, MSI, VHD, or VHDX files). Alternatively, use validation level none (0) instead of basic.
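If you want to check how many files a volume’s replicated folders contain before exporting with basic validation, a quick counting sketch (the folder paths are placeholders, and DfsrPrivate is excluded because DFSR does not replicate it):

```powershell
# Count replicated files per folder, skipping the DfsrPrivate housekeeping folder
'D:\dfsr\rf01', 'D:\dfsr\rf02' | ForEach-Object {
    $files = Get-ChildItem -Path $_ -Recurse -File -Force |
        Where-Object { $_.FullName -notlike '*\DfsrPrivate\*' }
    '{0}: {1} files' -f $_, @($files).Count
}
```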

    Where you can learn more

    We have a comprehensive cloning walkthrough available on TechNet and a preseeding article on the way, as well as updates to the DFSR FAQ. These include steps on cloning an existing replica, dealing with hub servers that have many unique replicated folders from branch offices, using cloning to recover a corrupted database, and replacing or upgrading servers. Not to mention the new supported DFSR size limits!

    - Ned “hmmm, I didn’t make any Star Wars references after all” Pyle

    DFS Replication in Windows Server 2012 R2: Restoring Conflicted, Deleted and PreExisting files with Windows PowerShell


    Hi folks, Ned here again. Today I talk about a new feature in Windows Server 2012 R2 DFSR: restoring preserved files. DFSR has had conflict, deletion, and preexisting file handling since its release, but until now, we required out-of-band tools to recover those files. Those days are done: we now have Get-DfsrPreservedFiles and Restore-DfsrPreservedFiles.

    Before we begin

    Readers of my posts on AskDS and FileCab know I like to take the edge off. I figure everyone should learn we aren’t a corporate hive mind, just a collection of humans striving to make great software. For now though, I’ll keep the cheap laughs to a minimum, as when you’re restoring data it’s usually causing some hair-pulling and tears.

    Let’s get to it!

    What are preserved files?

    DFSR uses a set of conflict-handling algorithms during initial sync and ongoing replication to ensure that the appropriate files replicate between servers, as well as to preserve remote deletions.

    1. Conflicts - During non-authoritative initial sync, cloning, or ongoing replication, the losing copies of files modified on multiple servers under the same name and path move to: <rf>\Dfsrprivate\ConflictAndDeleted
    2. PreExisting - During initial sync or cloning, files with the same name and path that exist only on the downstream server move to: <rf>\Dfsrprivate\PreExisting
    3. Deletions - During ongoing replication, when a file is deleted on one server, the copies on all other servers move to: <rf>\Dfsrprivate\ConflictAndDeleted


    The ConflictAndDeleted folder has a 4GB “first in/first out” quota in Windows Server 2012 R2 (it’s 660MB in older operating systems). The PreExisting folder has no quota.
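If 4 GB is not the right size for your environment, the ConflictAndDeleted quota is adjustable per member with Set-DfsrMembership. A sketch, with placeholder group, folder, and server names:

```powershell
# Raise the ConflictAndDeleted quota to 8 GB for one member's replicated folder
Set-DfsrMembership -GroupName "RG01" -FolderName "RF01" `
    -ComputerName SRV02 -ConflictAndDeletedQuotaInMB 8192
```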

    When content moves to these folders, DFSR tracks it in the ConflictAndDeletedManifest.xml and PreExistingManifest.xml. DFSR mangles all files and folders in the ConflictAndDeleted folder with version vector information to preserve uniqueness. DFSR also mangles the top-level files and folders in the PreExisting folder, but leaves all the subfolder contents unaltered.

    The result is that it can be difficult to recover data, because much of it has heavily obfuscated names and paths. While you can use the XML files to recover the data on an individual basis, this doesn’t scale. Moreover, if you just set up replication and accidentally chose the empty replicated folder as primary, you want all those files back out of preexisting immediately with a minimum amount of fuss.

    How to create some preserved files

    In this walkthrough, I create a replicated folder that intentionally contains conflicted, deleted, and preexisting content for restoration testing. To follow along, you need a couple of Windows Server 2012 R2 computers with DFSR installed: let’s call them SRV01 and SRV02.

    1. On server SRV02 only, create C:\RF01 and add some test files (such as all the contents of the C:\Windows\SYSWOW64 folder). Make sure it has some subfolders with files in it.

    2. Create a new replication group named RG01 with a single replicated folder named RF01. Add SRV01 and SRV02 as members of the RG and replicate the C:\RF01 directory. You must specify SRV01 as the primary (authoritative) server in this case. I recommend the new DFSR Windows PowerShell module for this, so all my examples below go that route.

    Important: Choosing SRV01 as primary is an “intentional mistake” in this scenario, as I want to create preexisting files on the downstream server. Ordinarily, you would make SRV02 primary, as it contains all the data to replicate and SRV01 contains none.

    New-DfsReplicationGroup -GroupName "RG01" | New-DfsReplicatedFolder -FolderName "RF01" | Add-DfsrMember -ComputerName SRV01,SRV02

    Add-DfsrConnection -GroupName "RG01" -SourceComputerName srv01 -DestinationComputerName srv02

    Set-DfsrMembership -GroupName "rg01" -FolderName "rf01" -ComputerName srv01 -ContentPath c:\rf01 -PrimaryMember $true

    Set-DfsrMembership -GroupName "rg01" -FolderName "rf01" -ComputerName srv02 -ContentPath c:\rf01

    Update-DfsrConfigurationFromAD srv01,srv02

    3. Verify that replication completes – in this case, SRV01 logs DFSR event 4112 and SRV02 logs DFSR event 4104. All files in c:\rf01 will appear to vanish from SRV02.

    4. Create a test BMP and RTF file on SRV01 in C:\RF01. Create a subfolder, and then create another BMP and RTF test file in that subfolder. Make sure those files replicate.
    Note: BMP and RTF are convenient choices because they are default file creation options in the Windows Explorer shell. In addition, unlike Notepad with TXT files, their editors follow standard conventions for opening and closing files.


    5. Pause replication between SRV01 and SRV02 using the Suspend-DfsReplicationGroup cmdlet. For instance, to pause for 5 minutes:

    Suspend-DfsReplicationGroup -GroupName rg01 -SourceComputerName srv01 -DestinationComputerName srv02 -DurationInMinutes 5

    Suspend-DfsReplicationGroup -GroupName rg01 -SourceComputerName srv02 -DestinationComputerName srv01 -DurationInMinutes 5

    6. Modify the top-level BMP file on both servers (the same file), making sure to modify the SRV02 copy first (i.e. earlier, so that it will lose the conflict).

    7. Let replication resume from the suspension in step 5.

    8. Create another BMP file on SRV01, and then delete that file and the subfolder you created in step 4, along with its contents.

    9. Validate that DFSR deletes the files from both servers, and that the file you modified on both servers now holds the version that came from SRV01.

    10. On SRV01, manually recreate the same-named file that you previously deleted in step 8, ensure it replicates to SRV02, then delete it from SRV01 and verify that the deletion replicates. This creates a scenario with multiple deleted file versions.

    11. Copy the following folders and files elsewhere as a backup on SRV02:

    C:\RF01\DfsrPrivate\ConflictAndDeleted

    C:\RF01\DfsrPrivate\ConflictAndDeletedManifest.xml

    C:\RF01\DfsrPrivate\PreExisting

    C:\RF01\DfsrPrivate\PreExistingManifest.xml

    For example:

    Robocopy.exe c:\rf01\dfsrprivate c:\rf01backup\dfsrprivate /mt:64 /b /e /xd staging /copy:dt

    Note: Restoring preserved files does not require running the DFSR service or an active RG/RF. You can operate on the preserved data independently on Windows Server 2012 R2, including locally on a Windows 8.1 Preview computer running RSAT with a local copy of the DfsrPrivate folder. Backing up the preserved files prior to restoration is a best practice, since restore operations are destructive by default (they move files instead of copying).

    Now I have some conflicts, some deletions, and some preexisting files and folders, all on SRV02 in the c:\rf01\dfsrprivate folder. Let’s go to work.



    How to inventory preserved files

    The Get-DfsrPreservedFiles cmdlet tells you everything you want to know about files and folders in the ConflictAndDeleted and PreExisting stores, according to the XML manifests. Let’s inventory the preserved files to see which files DFSR saved and some details about them. All the cmdlet needs is a single parameter, -Path, that points to the manifest.

    1. On SRV02, see the conflicted and deleted files:

    Get-DfsrPreservedFiles -Path C:\RF01\DfsrPrivate\ConflictAndDeletedManifest.xml


    2. On SRV02, see the preexisting files:

    Get-DfsrPreservedFiles -Path C:\RF01\DfsrPrivate\PreExistingManifest.xml


    3. On SRV02, retrieve only RTF files with their original path and new names:

    Get-DfsrPreservedFiles -Path C:\RF01\DfsrPrivate\ConflictAndDeletedManifest.xml | Where-Object path -like *.rtf | ft path,preservedname


    4. On SRV02, see only conflicted files, when the conflict occurred, and what server originated the conflict:

    Get-DfsrPreservedFiles -Path C:\RF01\DfsrPrivate\ConflictAndDeletedManifest.xml | Where-Object PreservedReason -like UpdateConflict | ft path,preservedtime,UID -auto

    ConvertFrom-DfsrGuid -Guid 40A4EEBF-110B-4F40-990C-B5ADBCA97725 -GroupName rg01


    No longer are you left wondering when a user deleted a file or from which server, nor if a particular file is still in the ConflictAndDeleted cache on the other nodes. It’s all there at your fingertips.
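You can also summarize a manifest at a glance. For example, this sketch (using the same manifest path as above) tallies preserved files by the reason DFSR preserved them:

```powershell
# Count preserved files by reason (e.g., UpdateConflict, Deleted)
Get-DfsrPreservedFiles -Path C:\RF01\DfsrPrivate\ConflictAndDeletedManifest.xml |
    Group-Object PreservedReason |
    Format-Table Name, Count -AutoSize
```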

    How to restore preserved files

    The Restore-DfsrPreservedFiles cmdlet can restore one or more files and folders in the ConflictAndDeleted and PreExisting stores, according to the XML manifests. All it needs is the manifest and a decision about how to recover the files and where to put them, but there are some additional options:

    • -Path - path of a manifest XML file in the replicated folder
    • -RestoreToPath or -RestoreToOrigin - should the data restore to a new arbitrary path or to its original path
    • -RestoreAllVersions (optional) - should all versions of conflicted and deleted files restore with an appended time-date stamp, or just the latest version. Default is FALSE, so only the latest files restore
    • -CopyFiles (optional) - should the files move or copy. Default is FALSE, so files move
    • -AllowClobber (optional) - should the restore overwrite existing files in the destination

    Important: By default, this cmdlet moves all preserved files from PreExisting. It also moves the latest version of files from ConflictAndDeleted and removes the remaining ones (this is intentional; otherwise, every time you ran this cmdlet, you would restore an increasingly older version of conflicted files). We strongly recommend backing up the ConflictAndDeleted and PreExisting folders before you use this cmdlet!

    Let’s look at some examples of recovery.

    Note: some testing can “break” the test environment we created above, because there will no longer be any files to restore once you move them. I recommend you make a few backup copies of your saved DfsrPrivate folder so you can go through this a few times.

    1. On SRV02, move all preexisting files to their original location:

    Restore-DfsrPreservedFiles -Path "C:\RF01\DfsrPrivate\PreExistingManifest.xml" -RestoreToOrigin


    This is very quick, as I’m moving the data locally on the same volume. When I restored ~1GB and 4,500 files and folders of SYSWOW64, I was back in business 2.3 seconds later.

    2. On SRV02, copy all conflicted and deleted files to the original location, preserving all versions of the files so that users can decide which ones to keep:

    Restore-DfsrPreservedFiles -Path "C:\RF01\DfsrPrivate\ConflictAndDeletedManifest.xml" -RestoreToOrigin -RestoreAllVersions -CopyFiles


    3. On SRV02, move all versions of the preserved files verbosely to an alternate location (here, an example folder named C:\Restored) for later analysis and manual restore:

    Restore-DfsrPreservedFiles -Path "C:\RF01\DfsrPrivate\ConflictAndDeletedManifest.xml" -RestoreToPath "C:\Restored" -RestoreAllVersions -Force -Verbose


    Note: an unintended artifact of this type of “alternate location” operation is the creation of an empty set of folders based on the RF itself, but at one level deeper. You can ignore or delete that top folder named after the RF itself; it will contain only empty folders. This is something we might fix later, but it is totally benign and cosmetic.

    4. On SRV02, move the newest version of all conflicted and deleted files to the original location, removing all versions of moved files from the ConflictAndDeleted folder, skipping any files that already exist and leaving them in the ConflictAndDeleted folder:

    Restore-DfsrPreservedFiles -Path "C:\RF01\DfsrPrivate\ConflictAndDeletedManifest.xml" -RestoreToOrigin

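By default, this last operation skips files that already exist at the destination. If you want the restored copies to overwrite them instead, -AllowClobber changes that behavior; a sketch (use with care, since it replaces the destination files):

```powershell
# Overwrite same-named files already present at the original location
Restore-DfsrPreservedFiles -Path "C:\RF01\DfsrPrivate\ConflictAndDeletedManifest.xml" `
    -RestoreToOrigin -AllowClobber
```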

    As you can tell, these cmdlets are very simple to use and can meet most data restoration needs. Now if Tim from Sales accidentally deletes his shared document that he started this morning – and is therefore not in the nightly backup – you have one more way to get it back.

    I hope this is another good example of why you should move to Windows Server 2012 R2.

    - Ned “self-preservation” Pyle
