This post is a part of the nine-part “What’s New in Windows Server & System Center 2012 R2” series that is featured on Brad Anderson’s In the Cloud blog. Today’s blog post covers DFSR and how it applies to the larger topic of “Transform the Datacenter.” To read that post and see the other technologies discussed, read today’s post: “What’s New in 2012 R2: IaaS Innovations.”
Hi folks, Ned here again. You might have read my post on DFSR changes in Windows Server 2012 back in November and thought to yourself, “This is ok, but come on… this took three years? I expected more.”
We agreed.
Background
Windows Server 2012 R2 adds substantial features to DFSR in order to bring it in line with modern file replication scenarios for IT pros and information workers on enterprise networks. These include database cloning in lieu of initial sync, management through Windows PowerShell, file and folder restoration from conflicts and preexisting stores, substantial performance tuning options, database recovery content merging, and huge scalability limit changes.
Today I’ll talk at a high level about how your business can benefit from this improved architecture. This post assumes a working knowledge of DFSR, including basic replication concepts and administration using the previous tools DfsMgmt.msc, DfsrAdmin.exe, and Dfsrdiag.exe. Everything I discuss below you can do right now with the Windows Server 2012 R2 Preview.
I also have a series of deeper articles in the pipeline to get you rolling with more walkthroughs and architecture, plus plenty of TechNet documentation for the blog-a-phobic.
Database Cloning
DFSR Database Cloning is an optional alternative to the classic initial sync process introduced in Windows Server 2003 R2. DFSR spends most of its time in initial sync—even when administrators preseed files on the peer servers—examining metadata, staging files, and exchanging version vectors. This can make setup, disaster recovery, and hardware replacement very slow. Multi-terabyte data sets are typically infeasible due to the extended setup times; the estimate for a 100TB dataset is 159 days to complete initial sync on a LAN, if performance is linear (spoiler alert: it’s not).
DB cloning bypasses this process. At a high level, you:
1. Build a primary server with no partners (or use an existing server with partners)
2. Clone its database
3. Preseed the data on N servers
4. Build N servers using that database clone
The existing initial sync portion of DFSR is now instantaneous if there are no differences. If there are differences, DFSR only has to catch up the real delta of changes as part of a shortened initial sync process.
Cloning provides three levels of file validation during the export and import processing. These ensure that if you are allowing users to alter data on the upstream server while cloning is occurring, files are later reconciled on the downstream.
- None - No validation of files on source or destination server. Fastest and most optimistic. Requires that you preseed data perfectly and do not allow any modification of data during the clone processing on either server.
- Basic - (Default behavior). Hash of ACL stored in the database record for each file. File size and last modified date-time stored in the database record for each file. Good mix of fidelity and performance.
- Full - Same hashing mechanism used by DFSR during normal operations. Hash stored in the database record for each file. Slowest but highest fidelity (and still faster than classic initial sync).
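You pick the validation level at export time. Here is a minimal sketch, assuming the -Volume, -Path, and -Validation parameters on Export-DfsrClone behave as the cmdlet help describes (with values None, Basic, or Full):

# Export the DFSR database clone for volume F: using the default Basic validation
PS C:\> Export-DfsrClone -Volume F: -Path "F:\DfsrClone" -Validation Basic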
Some early test results
What does this mean in real terms? Let’s look at a test run with 10 terabytes of data in a single volume comprising 14,000,000 files:
| “Classic” initial sync | Time to convergence |
| --- | --- |
| Preseeded | ~24 days |
Now, with DB cloning:
| Validation Level | Time to export | Time to import | Improvement % |
| --- | --- | --- | --- |
| 2 – Full | 9 days, 0 hours | 5 days, 10 hours | 40% |
| 1 – Basic | 7 hours, 35 minutes | 3 hours, 14 minutes | 98% |
| 0 – None | 1 hour, 19 minutes | 2 hours, 49 minutes | 99% |
With the recommended Basic validation, we’re down to 11 hours! Our 64TB tests with 70 million files only take a few days! Our 500GB/100,000 file small-scale tests finish in 4 minutes! I like exclamation points!
The Export-DfsrClone cmdlet provides a sample robocopy command line at export time. You are free to preseed data any way you see fit (backup and restore, robocopy, removable storage, snapshots, etc.) as long as the hashes match and the file security, data streams, and alternate data streams copy intact between servers.
You manage this feature using Windows PowerShell. The cmdlets are:
Export-DfsrClone
Import-DfsrClone
Get-DfsrCloneState
Reset-DfsrCloneState
I have a separate post coming with a nice walkthrough of this feature.
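In the meantime, here is a rough sketch of one round trip so you can see how the cmdlets map onto the four steps above. The parameters follow the cmdlet help as I read it, and the robocopy lines are only an illustration of preseeding to a sample downstream server; use the exact command line that Export-DfsrClone prints for you:

# On the upstream server: export the database clone for volume F:
PS C:\> Export-DfsrClone -Volume F: -Path "F:\DfsrClone" -Validation Basic

# Preseed the replicated folder and copy the exported clone to the downstream server
# (robocopy is one option; any method that keeps hashes, security, and streams intact works)
PS C:\> Robocopy.exe "F:\RF01" "\\SRV02\F$\RF01" /E /B /COPYALL /R:6 /XD DfsrPrivate /LOG:C:\temp\preseed.log
PS C:\> Robocopy.exe "F:\DfsrClone" "\\SRV02\F$\DfsrClone" /B

# On the downstream server: import the clone, then check its progress
PS C:\> Import-DfsrClone -Volume F: -Path "F:\DfsrClone"
PS C:\> Get-DfsrCloneState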
Wait – did I say DFSR Windows PowerShell? Oh yeah.
Windows PowerShell and WMIv2
With Windows Server 2012 and prior versions, file server administrators do not have modern object-oriented Windows PowerShell cmdlets to create, configure and manage DFS Replication. While many of the existing command line tools provide the ability to administer a DFS Replication server and a single replication group, building advanced scripting solutions for multiple servers often involves complex output file parsing and looping.
Windows Server 2012 R2 adds a suite of 42 Windows PowerShell cmdlets built on a new WMIv2 provider. Businesses benefit from a complete set of DFSR Windows PowerShell cmdlets in the following ways:
1. You can adopt modern Windows PowerShell cmdlets as your “common language” for managing enterprise deployments.
2. You can develop and deploy complex automation workflows for all stages of the DFSR life cycle, including provisioning, configuring, reporting, and troubleshooting.
3. You can create new graphical or script-based wrappers around Windows PowerShell to replace the legacy DfsMgmt snap-in, without the need for complex API manipulation.
List all DFSR cmdlets
To examine the 42 new cmdlets available for DFSR:
PS C:\> Get-Command -Module DFSR
For further output and explanation, use:
PS C:\> Get-Command -Module DFSR | Get-Help | Select-Object Name, Synopsis | Format-Table -Auto
We made sure to document every single DFSR Windows PowerShell cmdlet online with more than 80 sweet examples, before RTM!
Create a new two-server replication group and infrastructure
Just to get your juices flowing, you can use DFSR Windows PowerShell to create a simple two-server replication configuration using the F: drive, with my two sample servers SRV01 and SRV02:
PS C:\> New-DfsReplicationGroup -GroupName "RG01" | New-DfsReplicatedFolder -FolderName "RF01" | Add-DfsrMember -GroupName "RG01" -ComputerName SRV01,SRV02
PS C:\> Add-DfsrConnection -GroupName "RG01" -SourceComputerName SRV01 -DestinationComputerName SRV02
PS C:\> Set-DfsrMembership -GroupName "RG01" -FolderName "RF01" -ContentPath "F:\RF01" -ComputerName SRV01 -PrimaryMember $True
PS C:\> Set-DfsrMembership -GroupName "RG01" -FolderName "RF01" -ContentPath "F:\RF01" -ComputerName SRV02
PS C:\> Get-DfsrMember | Update-DfsrConfigurationFromAD
Some slick things are happening here, such as creating the RG, RF, and members all in a single step; only having to run one command to create connections in both directions; and even polling AD on all computers at once! I have a lot more to talk about here – things like wildcarding, collections, mass edits, multiple file hashing; this is just a taste.
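As one small taste of those mass edits, here is a hedged sketch: the membership cmdlets accept wildcards and should pipe into one another, so something along these lines bumps the staging quota on every replicated folder that SRV01 hosts (the -StagingPathQuotaInMB parameter name and the pipeline binding are my assumptions from the cmdlet help, so verify on your build):

# Raise the staging quota to 16 GB for all memberships on SRV01, across every replication group
PS C:\> Get-DfsrMembership -GroupName "*" -ComputerName SRV01 | Set-DfsrMembership -StagingPathQuotaInMB 16384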
Performance Tuning
Microsoft designed DFSR initial sync and ongoing replication behaviors in Windows Server 2003 R2 for the enterprises of 2005: smaller files, slower networks, and smaller data sets. Eight years later, far more data, larger files, and faster, wider networks have become the norm.
Windows Server 2012 R2 modifies two aspects of DFSR to allow new performance configuration options:
- Cross-file RDC toggling
- Staging minimum file size
Cross-File RDC Toggling
Remote Differential Compression (RDC) takes a staged and compressed copy of a file and creates MD-4 signatures based on “chunks” of files.
[Diagram: RDC file chunks and signatures. Mark, I stole your pretty diagram and owe you one beer.]
When a user alters a file (even in the middle), DFSR can efficiently see which signatures changed and then send along the matching data blocks. E.g., a 50MB document edited to change one paragraph only replicates a few KB.
Cross-file RDC takes this further by using special hidden sparse files (located in <drive>:\system volume information\dfsr\similaritytable_x and idrecordtable_x) to track all these signatures. With them, DFSR can use other similar files that the server already has to build a copy of a new file locally. DFSR can use up to five of these similar files. So if an upstream server decides “I have file X and here are its RDC signatures”, the downstream server can decide “I don’t have file X. But I do have files Y and Z that have some of the same signatures, so I’ll grab data from them locally and save having to request all of file X.” Since files are often just copies of other files with a little modification, DFSR gains considerable over-the-wire efficiency and minimizes bandwidth usage on slower, narrower WAN links.
The downside to cross-file RDC is that over time, with many millions of updates and signatures, DFSR may see increased CPU and disk IO while processing similarity needs. Additionally, when replicating on low-latency, high-bandwidth networks like LANs and high-end WANs, it may be faster to disable RDC and cross-file RDC and simply replicate file changes without the chunking operations. Windows Server 2012 R2 offers this option through the Set-DfsrConnection and Add-DfsrConnection Windows PowerShell cmdlets with the -DisableCrossFileRdc parameter.
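For example, to turn off cross-file RDC on the existing connection between my sample servers (the cmdlet and parameter come straight from the text above; the Boolean syntax mirrors the other DFSR cmdlets but treat it as an assumption):

# Disable cross-file RDC for the SRV01 -> SRV02 connection in replication group RG01
PS C:\> Set-DfsrConnection -GroupName "RG01" -SourceComputerName SRV01 -DestinationComputerName SRV02 -DisableCrossFileRdc $true
# On very fast networks you may also want to disable RDC itself; I believe a companion
# -DisableRdc parameter covers that (verify with Get-Help Set-DfsrConnection)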
Staging File Size Configuration
DFSR creates a staging folder for each replicated folder. This staging folder contains the marshalled files sent between servers, and allows replication without risk of interruption from subsequent handles to the file. By default, files over 256KB stage during replication, unless RDC is enabled and using its default minimum file size, in which case files over 64KB are staged.
When replicating on low-latency, high-bandwidth networks like LANs and high-end WANs, it may be faster to allow certain files to replicate without first staging. If users do not frequently reopen files after modification or addition to a content set – such as during batch processing that dumps files onto a DFSR server for replication out to hundreds of nodes without any later modification – skipping RDC and the staging process can lead to significant performance boosts. You configure this using Set-DfsrMembership and the -MinimumFileStagingSize parameter.
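A quick hedged example: to stage only files of 512 MB or larger on a sample membership, letting everything smaller replicate without staging (the Size512MB value name follows the enumeration format in the cmdlet help and is an assumption on my part):

# Raise the staging threshold so files under 512 MB skip the staging folder
PS C:\> Set-DfsrMembership -GroupName "RG01" -FolderName "RF01" -ComputerName SRV01 -MinimumFileStagingSize Size512MB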
Database Recovery
DFSR can suffer database corruption when the underlying hardware fails to write the database to disk. Hardware problems, controller issues, or write caching that prevents data from flushing to the storage medium can all cause corruption. Furthermore, when the DFSR service does not stop gracefully – such as during power loss to the underlying operating system – the database becomes “dirty”.
DB Corruption Merge Recovery
When DFSR on Windows Server 2012 and older operating systems detects corruption, it deletes the database and recreates it without contents. DFSR then walks the file system and repopulates the database with each file fenced as FRS_FENCE_INITIAL_SYNC (1). Then it triggers non-authoritative initial sync inbound from a partner server. Any file changes made on that server that had not replicated outbound prior to the corruption move to the ConflictAndDeleted or PreExisting folders, and end users perceive this as data loss, leading to help desk calls. If multiple servers experienced corruption – such as when they were all on the same hypervisor host or all using the same malfunctioning storage array – all servers may stop replicating, as they are all waiting on each other to return to a normal state. If the writable server with corruption was replicating with a read-only server, the writable server will not be able to return to a normal state.
In Windows Server 2012 R2, DFSR changes its DB corruption recovery behavior. It deletes and recreates the database, then walks the file system and populates the DB with all file records. All files are fenced with the FRS_FENCE_DEFAULT (3) flag though, marking them as normal. The service then triggers initial sync. When subsequent version vector sync reconciles the new DB content with a remote partner, DFSR handles conflicts in the usual way (last writer/creator wins) – since most (if not all) records are marked normal already though, there is no need for conflict handling on matching records. If the remote partner is read-only, DFSR skips attempting to pull changes from the remote partner (since none can come), and goes back to a healthy state.
DB Dirty Shutdown Recovery
DFSR on Windows Server 2012 and Windows Server 2008 R2 detects dirty database shutdown and pauses replication on that volume, then writes DFSR event log warning 2213:
Level: Warning
Date: 5/1/2013 13:15
Source: DFSR
Event ID: 2213
Task Category: None

The DFS Replication service stopped replication on volume C:. This occurs when a DFSR JET database is not shut down cleanly and Auto Recovery is disabled. To resolve this issue, back up the files in the affected replicated folders, and then use the ResumeReplication WMI method to resume replication.
Additional Information:
Volume: C:
GUID: <some GUID>
Recovery Steps
1. Back up the files in all replicated folders on the volume. Failure to do so may result in data loss due to unexpected conflict resolution during the recovery of the replicated folders.
2. To resume the replication for this volume, use the WMI method ResumeReplication of the DfsrVolumeConfig class. For example, from an elevated command prompt, type the following command:
wmic /namespace:\\root\microsoftdfs path dfsrVolumeConfig where volumeGuid="<some GUID>" call ResumeReplication
Until you manually resume replication via WMI or disable this functionality via the registry, DFSR does not resume. When resumed, DFSR performs operations similar to DB corruption recovery, marking the files normal and synchronizing differences. The main problem with this strategy is that far too many people missed the event and never noticed that replication had stopped. Another good reason to monitor your DFSR servers.
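If you do want to catch that condition, a quick query of the DFS Replication event log does the trick. A minimal sketch (the log name matches what DFSR registers on my servers; adjust if yours differs):

# Show the five most recent 2213 warnings from the DFS Replication event log
PS C:\> Get-WinEvent -FilterHashtable @{LogName='DFS Replication'; Id=2213} -MaxEvents 5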
In Windows Server 2012 R2, DFSR role installation sets the following registry value by default (if the value is absent, the service behaves as though it were set to this data):
Key: HKey_Local_Machine\System\CurrentControlSet\Services\DFSR\Parameters
Value [DWORD]: StopReplicationOnAutoRecovery
Data: 0
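If you prefer the old stop-and-wait behavior, you can flip the value back yourself. A hedged sketch: setting the data to 1 is my reading of the value name above, and I am assuming a service restart picks up the change:

# Re-enable the classic behavior: stop replication on dirty shutdown until manually resumed
PS C:\> Set-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Services\DFSR\Parameters' -Name StopReplicationOnAutoRecovery -Value 1
PS C:\> Restart-Service DFSR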
We performed code reviews to ensure that no issues with dirty shutdown recovery would lead to data loss; we released one hotfix for previous OSes based on this work (see KB 2780453: http://support.microsoft.com/kb/2780453) and did not find further issues here.
Furthermore, the domain controller SYSVOL replica was special-cased so that if it is the only replica on a specific volume and that volume suffers a dirty shutdown, SYSVOL always automatically recovers regardless of the registry setting. The AD admins who don’t know or care that DFSR is replicating their SYSVOL no longer have to worry about such things.
Preserved File Recovery (The Big Finish!)
DFSR uses a set of conflict-handling algorithms during initial sync and ongoing replication to ensure that the appropriate files replicate between servers.
1. During non-authoritative initial sync, cloning, or ongoing replication: files with the same name and path modified on multiple servers move to the following folder on the losing server: <rf>\Dfsrprivate\ConflictAndDeleted
2. Initial sync or cloning: files with the same name and path that exist only on the downstream server go to <rf>\Dfsrprivate\PreExisting
3. During ongoing replication: files deleted on a server move to the following folder on all other servers: <rf>\Dfsrprivate\ConflictAndDeleted
The ConflictAndDeleted folder has a 4GB first in/first out quota in Windows Server 2012 R2 (660MB in older operating systems). The PreExisting folder has no quota. When content moves to these folders, DFSR tracks it in the ConflictAndDeletedManifest.xml and PreExistingManifest.xml. DFSR deliberately mangles all files and folders in the ConflictAndDeleted folder with version vector information to preserve uniqueness. DFSR deliberately mangles the top-level files and folders in the PreExisting folder with version vector information to preserve uniqueness. Previous operating systems did not provide a method to recover data from these folders, and required use of out-of-band script options like RestoreDfsr.vbs (I am rather embarrassed to admit that I wrote that script; my excuse is that it was supposed to be a quick fix for a late night critsit and was never meant to live on for years. Oh well).
Windows Server 2012 R2 now includes Windows PowerShell cmdlets to recover this data. These cmdlets offer the option to either move or copy files, restore to original or a new location, restore all versions of a file or just the latest, as well as perform inventory operations.
A few samples
To see conflicted and deleted files on the H:\rf04 replicated folder:
PS C:\> Get-DfsrPreservedFiles -Path h:\rf04\DfsrPrivate\ConflictAndDeletedManifest.xml
Let’s get fancier. To see only the conflicted and deleted DOCX files and their preservation times:
PS C:\> Get-DfsrPreservedFiles -Path H:\rf04\DfsrPrivate\ConflictAndDeletedManifest.xml | Where-Object path -like *.docx | Format-Table path,preservedtime -auto -wrap
How about if we restore all files from the PreExisting folder, moving them rather than copying them, placing them back in their original location, super-fast:
PS C:\> Restore-DfsrPreservedFiles -Path H:\rf04\DfsrPrivate\PreExistingManifest.xml -RestoreToOrigin
Slick!
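And if you would rather copy the latest preserved versions out to a scratch folder instead of moving them back in place, something along these lines should work (the -RestoreToPath and -CopyFiles parameter names are my assumptions from the cmdlet help, so double-check them):

# Copy, rather than move, preserved conflict/deleted files to a separate recovery folder
PS C:\> Restore-DfsrPreservedFiles -Path H:\rf04\DfsrPrivate\ConflictAndDeletedManifest.xml -RestoreToPath H:\RestoredFiles -CopyFiles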
Summary
We did a ton of work in DFSR in the past few months to address many of your long-running concerns and bring DFSR into the next decade of file replication; I consider the DB cloning feature to be truly state of the art for file replication and synchronization technologies. We hope you find all of this interesting and useful. Stand by for more new blog posts on cloning, Windows PowerShell, reliability, and more – coming soon.
- Ned “there were no prequels” Pyle
To see all of the posts in this series, check out the What’s New in Windows Server & System Center 2012 R2 archive.