Channel: Storage at Microsoft

Introducing DFS Namespaces Windows PowerShell Cmdlets


Overview

In this blog post, let me introduce you to the new DFS Namespaces (DFSN) Windows PowerShell cmdlets that we have added in Windows Server 2012.

Windows PowerShell is designed for automation and complex scripting, in part due to its powerful pipelining feature. Its object-based design keeps the user experience focused on the tasks you want to accomplish, and allows sophisticated processing of input and output data. If you need further convincing of its value for your system administration needs, check out the extensive Windows PowerShell TechNet material.

The DFSN PowerShell cmdlets cover most of the DFSN server-side management functionality that was previously available through the dfsutil command. Note that DFSN client-side management functionality (for example, flushing the local referral cache) is not yet supported through the new cmdlets; that functionality continues to be available through the "dfsutil client" command-line tool. The really nice thing about the DFSN cmdlets is that, while they run only on Windows Server 2012 or Windows 8 computers, you can use them to manage DFS namespaces hosted on previous Windows Server versions: Windows Server 2008 and Windows Server 2008 R2.

I suspect most of you reading this blog post have used DFS Namespaces for a while, so the namespace concepts and terms in the following discussion should be very familiar. In case you need to refresh your terminology, Overview of DFS Namespaces is a good one to refer to.

At the top level, the new cmdlets fall into one of the following categories; here is a quick tour:

  1. Namespace-scoped: Each DFS namespace presents one virtual folder view. This set of cmdlets operates on one or more such DFS namespace(s).
  2. Namespace root target-scoped: Each DFS namespace can have one or more root targets - think of a root target as an SMB share presenting the namespace folder structure. This set of cmdlets acts on a root target.
  3. Namespace server-scoped: Each DFS namespace server can host one or more namespace root targets. This set of cmdlets acts on a namespace server at the aggregate level.
  4. Namespace folder-scoped: Each DFS namespace consists typically of a number of namespace folders organized in a virtual folder hierarchy. This set of cmdlets acts on one or more such namespace folders.
  5. Namespace folder target-scoped: Each “bottom-most” DFS namespace folder or a leaf node in the folder hierarchy is associated with one or more folder targets where the real data is stored (such folders with associated folder targets were referred to as “links” in previous versions). This set of cmdlets acts on one or more such namespace folder targets.

Let us explore each of these categories of cmdlets in the same order.

Namespace-scoped cmdlets

This set of cmdlets provides Get/Set/New/Remove operations (called verbs in PS parlance) on a "DfsnRoot" object, which represents a DFS namespace.

  • Get-DfsnRoot – Retrieves the configuration settings for the specified namespace, or for all known namespaces.
  • New-DfsnRoot – Creates a new DFS namespace with the specified configuration settings.
  • Set-DfsnRoot – Modifies the configuration settings of the specified existing DFS namespace.
  • Remove-DfsnRoot – Deletes an existing DFS namespace.

Here are a few examples:

  • Get the namespace information for a standalone namespace \\Contoso_fs\Public

PS C:\> Get-DfsnRoot -Path \\Contoso_fs\Public | Format-List

Path          : \\Contoso_fs\Public
Description   : Standalone test namespace
Type          : Standalone
State         : Online
Flags         : Site Costing
TimeToLiveSec : 300

  • Create a new Windows Server 2008 mode namespace

PS C:\> New-DfsnRoot -Path \\corp.Contoso.com\Sales -TargetPath \\contoso_fs\Sales -Type Domainv2 | Format-List

Path          : \\corp.Contoso.com\Sales
Description   : Domain-based test namespace
Type          : Domain V2
State         : Online
Flags         :
TimeToLiveSec : 300

Note: TargetPath is the path to an SMB share to be used as the root target for this namespace. It is just as easy to create the SMB share using Windows PowerShell; run the following on the file server \\contoso_fs:

New-Item C:\Sales_root_folder -Type Directory

New-SmbShare -Name Sales -Path C:\Sales_root_folder

  • Modify the namespace settings: enable root scalability mode and set the referral TTL to 400 seconds

PS C:\> Set-DfsnRoot -Path \\corp.Contoso.com\Sales -EnableRootScalability $true -TimeToLive 400

Path          : \\corp.Contoso.com\Sales
Description   : Domain-based test namespace
Type          : Domain V2
State         : Online
Flags         : Root Scalability
TimeToLiveSec : 400

  • Remove a domain-based namespace

PS C:\> Remove-DfsnRoot -Path \\corp.Contoso.com\Sales -Force

Namespace Root Target-scoped

These cmdlets support the same Get/Set/New/Remove operations, but on root targets. Remember that a domain-based DFS namespace can have multiple active root targets, which is one reason domain-based namespaces are generally the recommended option.

  • Get-DfsnRootTarget – By default, retrieves all the configured root targets for the specified namespace root, including the configuration settings of each root target.
  • New-DfsnRootTarget – Adds a new root target with the specified configuration settings to an existing DFS namespace.
  • Set-DfsnRootTarget – Sets configuration settings to specified values for a namespace root target of an existing DFS namespace.
  • Remove-DfsnRootTarget – Deletes an existing namespace root target of a DFS namespace.

Here are a few examples:

  • Retrieve the namespace root target information for the domain-based namespace \\corp.Contoso.com\Sales; in this example it has two root targets.

PS C:\> Get-DfsnRootTarget -Path \\corp.Contoso.com\Sales | Format-List

Path                  : \\corp.Contoso.com\Sales
TargetPath            : \\contoso_fs\Sales
State                 : Online
ReferralPriorityClass : sitecost-normal
ReferralPriorityRank  : 0

Path                  : \\corp.Contoso.com\Sales
TargetPath            : \\contoso_fs_2\Sales
State                 : Online
ReferralPriorityClass : sitecost-normal
ReferralPriorityRank  : 0

  • Add a new root target \\contoso_fs_3\Sales to an existing domain-based namespace, \\corp.Contoso.com\Sales

PS C:\> New-DfsnRootTarget -Path \\corp.Contoso.com\Sales -TargetPath \\contoso_fs_3\Sales

Path                  : \\corp.Contoso.com\Sales
TargetPath            : \\contoso_fs_3\Sales
State                 : Online
ReferralPriorityClass : sitecost-normal
ReferralPriorityRank  : 0

  • Set the referral priority class of root target \\contoso_fs_2\Sales to global-low

PS C:\> Set-DfsnRootTarget -Path \\corp.Contoso.com\Sales -TargetPath \\contoso_fs_2\Sales -ReferralPriorityClass globallow

Path                  : \\corp.Contoso.com\Sales
TargetPath            : \\contoso_fs_2\Sales
State                 : Online
ReferralPriorityClass : global-low
ReferralPriorityRank  : 0

  • Remove a domain-based namespace root target \\contoso_fs_2\Sales for a domain namespace \\corp.Contoso.com\Sales

PS C:\> Remove-DfsnRootTarget -Path \\corp.Contoso.com\Sales –TargetPath \\contoso_fs_2\Sales

Namespace server-scoped

These two cmdlets operate on the namespace server as a whole, supporting Get and Set operations on the "DfsnServerConfiguration" object.

  • Get-DfsnServerConfiguration – Retrieves the configuration settings of the specified DFS namespace server.
  • Set-DfsnServerConfiguration – Modifies configuration settings for the specified server hosting DFS namespaces.

Here are a few examples:

  • Retrieve the namespace server configuration

PS C:\> Get-DfsnServerConfiguration -ComputerName contoso_fs | Format-List

ComputerName              : contoso_fs
LdapTimeoutSec            : 30
PreferLogonDC             :
EnableSiteCostedReferrals :
EnableInsiteReferrals     :
SyncIntervalSec           : 3600
UseFqdn                   : False

  • Set the Sync interval for the namespace server contoso_fs to 7200 seconds

PS C:\> Set-DfsnServerConfiguration -ComputerName contoso_fs -SyncIntervalSec 7200 | Format-List

ComputerName              : contoso_fs
LdapTimeoutSec            : 30
PreferLogonDC             :
EnableSiteCostedReferrals :
EnableInsiteReferrals     :
SyncIntervalSec           : 7200
UseFqdn                   : False

Namespace Folder-scoped

This set of cmdlets operates on DFS namespace folder paths. In addition to Get/Set/New/Remove operations on a "DfsnFolder" object, renaming (Move) is also supported. This set of cmdlets also manages enumerate access on namespace folders: retrieving (Get), granting (Grant), revoking (Revoke), and removing (Remove) access entries.

  • New-DfsnFolder – Creates a new folder in an existing DFS namespace with the specified configuration settings.
  • Get-DfsnFolder – Retrieves configuration settings for the specified, existing DFS namespace folder.
  • Set-DfsnFolder – Modifies settings for the specified existing DFS namespace folder with folder targets.
  • Move-DfsnFolder – Moves an existing DFS namespace folder to another specified location in the same DFS namespace.
  • Grant-DfsnAccess – Grants access rights to the specified user/group account for the specified DFS namespace folder with folder targets.
  • Get-DfsnAccess – Retrieves the currently configured access rights for the specified DFS namespace folder with folder targets.
  • Revoke-DfsnAccess – Revokes the right of the specified user or group account to access a DFS namespace folder with folder targets or enumerate its contents.
  • Remove-DfsnAccess – Removes the specified user/group account from the access control list (ACL) of the DFS namespace folder with folder targets.
  • Remove-DfsnFolder – Deletes an existing DFS namespace folder with a folder target.

Here are some examples for this set of cmdlets:

  • Create a new namespace folder data1 under a domain-based namespace \\corp.Contoso.com\Sales pointing to a folder target of \\contoso_fs\df1 and using client failback mode

PS C:\> New-DfsnFolder -Path \\corp.Contoso.com\Sales\data1 -TargetPath \\contoso_fs\df1 -Description "My Data set 1" -EnableTargetFailback $true | Format-List

Path          : \\corp.Contoso.com\Sales\data1
Description   : My Data set 1
State         : Online
Flags         : Target Failback
TimeToLiveSec : 300

  • Get the properties of a namespace folder data1

PS C:\> Get-DfsnFolder -Path \\corp.Contoso.com\Sales\data1 | Format-List

Path          : \\corp.Contoso.com\Sales\data1
Description   : My Data set 1
State         : Online
Flags         : Target Failback
TimeToLiveSec : 300

  • Set the EnableInsiteReferrals property of a namespace folder data1 (this example of combining target failback with in-site-only referrals makes practical sense only if there are multiple folder targets for data1 that are in the same site as the client)

PS C:\> Set-DfsnFolder -Path \\corp.Contoso.com\Sales\data1 -EnableInsiteReferrals $true | Format-List

Path          : \\corp.Contoso.com\Sales\data1
Description   : My Data set 1
State         : Online
Flags         : {Target Failback, Insite Referrals}
TimeToLiveSec : 300

  • Rename a namespace folder data1 to dataset1

PS C:\> Move-DfsnFolder -Path \\corp.Contoso.com\Sales\data1 -NewPath \\corp.Contoso.com\Sales\dataset1 -Force

  • Grant enumerate access to User22 for the folder dataset1

PS C:\> Grant-DfsnAccess -Path \\corp.Contoso.com\Sales\dataset1 -AccountName Contoso\User22 | Format-List

Path        : \\corp.Contoso.com\Sales\dataset1
AccountName : Contoso\User22
AccessType  : enumerate

  • Get ACLs for namespace folder dataset1 (let’s say User44 was also granted access to the same folder)

PS C:\> Get-DfsnAccess -Path \\corp.Contoso.com\Sales\dataset1 | Format-List

Path        : \\corp.Contoso.com\Sales\dataset1
AccountName : Contoso\User22
AccessType  : enumerate

Path        : \\corp.Contoso.com\Sales\dataset1
AccountName : Contoso\User44
AccessType  : enumerate

  • Revoke access for User22 for namespace folder dataset1

PS C:\> Revoke-DfsnAccess -Path \\corp.Contoso.com\Sales\dataset1 -AccountName Contoso\User22 | Format-List

Path        : \\corp.Contoso.com\Sales\dataset1
AccountName : Contoso\User22
AccessType  : none

Path        : \\corp.Contoso.com\Sales\dataset1
AccountName : Contoso\User44
AccessType  : enumerate

  • Remove a user User22 from the access control list for namespace folder dataset1

PS C:\> Remove-DfsnAccess -Path \\corp.Contoso.com\Sales\dataset1 -AccountName Contoso\User22

A Get-DfsnAccess on the same path would now show the following:

Path        : \\corp.Contoso.com\Sales\dataset1
AccountName : Contoso\User44
AccessType  : enumerate

  • Remove a namespace folder dataset1

PS C:\> Remove-DfsnFolder -Path \\corp.Contoso.com\Sales\dataset1 -Force

Namespace Folder Target-scoped

This set of cmdlets operates on one or more folder target(s) of a namespace folder. Specifically, the same four operations Get/Set/New/Remove are supported on the “DfsnFolderTarget” object.

  • New-DfsnFolderTarget – Adds a new folder target with the specified configuration settings to an existing DFS namespace folder.
  • Get-DfsnFolderTarget – Retrieves configuration settings of the folder targets of an existing DFS namespace folder.
  • Set-DfsnFolderTarget – Modifies settings for a folder target of an existing DFS namespace folder.
  • Remove-DfsnFolderTarget – Deletes a folder target of an existing DFS namespace folder.

Here are some examples for this set of cmdlets:

  • Add a new namespace folder target \\contoso_fs2\df1 for the namespace folder \\corp.Contoso.com\Sales\dataset1

PS C:\> New-DfsnFolderTarget -Path \\corp.Contoso.com\Sales\dataset1 -TargetPath \\contoso_fs2\df1 | Format-List

Path                  : \\corp.Contoso.com\Sales\dataset1
TargetPath            : \\contoso_fs2\df1
State                 : Online
ReferralPriorityClass : sitecost-normal
ReferralPriorityRank  : 0

  • Retrieve all the folder targets for the namespace folder \\corp.Contoso.com\Sales\dataset1

PS C:\> Get-DfsnFolderTarget -Path \\corp.Contoso.com\Sales\dataset1 | Format-List

Path                  : \\corp.Contoso.com\Sales\dataset1
TargetPath            : \\contoso_fs\df1
State                 : Online
ReferralPriorityClass : sitecost-normal
ReferralPriorityRank  : 0

Path                  : \\corp.Contoso.com\Sales\dataset1
TargetPath            : \\contoso_fs2\df1
State                 : Online
ReferralPriorityClass : sitecost-normal
ReferralPriorityRank  : 0

  • Set the folder target state for \\contoso_fs2\df1 to offline

PS C:\> Set-DfsnFolderTarget -Path \\corp.Contoso.com\Sales\dataset1 -TargetPath \\contoso_fs2\df1 -State Offline | Format-List

Path                  : \\corp.Contoso.com\Sales\dataset1
TargetPath            : \\contoso_fs2\df1
State                 : Offline
ReferralPriorityClass : sitecost-normal
ReferralPriorityRank  : 0

  • Remove the folder target \\contoso_fs2\df1 for the namespace folder \\corp.Contoso.com\Sales\dataset1

PS C:\> Remove-DfsnFolderTarget -Path \\corp.Contoso.com\Sales\dataset1 -TargetPath \\contoso_fs2\df1 -Force

Conclusion

I hope this gave you a good overview of the new DFSN cmdlets in Windows Server 2012, and I hope you will start using them soon!

Just be sure to download the "Windows Server 2012 and Windows 8 client/server readiness cumulative update" before you start working with the DFSN cmdlets, as the update includes a couple of fixes related to them. I am told that this update should eventually be available as a General Distribution Release (GDR) on Windows Update, but why wait? You can download it today and start playing with the cmdlets!


Server for NFS PowerShell Cmdlets


 

This post covers the Server for NFS PowerShell cmdlets in Windows Server 2012, with a brief description of each.

You get the Server for NFS PowerShell cmdlets by installing the "Server for NFS" role service from File and Storage Services, or by installing "Services for Network File System Management Tools" from the Remote Server Administration Tools.

You can list all the Server for NFS cmdlets by running the following command:

Get-Command -Module NFS

The following table lists Server for NFS cmdlets grouped by functionality.

 

Share

  • Get-NfsShare – Lists all the shares on the server along with their properties.
  • New-NfsShare – Creates a new share on the server. The share can be a standard share or a clustered share.
  • Set-NfsShare – Modifies the share configuration of standard as well as clustered shares.
  • Remove-NfsShare – Deletes NFS shares from the server. The share can be a standard or clustered share.

Share Permission

  • Get-NfsSharePermission – Retrieves the permissions on a share.
  • Grant-NfsSharePermission – Adds or modifies share permissions. Permissions can be granted to individual hosts, globally (all machines), or to groups such as clientgroups or netgroups. Allows granting read-only, read-write, or no access to clients.
  • Revoke-NfsSharePermission – Removes permissions for a given client on a share.

Server Configuration

  • Get-NfsServerConfiguration – Retrieves the Server for NFS configuration.
  • Set-NfsServerConfiguration – Modifies the Server for NFS configuration.

Client Configuration

  • Get-NfsClientConfiguration – Retrieves the Client for NFS configuration.
  • Set-NfsClientConfiguration – Modifies the Client for NFS configuration. Multiple client configuration properties can be modified at the same time.

Netgroup Store

  • Get-NfsNetgroupStore – Retrieves the netgroup source configuration on the server. The server can be configured to use Active Directory, an RFC 2307-compliant LDAP server, or a NIS server as its netgroup source.
  • Set-NfsNetgroupStore – Modifies the netgroup source configuration on the server. The netgroup source can be Active Directory, an RFC 2307-compliant LDAP server, or a NIS server.

Identity Mapping Store

  • Get-NfsMappingStore – Retrieves the identity mapping source on the server or client. The identity mapping source can be local files (such as passwd and group files), Active Directory, an RFC 2307-compliant LDAP server, or a User Name Mapping server.
  • Set-NfsMappingStore – Modifies the mapping store on the NFS server or client.
  • Install-NfsMappingStore – Installs and configures an Active Directory Lightweight Directory Services (AD LDS) server as the mapping store. The cmdlet installs the AD LDS role, creates an instance for the mapping store, and adds the schema required for the UID and GID attributes of user/group objects.
  • Test-NfsMappingStore – Verifies that the mapping store on the server has been configured correctly. It verifies that the mapping store is reachable, checks that the necessary schema is installed on the server, and, for a domain-based mapping store, checks that the domain functional level is Windows Server 2003 R2 or above.

Identity Mapping

  • Get-NfsMappedIdentity – Lists all the mappings between a user's UNIX and Windows accounts from the identity mapping source. The cmdlet can retrieve mappings from various mapping sources: Active Directory, an RFC 2307-compliant LDAP server, or mapping files (passwd/group files). If the mapping source is not specified, the cmdlet uses the server's mapping source configuration to retrieve the information.
  • New-NfsMappedIdentity – Creates a new mapping between a Windows user (or group) account and the corresponding UNIX identifier. The mapping store can be Active Directory or an RFC 2307-compliant LDAP server. The user or group account is created if it does not exist.
  • Set-NfsMappedIdentity – Modifies or sets a mapping between a Windows user/group account and UNIX identifiers. The mapping store can be Active Directory or an RFC 2307-compliant LDAP server.
  • Remove-NfsMappedIdentity – Removes a mapping from a Windows user or group account.
  • Resolve-NfsMappedIdentity – Checks that the server can resolve a mapping from a given user or group account name to a UNIX identifier, and vice versa. The server uses its identity mapping source configuration to retrieve the mapping.
  • Test-NfsMappedIdentity – Verifies existing mapped identities and checks that they are configured correctly. The cmdlet checks for duplicate UIDs/GIDs and also validates the group membership of user accounts as per the GID assignment.

Client

  • Get-NfsMountedClient – Enumerates the clients connected to Server for NFS using NFS v4.1.
  • Revoke-NfsMountedClient – Revokes a client's NFS v4.1 connection to Server for NFS.

Clientgroup

  • Get-NfsClientgroup – Lists all the clientgroups on the Server for NFS.
  • New-NfsClientgroup – Creates a new clientgroup on the server. Members can also be added to the new clientgroup at the time of creation.
  • Set-NfsClientgroup – Adds or removes members from a clientgroup. Multiple members can be added or removed in a single command.
  • Rename-NfsClientgroup – Renames a clientgroup.
  • Remove-NfsClientgroup – Deletes a clientgroup from the server.

Netgroup

  • Get-NfsNetgroup – Enumerates netgroups configured in Active Directory, an RFC 2307-compliant LDAP server, or a NIS server.
  • New-NfsNetgroup – Creates a new netgroup in Active Directory or an LDAP server. Members can also be added to the netgroup at the time of creation.
  • Set-NfsNetgroup – Adds or removes members from a netgroup. The netgroup store can be Active Directory or an RFC 2307-compliant LDAP server.
  • Remove-NfsNetgroup – Deletes a netgroup from Active Directory or an LDAP server.

Lock

  • Get-NfsClientLock – Lists the locks held by a client on the server. The cmdlet lists both NLM and NFS v4.1 byte-range locks.
  • Revoke-NfsClientLock – Revokes locks on a given set of files, or locks held by a given client computer.

Open File

  • Get-NfsOpenFile – Enumerates files opened using NFS v4.1 on Server for NFS.
  • Revoke-NfsOpenFile – Revokes open state and handles for files opened by clients using NFS v4.1 against Server for NFS.

Session

  • Get-NfsSession – Lists the currently open NFS v4.1 sessions on Server for NFS.
  • Disconnect-NfsSession – Disconnects an NFS v4.1 session on Server for NFS.

Statistics

  • Get-NfsStatistics – Enumerates NFS and MOUNT statistics on the server.
  • Reset-NfsStatistics – Resets the statistics on Server for NFS.
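To tie a few of these cmdlets together, here is a minimal sketch of creating a share and granting a host access. The share name, folder path, and client address are made up for this example:

```powershell
# Create a folder and expose it as an NFS share
# (share name "Export" and path C:\shares\Export are examples only)
New-Item -Path C:\shares\Export -ItemType Directory
New-NfsShare -Name "Export" -Path "C:\shares\Export"

# Grant one host read-write access to the new share
Grant-NfsSharePermission -Name "Export" -ClientName "192.168.1.10" -ClientType "host" -Permission "readwrite"

# Review the resulting permissions
Get-NfsSharePermission -Name "Export"
```

These cmdlets require the Server for NFS role service, so run them on the NFS server itself or remotely against it.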

 

Feedback

Please send any feedback you might have to nfsfeed@microsoft.com.

Backup and Restore of Server for NFS Share Settings


Introduction

Windows Server 2012 ships with a rich set of PowerShell cmdlets to perform most Server for NFS share management operations. Using these cmdlets as building blocks, administrators can easily build backup and restore scripts for NFS share settings and permissions that best suit their needs. This post demonstrates how the Export-CliXml and Import-CliXml cmdlets can be used to back up and restore Server for NFS share settings.

Server for NFS share management cmdlets

Let us take a quick look at the Server for NFS share management cmdlets in Windows Server 2012. If you are not already familiar with these cmdlets, see their help topics for details on using them.

Share cmdlets

 

  • Get-NfsShare – Enumerates shares on the server
  • New-NfsShare – Creates a new share
  • Remove-NfsShare – Deletes one or more shares
  • Set-NfsShare – Modifies share settings

Share permission cmdlets

 

  • Get-NfsSharePermission – Enumerates NFS share permissions
  • Grant-NfsSharePermission – Adds or modifies share permissions
  • Revoke-NfsSharePermission – Removes permission for a client

Exporting Shares Settings

The Export-CliXml cmdlet exports objects to XML files. Similarly, the Import-CliXml cmdlet imports the content of an XML file (generated using Export-CliXml) back into the corresponding objects in Windows PowerShell. If you are not familiar with these cmdlets, refer to the PowerShell help.

To export all the NFS shares on the server, invoke Get-NfsShare and pipe the results to the Export-CliXml cmdlet.

Get-NfsShare | Export-CliXml -Path c:\shares.xml


Running the above command saves only the share settings into the file; it does not save share permissions. Exporting share permissions is covered in the next section, since permissions are handled by a separate set of cmdlets.

The following share settings are exported to the file: Name, NetworkName, Path, IsClustered, IsOnline, AnonymousAccess, AnonymousGid, AnonymousUid, Authentication, and UnmappedUserAccess.

To export a single share, use the following command. You can also filter the shares you want to save by using wildcards for the share name, path, and network name; see the Get-NfsShare help for more details.

Get-NfsShare -name shareA | Export-CliXml -Path c:\shares.xml

 

Exporting Share Permissions

To save the permissions of all shares, first use Get-NfsShare to enumerate all the shares on the server. Use a PowerShell pipeline to pass the result to Get-NfsSharePermission, which gets the permissions for each share. The result of these two commands can then be saved to an export file using the Export-CliXml cmdlet.

Get-NfsShare | Get-NfsSharePermission | Export-CliXml -Path c:\SharePermissions.xml


Similarly, to export the permissions of a single share, specify the share name in Get-NfsSharePermission.

Get-NfsSharePermission shareA | Export-CliXml -Path c:\SharePermissions.xml
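Putting the two exports together, a complete backup can be a short script like the following. The destination folder and file-name convention are assumptions for this sketch:

```powershell
# Back up both share settings and share permissions,
# stamping the file names with the current date
$backupDir = "C:\NfsBackup"              # example location
$stamp     = Get-Date -Format "yyyyMMdd"
New-Item -Path $backupDir -ItemType Directory -Force | Out-Null

# Export share settings and share permissions to separate files
Get-NfsShare | Export-CliXml -Path (Join-Path $backupDir "shares-$stamp.xml")
Get-NfsShare | Get-NfsSharePermission |
    Export-CliXml -Path (Join-Path $backupDir "permissions-$stamp.xml")
```

Keeping the two exports in date-stamped pairs makes it easy to restore a consistent snapshot of both settings and permissions later.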

 

Importing Shares Settings

If you have an export file generated using Export-CliXml that contains the share configuration, follow these steps to import it on a Server for NFS running Windows Server 2012.

You can use either New-NfsShare or Set-NfsShare when performing the import operation. Use Set-NfsShare if the share already exists on the server and you would like to overwrite it with the settings from the exported file.

To create new shares on the server from the export file, use Import-CliXml to read the file and create objects that can be piped to New-NfsShare. The following example creates new shares on the server.

Import-CliXml c:\shares.xml | New-NfsShare



If the shares already exist on the server, their settings can be restored from the export file by piping the output of Import-CliXml to the Set-NfsShare cmdlet.

Import-CliXml c:\shares.xml | Set-NfsShare

Here is an example of how this works. The server has a share named "ShareA" whose "AnonymousGid" and "AnonymousUid" properties are both -2. The export file "shares.xml" is imported and the share is modified using the Set-NfsShare cmdlet, which changes the "AnonymousGid" and "AnonymousUid" properties of the share to 100 and 200, respectively.

 

Importing Share Permissions

Before we talk about importing share permissions, let's briefly look at the cmdlets used to perform this operation.

A client can be granted read-write, read-only, or no-access permission on a share. The client referred to here can be a host machine or a group such as a netgroup or clientgroup. The Grant-NfsSharePermission cmdlet adds permission for a client if it doesn't already exist; if permission for the client is already present on the share, the same Grant-NfsSharePermission cmdlet modifies it.

To remove a client from the list of permissions on a share, use Revoke-NfsSharePermission.

Now let's get back to importing share permissions from the file. If you have an export file generated using Export-CliXml that contains the permissions for the shares, use this command to import those permissions.

Import-CliXml C:\SharePermissions.xml | foreach { $_ | Grant-NfsSharePermission }

The import command used above has the following implications:

  1. Permissions are added to the share if they do not already exist
  2. If a permission exists both on the share and in the export file, the client permission on the share is overwritten
  3. If a permission exists only on the share but not in the export file, the client permission on the share is retained

Note: If you don’t want to retain existing share permissions on the share, remove them before importing from the file.
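One way to clear the existing permissions first is to enumerate them and revoke each one. This sketch assumes the objects returned by Get-NfsSharePermission expose Name, ClientName, and ClientType properties matching the parameters of Revoke-NfsSharePermission, as the export/import pipeline above suggests:

```powershell
# Remove every existing permission entry from shareA before importing
# (property names on the permission objects are assumed, not confirmed)
Get-NfsSharePermission shareA | foreach {
    Revoke-NfsSharePermission -Name $_.Name -ClientName $_.ClientName -ClientType $_.ClientType
}
```

The built-in "All Machines" (global) entry may behave differently; verify the result with Get-NfsSharePermission before importing.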

Example:

In this example, shareA has the following share permissions:

  • Host machine "nfs-node1" has read-write access, with root access disabled
  • The global permission, also known as "All Machines", has read-only access, with root access enabled

The exported file contains:

  • Host machine "nfs-fileserver" with read-only permission
  • "All Machines" with no access, and root access disabled

After the import operation, the share permissions for shareA are as follows:

  • The permission for host "nfs-fileserver" is added to the share
  • The "All Machines" permission changes from read-only with root access enabled to no access with root access disabled
  • The permission for "nfs-node1" is retained and not modified

 

 

 

Feedback

Please send any feedback you might have to nfsfeed@microsoft.com.

 

SMI-S Requirements are live on MSDN


Whew! It took a while, but the SMI-S requirements for Windows Server 2012 and System Center VMM are now live on MSDN. These will carry forward to the next releases.

Introducing Work Folders on Windows Server 2012 R2


This post is a part of the nine-part “What’s New in Windows Server & System Center 2012 R2” series that is featured on Brad Anderson’s In the Cloud blog.  Today’s blog post covers Windows Server 2012 R2 Work Folders and how it applies to Brad’s larger topic of “People-centric IT.”  To read that post and see the other technologies discussed, read today’s post:  “Making Device Users Productive and Protecting Corporate Information.” 

Hello, my name is Nir Ben-Zvi and I work in the Windows Server team. I’m very excited to introduce to you Windows Server Work Folders, which is a new file server based sync solution in Windows Server 2012 R2 and Windows 8.1.

During Windows Server 2012 R2 planning, we noticed two converging trends around managing and protecting corporate data:

  • Users: “I need to work from anywhere on my different devices”
  • IT: “I’d like to empower my Information Workers (users) while reducing information leakage and keeping control of the corporate data that is sprawled across devices”

Work Folders enables IT administrators to give Information Workers the ability to sync their work data on all their devices, wherever they are, while remaining in compliance with company policies. This is done by syncing user data from devices to on-premises file servers, which are now extended with a new sync protocol.

Work Folders as Experienced by an Information Worker

To show how this works, here’s an example of how an information worker, Joe, might use Work Folders to separate his work data from his personal data while having the ability to work from any device: When Joe saves a document on his work computer in the Work Folders directory, the document is synced to an IT-controlled file server. When Joe returns home, he can pick up his Surface RT (where the document is already synced) and head to the beach. He can work on the document offline, and when he returns home the document is synced back with the file server and all of the changes are available to him the next day when he returns to the office.

Does this look familiar? Indeed, this is how consumer storage services such as SkyDrive and business collaboration services such as SkyDrive Pro work. We kept the user interaction simple and familiar so that little user education is required. The biggest difference from SkyDrive or SkyDrive Pro is that the centralized storage for Work Folders is an on-premises file server running Windows Server 2012 R2, but we’ll get to that a little later in this post.

Work Folders as Experienced by an IT Admin

IT administrators can use Work Folders to gain more control over corporate data and user devices and centralize user work data so that they can apply the appropriate processes and tools to keep their company in compliance. This can range from simply having a copy of the data if the user leaves the company to a wide range of capabilities such as backup, retention, classification and automated encryption.

For example, when a user authors a sensitive document in Work Folders on their work PC, it gets synced to the file server. The file server then can automatically classify the document based on content, if configured using File Server Resource Manager, and encrypt the document using Windows Rights Management Services before syncing the document back to all the user’s devices. This allows a seamless experience for the user while keeping the organization in compliance and preventing leakage of sensitive information.

For more details about Work Folders deployment, see the blog post Deploying Work Folders in your lab and the Channel 9 video.

Work Folders Capabilities

Work Folders is part of the People-Centric IT pillar in Windows Server 2012 R2. This pillar includes other important capabilities such as Workplace Join, Web Application Proxy and Device management. While these capabilities are integrated, they are also independent so that you can use them as standalone solutions to get immediate value as you deploy each capability.

Our main design focus around Work Folders was to keep it simple for the Information Workers while allowing IT administrators to use the familiar low cost, high scale Windows file server with all the rich functionality available on the backend from high availability to comprehensive data management.

Here is some of the functionality that Work Folders includes:

  • Provide a single point of access to work files on a user’s work and personal PCs and devices (Windows 8.1 and Windows RT 8.1, with immediate plans to follow up with Windows 7 and iPad support and other devices likely in the future)
  • Access work files while offline and sync with the central file server when the PC or device next has Internet or network connectivity
  • Maintain data encryption in transit as well as at rest on devices and allow corporate data wipe through device management services such as Windows Intune
  • Use existing file server management technologies such as file classification and folder quotas to manage user data
  • Specify security policies to instruct user PCs and devices to encrypt Work Folders and use a lock screen password, for example
  • Use Failover Clustering with Work Folders to provide a high-availability solution

I should mention a few scoping decisions that we made in this release so that we could complete Work Folders in the short release cycle for Windows Server 2012 R2:

  • Backend storage is provided by on-premises file servers, and Work Folders must be stored on local storage on the file server (e.g., data can be on local shares on a Windows Server 2012 R2 file server)
  • Users sync to their own folder on the file server - there is no support for syncing arbitrary file shares (e.g.: sync the sales demos share to my device)
  • Work Folders doesn’t provide collaboration functionality such as sharing sync files or folders with other users (we recommend using SkyDrive Pro if you need document collaboration features)

How Work Folders Compares to Other Microsoft Sync Technologies

Finally, I’d like to discuss how Work Folders fits in with other sync solutions that Microsoft provides, mainly SkyDrive and SkyDrive Pro. As described above, Work Folders provides a solution for customers who prefer to use traditional Windows file servers as the backend storage for the corporate data synced from users’ devices. This works well for organizations that are already using a home folders or Folder Redirection solution, or customers that already have an established practice for managing and storing user data on file servers.

For customers that use SharePoint, SkyDrive Pro is a great solution that provides additional functionality, ranging from rich collaboration features to cloud service availability using Office 365.

The table below shows the different options:

 

|   | Consumer / personal data | Individual work data | Team / group work data | Personal devices | Access protocol | Data location |
| --- | --- | --- | --- | --- | --- | --- |
| SkyDrive | X |   |   | X | HTTPS | Public cloud |
| SkyDrive Pro |   | X | X | X | HTTPS | SharePoint / Office 365 |
| Work Folders |   | X |   | X | HTTPS | File server |
| Folder Redirection / Client-Side Caching |   | X |   |   | SMB (only from on-prem or using VPN) | File server |

More Information

For more details about Work Folders, you can view the following presentations that are available online:

• Work Folders Overview on TechNet

• Deploying Work Folders in your lab

• Work Folders overview in TechEd 2013

• Work Folders deep dive in TechEd 2013

 

To see all of the posts in this series, check out the What’s New in Windows Server & System Center 2012 R2 archive.

Work Folders Test Lab Deployment


Hi, everyone. I’m Jane Yan, a PM on the Work Folders team. I presented a session on Work Folders with Adam Skewgar, and demoed how it works on both the client and the server (A Deep Dive into the New Windows Server Data Sync Solution). This blog post will show you, step by step, how to build the demo environment shown in the sessions. Please note that this guide uses the Preview release build, and the experience will differ slightly in the final RTM release.

Overview

Work Folders is a new feature introduced in Windows Server 2012 R2 that enables users to access their work-related files on any device configured with Work Folders, whether or not the device is joined to a domain, and whether it is connected directly to the corpnet or over the Internet. Work Folders is available in Windows Server 2012 R2 Preview and Windows 8.1 Preview. This step-by-step guide uses the Preview release for both the server and the client.

Topology

The simplest setup for lab test of Work Folders requires the following computers or VMs:

  1. Active Directory Domain Services domain controller (DC)
  2. File server running Windows Server 2012 R2
  3. 2 client PCs running Windows 8.1 or Windows RT 8.1 (to observe documents sync between 2 devices)

For lab testing, VMs are more convenient, so I’ll describe the end-to-end setup using VMs. This test environment does not require you to publish any URLs for Work Folders.

Express lane

This section provides a checklist for setting up the lab environment; detailed procedures are covered in the later sections.

VM setup

If you are familiar with setting up VMs, skip to the next section. By the end of this section, you will have a domain set up, with the server and one client machine joined to the domain.

Configure Network

In the Hyper-V Manager console, create a Virtual Switch marked as Private.

Configure the VMs to use the Private network.

DC setup

    1. Create a VM using Windows Server 2012 R2
    2. Rename the VM to DC.
    3. Configure the IP of the server as 10.10.1.10
    4. After the VM setup, open Server Manager, and then add the following roles:
  • Active Directory Domain Services
  • DHCP Server (Note: this role is optional. You can also configure a static IP for each VM without enabling DHCP)
  • DNS Server

    5. Complete the wizard, then click the “Promote this server to a domain controller” link.
    6. Use the wizard to create a new forest named “Contoso.com”, and configure the DC appropriately.
    7. Add a new scope in DHCP, so that other machines on the network can get an IP address automatically. Note: this is optional; you can also manually configure other machines with static IPs.
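
    If you prefer to script the DC setup, the role installation and forest creation above can be sketched in Windows PowerShell. This is a lab-only sketch; Install-ADDSForest prompts for a safe-mode administrator password and restarts the VM:

    PS C:\> Install-WindowsFeature AD-Domain-Services, DHCP, DNS -IncludeManagementTools
    PS C:\> Install-ADDSForest -DomainName contoso.com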

    Server setup

    1. Create a VM using Windows Server 2012 R2.
    2. Rename the VM to SyncSvr.
    3. Join the SyncSvr machine to the domain Contoso.com

    Client setup

    1. Create 2 VMs using Windows 8.1
    2. Rename VM1 to OfficePC
    3. Rename VM2 to HomePC
    4. Join OfficePC to the contoso.com domain.
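
    As an alternative to the System Properties UI, the rename and domain-join steps for SyncSvr and OfficePC can be scripted; a sketch, assuming you run the commands on each VM with the default Contoso\Administrator account:

    PS C:\> Rename-Computer -NewName SyncSvr -Restart
    PS C:\> Add-Computer -DomainName contoso.com -Credential contoso\administrator -Restart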

    User and Security group creation

    Work Folders is configured for domain users, so you need to create a few test users in AD. For testing purposes, let’s create 10 domain users (U1 to U10).

    We recommend controlling access to Work Folders through security groups. Let’s create one group named “Sales”, with scope “Global” and type “Security”, and add the 10 domain users (U1 to U10) to the Sales security group.
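
    If you prefer to script the account creation, here is a sketch using the Active Directory module on the DC (the password below is just a lab placeholder):

    PS C:\> 1..10 | ForEach-Object { New-ADUser -Name "U$_" -AccountPassword (ConvertTo-SecureString "P@ssw0rd1" -AsPlainText -Force) -Enabled $true }
    PS C:\> New-ADGroup -Name Sales -GroupScope Global -GroupCategory Security
    PS C:\> 1..10 | ForEach-Object { Add-ADGroupMember -Identity Sales -Members "U$_" }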

    Sync Server configuration

    Now the fun starts. For all the operations performed on the server, I’ll show the UI in Server Manager, followed by the equivalent Windows PowerShell cmdlet.

    Enabling the Work Folders role

    Using Server Manager UI

      1. Launch the Server Manager on SyncSvr.
      2. On the dashboard, click “Add roles and features”.
      3. Follow the wizard; on the Server Role selection page, select Work Folders under File and Storage Services:

    Enable-Work-Folders-Role

      4. Complete the wizard.

     

    Using PowerShell cmdlet

    PS C:\> Add-WindowsFeature FS-SyncShareService

    Create Sync Share

    Using Server Manager UI

    A sync share is the unit of management on the sync server. A sync share maps a local path, under which all the user folders are hosted, to a group of users who can access the sync share.

    1. Launch New Sync Share Wizard from Server Manager

    2. Provide the local path under which user folders will be created (type C:\SalesShare), and then click Next. clip_image005

    Note: There are 2 options to specify the local path:

    If you have a local path that is already configured as an SMB share, such as a folder redirection share, you can simply select the first option, “Select by file share”. For example, in the screenshot shown above, I had one SMB share created on this server, which points to the C:\finshare location. I can enable the path “c:\finshare” for sync by selecting the first radio button.

    If it is a brand-new server and you are only creating sync shares, you can provide the local path directly using the second option, which is what I’m doing in the demo.

    Creating a sync share simply allows users to access the data hosted on the file server through the sync protocol; in addition, the same data set can be accessed through SMB or NFS. The wizard makes this easy when creating the sync share, as you can pick the location either by providing the local path or through an SMB (or NFS) share name. Since I’m enabling the sync share directly on a local path, I will also illustrate the steps to enable SMB access to the same location, so that legacy clients without Work Folders can access the data set through SMB.

    A sync share requires the local path to be hosted on an NTFS volume. If the local path is created as part of the UI wizard or cmdlet, the permissions are inherited from the parent folder by default. After the wizard completes, additional permissions are added to the local path to ensure that users assigned to the sync share can create and access the folders and files under their user folder. The table below shows the minimum NTFS permissions required on the local path, which are configured by sync share creation:

     

    | User account | Minimum permissions required (configured by Sync Share setup) |
    | --- | --- |
    | Creator/Owner | Full control, subfolders and files only |
    | Security group of users needing sync to the share | List Folder/Read data, Create Folders/Append data, Traverse folder/execute file, Read/Write attributes – this folder only |
    | Local system | Full control, this folder, subfolders and files |
    | Administrator | Read, this folder only |

    Additional permissions may be present on the local path as a result of inheritance; you need to make sure the user accounts listed in the table have the correct permissions after the sync share is created.
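
    To verify the resulting permissions after the sync share is created, you can inspect the local path with icacls (the path matches the demo; the exact output depends on inheritance from the parent folder):

    PS C:\> icacls C:\SalesShare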

    3. Select the user folder format, choose the default user alias, and click Next.

    clip_image007

    Note: There are 2 options you can select from the UI:

    • Using the user alias. This option is selected by default, and it is compatible with other technologies such as Folder Redirection or home folders. clip_image008

    • Using alias@domain. This option ensures the uniqueness of the folder name for users across domains. clip_image009

     

    Sync only the following subfolder: By default, all the folders/files under the user folder will be synced to the devices. This checkbox allows the admin to specify a single subfolder to be synced to the devices. For example, the user folder might contain the following folders as part of a Folder Redirection deployment:

    clip_image010

    The admin can choose a subfolder such as “Document” as the folder to be synced to devices, leaving the other folders still functioning with Folder Redirection. To do so, check “Sync only the following subfolder”:

    clip_image011

    4. Provide the sync share name and description (optional), and click Next

    clip_image013

    5. Assign security groups for sync share access by clicking the Add button and entering the Sales security group (created in section User and Security group creation). Then click Next

    clip_image015

    Note: By default, the admin will not be able to access the user data on the server. If you want to have admin access to user data, uncheck the “Disable inherited permissions and grant users exclusive access to their files” checkbox.

    6. Define device policies, and then click Next.

    clip_image017

    Note: Encryption policies request that the documents in Work Folders on the client devices be encrypted with the Enterprise ID. The Enterprise ID by default is the user primary SMTP email address, (aka proxyAddresses of the user object in AD). Using a different key to encrypt Work Folders ensures that personal documents on the same device are preserved if an admin wipes Work Folders on the device (for example, if the device is stolen).

    The password policy enforces the following configuration on user PCs and devices:

     

    • Minimum password length of 6
    • Autolock screen set to be 15 minutes or less
    • Maximum password retry of 10 or less

    If the device doesn’t meet the policy, the user will not be able to configure Work Folders.

    7. Check the sync share settings, and click Create.

    clip_image019

    Using PowerShell cmdlet

    PS C:\>New-SyncShare SalesShare –path C:\SalesShare –User Contoso\Sales -RequireEncryption $true –RequirePasswordAutoLock $true

    Enable SMB access

    If you want to enable the sync share for SMB access, open Windows Explorer and navigate to the “This PC” location. Right-click the “SalesShare” folder, and select “Share with” -> “Specific people”. Add Contoso\Sales and change the permission level to “Read/Write”, as shown below:

    clip_image021

    Complete the UI by clicking on “Share” button.
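
    As an alternative to the Explorer UI, the same share can be created with the SMB cmdlets; a sketch using the demo’s share name and security group:

    PS C:\> New-SmbShare -Name SalesShare -Path C:\SalesShare -ChangeAccess "Contoso\Sales"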

    Now users can also access the data set through the UNC path.

    Note: once the server is enabled for SMB access, the server checks for data changes every 5 minutes by default. You can decrease the enumeration interval (for example, to 1 minute) by running the following cmdlet on the server:

    PS C:\> Set-SyncServerSetting -MinimumChangeDetectionMins 1

    Each enumeration increases the server load as the server scans files to detect changes; on the other hand, changes made locally on the server through SMB can only be detected at enumeration time. It is a balancing act between the change-detection delay you can tolerate and the load the server can handle.

    Client setup

    Since we prepared 2 VMs as the client machines, you will need to repeat the following setup on both client machines.

    Lab testing specific settings

    Caution: The following regkey settings are only for lab testing, and should not be configured on any production servers.

    1. Allow unsecure connection

    By default, the client always connects to the server using SSL, which requires the server to have an SSL certificate installed and configured. In lab testing, you can configure the client to use HTTP by running the following command on the client:

    Reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\WorkFolders /v AllowUnsecureConnection /t REG_DWORD /d 1

    2. Converting from Email address to Server Url

    When the user enters an email address, such as Jane@contoso.com, the client constructs the URL https://WorkFolders.contoso.com, and uses that URL to communicate with the server. In a production environment, you will need to publish the URL for the client to communicate with the server through a reverse proxy. In testing, we’ll bypass the URL publication by configuring the following regkey:

    Reg add HKCU\Software\Microsoft\Windows\CurrentVersion\WorkFolders /v ServerUrl /t REG_SZ /d http://syncServer.contoso.com

    With this key set, the client will bypass the email address the user entered, and use the URL in the regkey to establish the sync partnership.

    Also note that this key will not be present in the RTM release.

    3. Change the client polling frequency

    By default, the client device polls the server for changes every 10 minutes if there are no local changes under Work Folders. You can configure the following regkey to speed up polling to 5 seconds:

    Reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\WorkFolders /v PollingInterval /t REG_DWORD /d 5

    WorkFolders setup

    1. Users can find the setup link in Control Panel -> System and Security -> Work Folders

    clip_image023

    2. Provide the user email address, and then click Next.

     

    clip_image025

    Note: If the client machine is domain joined, the user will not be prompted for credentials.

    3. Specify where to store Work Folders on the device. clip_image027

    Note: Users cannot change the Work Folder location in the preview release of Windows 8.1. This will be changed in the final RTM release.

    4. Consent to the device policy, and then click Setup Work Folders.

    clip_image029

    Work Folders is now configured on the device. You can open File Explorer to see Work Folders.

    Work-Folders-on-client-

    Once you have configured both client machines, users can access the documents under the Work Folders location from any of their devices, and the documents will be kept in sync by Work Folders.

    Sync in action

    To test Work Folders, create a document (using Notepad or any other app) on one of the client machines and save it under the Work Folders location. In a few moments, you should see the document synced to the other client machine.

    clip_image034

    Since the sync location was also enabled for SMB access, users can also view the data on computers without Work Folders by typing the UNC path in File Explorer:

    clip_image036

    Conclusion

    I hope this blog post helps you get started with Work Folders in your test labs. If you have questions not covered here, please raise them in the comments so that I can address them in upcoming posts. Also, here are some resources on this topic that you will find helpful:

     

    Windows PowerShell cmdlet reference: http://technet.microsoft.com/en-us/library/dn296644(v=wps.630).aspx

     

    - Jane

    Storage and File Services Powershell Cmdlets Quick Reference Card For Windows Server 2012 R2 [Preview Edition]


    Hi, my name is Roiy Zysman and I’m a senior Program Manager in the Hybrid Storage Services team.

    Last year, with the introduction of Windows Server 2012, we published a File and Storage Services PowerShell cmdlets quick reference sheet.
    The motivation for this reference card was to provide a set of common Windows PowerShell cmdlet examples that span across different File and Storage Services modules to help simplify performing common tasks.

    To further explain how this reference card can be used in real life, consider the following scenario:
    Amy, the file server administrator, is asked to create a new SMB share for the HR team. While she knows she can accomplish this task using Server Manager’s wizards and dialogs, she prefers to perform this task by running a set of commands on her Windows PowerShell console. From her previous experiences, she knows that this task would require her to execute cmdlets from several different File Services related modules. She’ll probably have to start with Storage cmdlets to provision pools, disks and a volume. Then she’ll use the SMB cmdlets that create an SMB share, followed by the File Server Resource Manager cmdlets to apply quotas, and Automatic Classification and Access-Denied Assistance settings. She then might use the DFS Namespaces cmdlets to add the new share to the org’s shares namespaces. Lastly, she’d probably optimize the storage capacity by using the Data Deduplication cmdlets to turn on data deduplication on the newly created volume. There are even more File and Storage Services modules that might be included in this scenario such as iSCSI Target Server, iSCSI Initiator as well as Failover Clustering if the request demands a robust and highly available file access solution.

    Remembering every cmdlet is not trivial, so to make life easier for Amy and other file server admins out there, we’ve collected and organized a set of File and Storage Services-related cmdlets into a printable quick reference guide that you can hang on your wall, place on your desk, or fold up and store in your coat pocket for impressing people at parties.

    We had a lot of feedback from customers that liked the previous version of our reference sheet, so we went ahead and produced a new reference card for the preview version of Windows Server 2012 R2.
    The new reference card has two new sections: one for DFS Replication and one for Work Folders, a new sync technology in Windows Server 2012 R2. It is also worth mentioning that there are new storage cmdlets to manage storage tiers.

    We hope you’ll find the updated version useful and we encourage you to download and print or save a copy to use while you're exploring the File and Storage Services world.

    [Download here]

     

     

    Thanks
    Roiy Zysman

     

     

    Deploying Data Deduplication for VDI storage in Windows Server 2012 R2


    This post is a part of the nine-part “What’s New in Windows Server & System Center 2012 R2” series that is featured on Brad Anderson’s In the Cloud blog.  Today’s blog post covers Data Deduplication and how it applies to the larger topic of “Transform the Datacenter.”  To read that post and see the other technologies discussed, read today’s post: “Delivering Infrastructure as a Service (IAAS).”

    With the Windows Server 2012 R2 Preview, Data Deduplication is extended to the remote storage of the VDI workload:

     

    • CSV volume support
    • Faster deduplication of data
    • Deduplication of open (in use) files
    • Faster read/write performance of deduplicated files

    See http://blogs.technet.com/b/filecab/archive/2013/07/31/extending-data-deduplication-to-new-workloads-in-windows-server-2012-r2.aspx for more details.

    Why do I want to use Data Deduplication with VDI?

    To start with: you will save space! Deduplication rates for VDI deployments can range as high as 95% savings. Of course, that number will vary depending on the amount of user data, etc., and it will also change over the course of any one day.

    Data Deduplication optimizes files as a post-processing operation. That means that as data is added over the course of a day, it will not be optimized immediately and will take up extra space on disk. Instead, the new data will be processed by a background deduplication job. As a result, the optimization ratio of a VDI deployment fluctuates a bit over the course of a day, depending on how much new data is added. By the time the next optimization is done, savings will be high again.

    Saving space is great on its own, but it has an interesting side effect. Volumes that were always too small, but had other advantages, suddenly become viable. One such example is SSD volumes. Traditionally, you had to deploy a great many of these drives to reach volume sizes that were viable for a VDI deployment. This was of course expensive in terms of disks, but also in terms of the increased need for JBODs, power, cooling, etc. With Data Deduplication in the picture, SSD-based volumes can suddenly hold vastly more data, and we can finally utilize more of their IO capabilities without incurring additional infrastructure costs.

    In addition, because Data Deduplication consolidates files, more efficient caching mechanisms are possible. This improves the IO characteristics of the storage subsystem for some types of operations.

    As a result, we can often stretch the VM capacity of the storage subsystem without buying additional hardware or infrastructure.

    How do I deploy VDI with Data Deduplication in Windows Server 2012 R2 Preview then?

    This turns out to be relatively straightforward, assuming you know how to set up VDI, of course. The generic VDI setup will not be covered here; rather, we will cover how Data Deduplication changes things. Let’s go through the steps:

    1. Machine deployment

    First and foremost, to deploy Data Deduplication with VDI, the storage and compute responsibilities must be provided by separate machines.

     

    The good news is that the Hyper-V and VDI infrastructure can remain as it is today. The setup and configuration of both is pretty much unaltered. The exception is that all VHD files for the VMs must be stored on a file server running Windows Server 2012 R2 Preview. The storage on that file server may be directly attached disks or provided by a SAN/iSCSI.

    In the interest of ensuring that storage stays available, the file server should ideally be clustered with CSV volumes providing the storage locations for the VHD files.

    2. Configuring the File Server

    Create a new CSV volume on the File Server Cluster using your favorite tool (we would suggest System Center Virtual Machine Manager). Then enable Data Deduplication on that volume. This is very easy to do in PowerShell:

    Enable-DedupVolume C:\ClusterStorage\Volume1 –UsageType HyperV

    This is basically the same way Data Deduplication is enabled for a general file share, however it ensures that various advanced settings (such as whether open files should be optimized) are configured for the VDI workload.

    In the Windows Server 2012 R2 Preview, one additional step has to be done that will not be required in the future. The default policy for Data Deduplication is to only optimize files that are older than 3 days. This of course does not work for open VHD files, since they are constantly being updated. In the final release, Data Deduplication will address this automatically by enabling “Partial File Optimization” mode, in which it optimizes only the parts of a file that are older than 3 days. To enable this mode in the Preview, run the following command:

    Set-DedupVolume C:\ClusterStorage\Volume1 –PartialFileOptimization

    3. VDI deployment

    Deploy VDI VMs as normal using the new share as the storage location for VHDs.

    With one caveat.

    If you made a volume smaller than the amount of data you are about to deploy on it, you need some special handling. Data Deduplication runs as a post-processing operation.

    Let us say we want to deploy 120GB of VHD files (6 VHD files of 20GB each) onto a 60 GB volume with Data Deduplication enabled.

    To do this, deploy as many VMs onto the volume as will fit, leaving at least 10GB of space available. In this case, we would deploy 2 VMs (20GB + 20GB + 10GB < 60GB). Then run a manual deduplication optimization job:

    Start-DedupJob C:\ClusterStorage\Volume1 –Type Optimization

    Once this completes, deploy more VMs. Most likely, after the first optimization, there will be around 10GB of space used. That leaves room for another 2 VMs. Deploy these 2 VMs and repeat the optimization run.

    Repeat this procedure until all VMs are deployed. After this the default background deduplication job will handle future changes.
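
    The stage-then-optimize cycle above can be scripted. A rough sketch, where Deploy-VMBatch is a placeholder for whatever VM-provisioning step you use, and -Wait makes the cmdlet block until the optimization job finishes:

    PS C:\> foreach ($batch in 1..3) {
    >>    Deploy-VMBatch -Count 2 -Path C:\ClusterStorage\Volume1  # placeholder for your VM deployment tooling
    >>    Start-DedupJob C:\ClusterStorage\Volume1 -Type Optimization -Wait
    >> }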

    4. Ongoing management of Data Deduplication

    Once everything is deployed, managing Data Deduplication for VDI is no different than managing it for a general file server. For example, to get optimization savings and status:

    Get-DedupVolume | fl
    Get-DedupStatus | fl
    Get-DedupJob

    It may at times occur that a lot of new data is added to the volume and the standard background task is not able to keep up (since it stops when the server gets busy). In that case you can start a “throughput” optimization job that will simply keep going until the work is done:

    Start-DedupJob D: –Type Optimization

    Wrap-up

    Overall, deploying Data Deduplication for VDI is a relatively simple operation, though it may require some additional planning along the way.

    To see all of the posts in this series, check out the What’s New in Windows Server & System Center 2012 R2 archive.



    Extending Data Deduplication to new workloads in Windows Server 2012 R2


    This post is a part of the nine-part “What’s New in Windows Server & System Center 2012 R2” series that is featured on Brad Anderson’s In the Cloud blog.  Today’s blog post covers Data Deduplication and how it applies to the larger topic of “Transform the Datacenter.”  To read that post and see the other technologies discussed, read today’s post: “Delivering Infrastructure as a Service (IAAS).”  

    In Windows Server 2012 we introduced the new Data Deduplication feature set, which quickly became one of the standard things to consider when deploying file servers. More space on existing hardware at no cost other than running Windows Server 2012? Seems like a pretty good deal.

    Not to mention we saw great space savings on various types of real-world data at rest. Some of the most common types of data include:

     

    These numbers are based on measuring the savings rates on various customer deployments of Data Deduplication on Windows Server 2012. However, we saw some interesting trends:

    • Customers were adjusting the default policies as to which files to optimize to include more data. By default, Data Deduplication only optimizes files that have not been modified in 5 days. Customers were setting it to optimize files older than 3 days and in many cases to optimize all files regardless of age.
    • Customers were attempting to optimize their running VHD libraries… which of course doesn’t quite work correctly

    In both cases, we see people trying to put more data under Data Deduplication and to take better advantage of the huge savings seen on static VHD libraries. However, Data Deduplication in Windows Server 2012 was not really designed to deal with data that changes frequently or is in active use.

    The road to new workloads for Data Deduplication

    The customer feedback we were getting showed a clear need to reduce storage costs in private clouds (see http://blogs.technet.com/b/in_the_cloud/archive/2013/07/31/what-s-new-in-2012-r2-delivering-infrastructure-as-a-service.aspx for an overview of all the other new things around storage) and specifically to extend Data Deduplication for new workloads.

    Specifically we needed to start supporting storage of live VHDs for some scenarios.

    It turns out that there were a few key changes that had to be made to even consider using Data Deduplication for open files:

    • The read performance was pretty good already, but the write performance needed to be improved.
    • The speed at which Data Deduplication optimizes files needed to become faster to keep up with changes (churn) in files.
    • We had to allow open files to be optimized by Data Deduplication (while they are actively being modified)

    We also realized that all of this would take up resources on the server running Data Deduplication. If we were to run this on the same server as the VMs, then we’d be competing with them for resources, especially memory. So we quickly came to the conclusion that we needed to separate the storage and compute nodes when Data Deduplication is involved with virtualization.

    Of course that meant we had to use a scale-out file share, and therefore needed to support CSV volumes for deduplication.

    Then we came to the question of how fast do we have to get all of these things working to be successful? Well… as fast as possible. However, we know that Data Deduplication has to incur some costs. So we needed real goals. It turns out that deciding that you are fast enough for all virtualization scenarios is very difficult. So we decided to take a first step with a virtualization workload that was well understood:

    Data Deduplication in Windows Server 2012 R2 would support optimization of storage for Virtual Desktop Infrastructure (VDI) deployments as long as the storage and compute nodes were connected remotely.

    What’s new in Data Deduplication in Windows Server 2012 R2 Preview

    With the Windows Server 2012 R2 Preview, Data Deduplication is extended to the remote storage of the VDI workload:

    • CSV volume support
    • Faster deduplication of data
    • Deduplication of open (in-use) files
    • Faster read/write performance of deduplicated files
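As a sketch of how the new VDI support is turned on, assuming a volume E: on the remote storage server (the HyperV usage type is the Windows Server 2012 R2 addition; verify the exact values against your build of the Preview):

```powershell
# Enable Data Deduplication on the volume that stores the VDI VHDX files,
# tuned for open (in-use) virtualization files:
Enable-DedupVolume -Volume "E:" -UsageType HyperV

# Confirm the volume's deduplication settings:
Get-DedupVolume -Volume "E:" | Format-List Volume, UsageType, SavedSpace
```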

     

    Is Hyper-V in general supported with a Deduplicated volume?

    We spent a lot of time to ensure that Data Deduplication performs correctly on general virtualization workloads. However, we focused our efforts to ensure that the performance of optimized files is adequate for VDI scenarios. For non-VDI scenarios (general Hyper-V VMs), we cannot provide the same performance guarantees.

    As a result, we do not support deduplication of arbitrary in use VHDs in Windows Server 2012 R2. However, since Data Deduplication is a core part of the storage stack, there is no explicit block in place that prevents it from being enabled on arbitrary workloads.

    What benefits do we get from using Data Deduplication with VDI?

    We will start with the easy one: you will save space! And of course, saving space translates into saving money. Deduplication savings for VDI deployments can range as high as 95%. This allows for deployments of SSD-based volumes for VDI, leveraging all their improved IO characteristics while mitigating their low capacity.

    This also allows for simplification of the surrounding infrastructure such as JBODs, cooling, power, etc.

    In addition, because Data Deduplication consolidates files, more efficient caching mechanisms are possible. This improves the IO characteristics of the storage subsystem for some types of operations. So not only does deduplication save money, it can make things go faster.

    As a result, we can often stretch the VM capacity of the storage subsystem without buying additional hardware or infrastructure.

    Wrap-up

    Data Deduplication in Windows Server 2012 R2 enables optimization of live VHDs for the VDI workloads and allows for deduplicated CSV volumes. It also significantly improves the performance of optimization as well as IO on optimized files. This will allow better utilization of existing storage subsystems for general file servers as well as for VDI storage and simplify future infrastructure investments.

    We hope you find these new capabilities as exciting as we find them and look forward to hearing from you.

    To see all of the posts in this series, check out the What’s New in Windows Server & System Center 2012 R2 archive.

    What’s new for SMI-S in Windows Server 2012 R2


    This post is a part of the nine-part “What’s New in Windows Server & System Center 2012 R2” series that is featured on Brad Anderson’s In the Cloud blog.  Today’s blog post covers storage management through SMI-S and how it applies to the larger topic of “Transform the Datacenter.”  To read that post and see the other technologies discussed, read today’s post: “What’s New in 2012 R2: IaaS Innovations.”

     

     

    Although my day job is no longer working on SMI-S, I did want to let folks know that a lot of work went into the Standards-Based Storage Management Service (aka Storage Service) in the upcoming Windows Server release. Here are the highlights:

    Discovery - Discovery is what happens when you register (Register-SmisProvider) or update (Update-StorageProviderCache): the storage service tries to find out as much information as you want (there are four levels of discovery), and that information resides in the service’s cache so you don’t need to constantly go out to the provider to get the data. In Windows Server 2012, the information was discovered by “walking the model”, which is to say, starting with an object and then following associations to get additional information. Unfortunately, this can take a long time on all but the smallest storage configurations.

    For Windows Server 2012 R2, the mechanism has been changed. Instead of model-walking, the service does enumerations of objects and then figures out (in memory) how they inter-relate. This turns out to decrease discovery times by up to 90%! We worked with vendors to make sure providers will work well with this change, and for some of those vendors, you will need to get updated providers.
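To make this concrete, registration and a later cache refresh look roughly like the following; the provider URI and name are illustrative, and parameter details may vary by build:

```powershell
# Register an SMI-S provider with the Storage Service (prompts for
# credentials; the URI is hypothetical):
Register-SmisProvider -ConnectionUri "https://smis1.contoso.com:5989"

# Later, re-run discovery against the registered provider; higher
# discovery levels pull in more detail:
Update-StorageProviderCache -Name "smis1.contoso.com" -DiscoveryLevel Level3
```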

    Updates (through Indications) - I recently blogged about the indication support for SMI-S. The internals of this are much improved in Windows Server 2012 R2 and more provider changes will be caught through indications so that rediscovery won’t be needed as often. The information in the older blog still applies, except that the firewall rule will already be in place (the provided script will still run unchanged).

    Secure connections (using HTTPS) - In my first posting, I advised against using Mutual Authentication with the storage service. For Windows Server 2012 R2, this has been improved. This applies to indications as well as normal SMI-S traffic. Follow the Indications blog for configuration information (you need to have the certificates in place). Not all providers will work well with mutual auth. I will post more about security in the near future.

    Resiliency Settings - When creating pools and volumes using the Storage Management cmdlets, you could easily specify various parameters that were just never going to work. SMI-S is too generic here. This has been simplified - stick with vendor defined settings and you should be fine. For Windows Server 2012 R2, we’ll give you an error if you try to override any of the parameters.

    Pull Operations - One of the efficiency improvements is to change how enumerations are done by using a newer mechanism called “pull” operations. (You can read more about this here.) This allows chunking of the data coming back from the provider, and generally will lower the memory required on the provider side for large configurations. This works now with EMC providers; others will update in the future. To enable pull operations, you will need to modify the registry value

    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Storage Management\PullOperationCount

    Set it to something like 100 which tells the provider to send 100 instances of a particular class at a time. Through PowerShell, this would look like:

     Set-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Storage Management" -Name PullOperationCount -Value 100

    To turn it off, just set the value back to 0. (Leaving it enabled for a provider that does not support pull operations will only cause a small performance degradation, so you might want to leave this set if you have more than one provider and either of them supports pull operations.)

    Registering a provider - You can now register a provider that does not (yet) have any arrays to manage. An Update-StorageProviderCache cmdlet will find any new devices at a later time. This matters because previously, if the provider lost contact with an array, the resulting errors could be difficult to recover from.

    Node/Port address confusion - In Windows Server 2012, we had these defined backwards from what is mandated by standards and used by components like the iSCSI initiator. This has been corrected, but it may require provider updates because we found some bugs in existing implementations.

    Snapshot/Volume deletion - Under some conditions, it would not be possible to delete a volume or a snapshot. For example, we sometimes thought a volume was exposed to a host when it wasn’t, or a snapshot was in the wrong state for deletion. This has been improved.

    Snapshot target pool - Specifying a -TargetStoragePoolName for snapshots is now supported, that is, if the provider/array allows it. However, be careful when you have more than one array with pools of the same name (which might be common).

    Masking operations - There have been cases where masking/unmasking operations can take a long time to complete, particularly when SC VMM issues multiple requests in parallel; these could result in timeouts. Also, with the current SMI-S model, masking operations performed by Windows might require multiple method calls to providers. Windows Server 2012 R2 now uses jobs for such operations instead of making all masking operations synchronous.

     

    To see all of the posts in this series, check out the What’s New in Windows Server & System Center 2012 R2 archive.

    iSCSI Target Server in Windows Server 2012 R2


    This post is a part of the nine-part “What’s New in Windows Server & System Center 2012 R2” series that is featured on Brad Anderson’s In the Cloud blog.  Today’s blog post covers iSCSI Target Server and how it applies to the larger topic of “Transform the Datacenter.”  To read that post and see the other technologies discussed, read today’s post: “What’s New in 2012 R2: IaaS Innovations.”

     

    iSCSI Target Server made its first in-box appearance in Windows Server 2012 as a feature, having been a separate download in prior releases. Now in Windows Server 2012 R2, iSCSI Target Server ships with two sets of very cool feature enhancements (technically, note that this blog post applies specifically to Windows Server 2012 R2 Preview and is subject to change in future releases).

    At the high-level, the two sets of enhancements are:

    1. Virtual Disk Enhancements: Larger, resilient, dynamically-growing SCSI Logical Units (LUs) on iSCSI Target Server
    2. Manageability Enhancements: Private and Hosted Cloud management using SCVMM and iSCSI Target Server-based storage

    Let us go into more detail on each of these below. My objectives with this blog post are twofold: to help you become familiar with the new functionality, and to make you comfortable quickly using the new iSCSI Target Server hands-on.

    Virtual Disk Enhancements

    Overview

    Let’s start with a quick recap of what a Windows Server iSCSI Target ‘Virtual Disk’ is. Most of you may already remember that the iSCSI Target Server implementation calls its inventory of storage units “Virtual Disks” – when these Virtual Disks are provisioned and then assigned to an iSCSI Target, the disks become accessible to iSCSI initiators as ‘SCSI Logical Units’ (LUs). The iSCSI Target Server administrator can of course control access to the iSCSI Target to allow only certain iSCSI initiators to access it. Application-consistent snapshots are easy to take with the VSS provider that installs on the initiator side. Finally, the administrator can also create multiple iSCSI Targets under the same iSCSI Target Server.

    The good news is that this conceptual framework remains unchanged in Windows Server 2012 R2 although it packs a powerful set of infrastructure changes under the covers. Here is a pictorial that summarizes the Virtual Disk enhancements in the core stack.

     

    [Figure: Windows Server 2012 R2 iSCSI Target Server architecture]

     

    So yes, the big news is that iSCSI Target Server switched to VHDX (VHD 2.0) format in Windows Server 2012 R2. Further, iSCSI Virtual Disks can now also be built off Dynamic VHDX virtual disks. iSCSI snapshots are also supported for Dynamic VHDX-based Virtual Disks.

    In addition to these, our File and Storage Services UI team has done an excellent job in taking advantage of these new enhancements, such that you can simply use Server Manager to start playing with these new features right away. Here is a screen shot of the ‘New iSCSI Virtual Disk’ wizard that perhaps illustrates the enhancements the best:

     

    [Figure: Windows Server 2012 R2 ‘New iSCSI Virtual Disk’ wizard]

     

    We have also made a couple of other key enhancements:

    • Unlike Windows Server 2012, where iSCSI Target Server always set FUA on back-end I/Os, iSCSI Target Server now enables Force Unit Access (FUA) on its back-end virtual disk I/O only if the front-end I/O that the iSCSI Target received from the initiator required such direct medium access. This has the potential to improve performance, assuming of course you have FUA-capable back-end disks or JBODs behind the iSCSI Target Server.
    • The local mount functionality for iSCSI Virtual Disk snapshots – i.e. the Mount-IscsiVirtualDiskSnapshot and Dismount-IscsiVirtualDiskSnapshot cmdlets – is now deprecated in Windows Server 2012 R2. These cmdlets were typically used for locally mounting a snapshot for target-local usage, e.g. backup. It turns out there is a simpler approach that avoids the deprecated functionality – use the Export-IscsiVirtualDiskSnapshot cmdlet to create an associated Virtual Disk, and access it through a target-local initiator which can then back up the disk. This simpler approach is feasible because iSCSI Target Server now supports a “loopback initiator”: the initiator and target can both be on the same computer.
    • The maximum number of sessions per target is now increased to 544, and the maximum number of LUs per target has gone up to 276.
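As a sketch of the export-and-loopback approach described above (the path is hypothetical, and the snapshot-selection plumbing is an assumption; check the cmdlet help on your build):

```powershell
# Take a snapshot of the iSCSI Virtual Disk, then export the most recent
# snapshot as a new Virtual Disk that a target-local initiator can back up:
Checkpoint-IscsiVirtualDisk -Path "E:\LUNs\Data1.vhdx"
$snap = Get-IscsiVirtualDiskSnapshot | Select-Object -Last 1
Export-IscsiVirtualDiskSnapshot -SnapshotId $snap.SnapshotId
```

The exported disk can then be assigned to a target and accessed through the loopback initiator on the same computer for backup.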
    How to get started?

    Good news is that existing “iSCSI Target Block Storage, How To” TechNet guidance for Windows Server 2012 remains unchanged for this release. Follow the same steps documented for setting up the iSCSI Target Server feature. No additional configuration is required for enabling the functionality to exercise this scenario. iSCSI Target Server is ready to use the VHDX format!

    You can either use the Server Manager → File and Storage Services → iSCSI GUI (refer to the previous screen shot), or use the iSCSI Target PS cmdlets documented on TechNet for Windows Server 2012: iSCSI Target Cmdlets in Windows PowerShell

    The most significant difference in Windows Server 2012 R2 is perhaps in New-iSCSIVirtualDisk cmdlet usage. Here is the syntax help for New-iSCSIVirtualDisk:

    New-IscsiVirtualDisk [-Path] <string> [-SizeBytes] <uint64>

    [-Description <string>] [-LogicalSectorSizeBytes <uint32>]

    [-PhysicalSectorSizeBytes <uint32>] [-BlockSizeBytes <uint32>]

    [-ComputerName <string>] [-Credential <pscredential>]

    [<CommonParameters>]

    New-IscsiVirtualDisk [-Path] <string> [[-SizeBytes] <uint64>]

    -ParentPath <string> [-Description <string>]

    [-PhysicalSectorSizeBytes <uint32>] [-BlockSizeBytes <uint32>]

    [-ComputerName <string>] [-Credential <pscredential>]

    [<CommonParameters>]

    New-IscsiVirtualDisk [-Path] <string> [-SizeBytes] <uint64>

    -UseFixed [-Description <string>] [-DoNotClearData]

    [-LogicalSectorSizeBytes <uint32>]

    [-PhysicalSectorSizeBytes <uint32>] [-BlockSizeBytes <uint32>]

    [-ComputerName <string>] [-Credential <pscredential>]

    [<CommonParameters>]

    A handful of things worth noting here to start heading in the right direction:

    • Be sure to use the “.vhdx” file extension for the file name passed in the “Path” parameter. VHDX is the only supported format for newly-created iSCSI Virtual Disks!
    • Note that the default iSCSI Virtual Disk persistence format is Dynamic VHDX. You will need to use the “UseFixed” parameter if you need the Fixed VHDX format.
    • By default, the fixed VHDX format zeroes out the virtual disk file on allocation. Recognizing that the previous iSCSI Target Server version provided a non-zeroing fixed-disk behavior, we have provided an option in this release to maintain functional parity through the ‘DoNotClearData’ parameter. However, keep in mind that by using this parameter, you may accidentally expose non-zeroed data that you may not want to – so avoid using it if you can! That is the reason Microsoft is no longer making this the default behavior either in the GUI or in the cmdlets.
    • You will quickly notice that the new cmdlet supports a number of new parameters – e.g. PhysicalSectorSizeBytes – that are supported by the New-VHD cmdlet; this is a direct benefit of the redesigned back-end persistence layer based on the VHDX format.
    You should be able to:
    • Create large iSCSI Virtual disks (up to 64 TB) using Windows PowerShell and/or GUI
    • Run your desired workloads – including SQL, diskless boot, cluster shared storage – on iSCSI block storage
    • Create dynamic virtual disks and assign them to an iSCSI target
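Putting those notes together, creating the new disk types might look like this (paths and sizes are illustrative):

```powershell
# Default: a dynamic VHDX, here at the new 64 TB maximum:
New-IscsiVirtualDisk -Path "E:\LUNs\Big1.vhdx" -SizeBytes 64TB

# A fixed VHDX; zeroed on allocation unless you opt out with -DoNotClearData
# (avoid that switch unless you are sure the underlying blocks hold no
# sensitive data):
New-IscsiVirtualDisk -Path "E:\LUNs\Fixed1.vhdx" -SizeBytes 100GB -UseFixed
```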

    Manageability Enhancements

    Overview

    Let me again start with a recap of the SMI-S provider, which is the most significant enhancement in this group. iSCSI Target Server had shipped a Windows Server 2012-compatible SMI-S provider with System Center Virtual Machine Manager (SCVMM) 2012 SP1. This required installation from SCVMM media onto the iSCSI Target Server computer, not to mention some additional configuration steps detailed in the SCVMM storage configuration guidance on TechNet. Windows Server 2012 R2 now significantly improves the SMI-S provider, and brings it in-box.

    [Figure: Windows Server 2012 R2 iSCSI Target Server managed by SCVMM]

     

     

    As shown in the diagram above, the SMI-S provider ships and installs as part of iSCSI Target Server. No separate configuration steps are necessary; the SMI-S provider process is auto-instantiated on demand. The SMI-S provider presents a different, standards-compliant management object model to SCVMM, but it transparently uses the same WMI provider that the Windows PowerShell cmdlets use. The SMI-S provider in Windows Server 2012 R2 is designed for dual-active iSCSI target clusters, whereas the previous version was limited to active-passive clusters. The new SMI-S provider also supports asynchronous job management for long-running Create/Expand/Restore jobs – so these are also cancelable from SCVMM. The following screen shot shows the storage manageability view from SCVMM for an iSCSI Target Server.

     

    [Figure: SCVMM storage fabric view of an iSCSI Target Server]

     

    Additionally, iSCSI Target Server made a few other key manageability enhancements in Windows Server 2012 R2.

    • Online resizing – grow or shrink – of an iSCSI Virtual Disk is now supported via the new Resize-iSCSIVirtualDisk cmdlet. Previously, only online expansion was supported via Expand-IscsiVirtualDisk cmdlet. Syntax of Resize remains the same as that of Expand. And the Expand cmdlet continues to work as well.
    • Asynchronous cancelation of long-running operations is supported via the new Stop-IscsiVirtualDiskOperation cmdlet.
    • You can optionally disable remote management of an iSCSI Target Server using the newly-introduced ‘DisableRemoteManagement’ parameter on the Set-IscsiTargetServerSetting. This would be valuable in scenarios where you are embedding iSCSI Target Server in an appliance with a different management wrapper, and you do not want iSCSI manageability end point to be directly externally exposed.
    • Two new cmdlets are now added to simplify migration experience to Windows Server 2012 R2 – Export-iSCSITargetServerConfiguration and Import-iSCSITargetServerConfiguration.
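A couple of these enhancements in cmdlet form; the names follow the text above, but the exact parameter shapes are assumptions to verify against Get-Help:

```powershell
# Shrink a Virtual Disk online (Resize accepts the same syntax as Expand;
# path and size are hypothetical):
Resize-IscsiVirtualDisk -Path "E:\LUNs\Data1.vhdx" -SizeBytes 50GB

# Turn off remote management for an appliance-embedded deployment
# (parameter shape is an assumption):
Set-IscsiTargetServerSetting -DisableRemoteManagement $true
```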
    How to get started?

    Good news is that the existing “iSCSI Target Block Storage, How To” TechNet guidance for Windows Server 2012 remains unchanged for this release. Follow the same steps documented for setting up the iSCSI Target Server feature. No additional configuration is required to exercise this scenario. The SMI-S provider for iSCSI Target Server is automatically installed and ready for use. Use the existing TechNet documentation for configuring SCVMM 2012 SP1 or later versions to work with iSCSI Target Server; see Configuring an SMI-S Provider for iSCSI Target Server. The only change from that page is that the SMI-S provider no longer needs to be installed from SCVMM media; it is included in-box in Windows Server 2012 R2. Note that the version of SCVMM Server used for SMI-S manageability must be SCVMM 2012 SP1 or SCVMM 2012 R2 – the management server itself can run either on a Windows Server 2012 R2 server or a Windows Server 2012 server.

    Here is the syntax for Stop-IscsiVirtualDiskOperation cmdlet – you can reference the iSCSI Virtual Disk either via the Path parameter, or through the Virtual Disk object retrieved from a different cmdlet such as Get-IscsiVirtualDisk:

    Stop-IscsiVirtualDiskOperation [-Path] <string> [-ComputerName <string>]

    [-Credential <pscredential>] [<CommonParameters>]

    Stop-IscsiVirtualDiskOperation -InputObject <IscsiVirtualDisk>

    [-ComputerName <string>] [-Credential <pscredential>]

    [<CommonParameters>]

    You can use Export-iSCSITargetServerConfiguration with the following syntax. When you run the Export from Windows Server 2012 R2 to export configuration from a down-level OS version computer, be sure to use the ‘ComputerName’ parameter to point to the right down-level computer.

    Export-IscsiTargetServerConfiguration [-Filename] <string>

    [[-ComputerName] <string>] [[-Credential] <string>] [-Force]

    [<CommonParameters>]

    And here is the syntax for Import-iSCSITargetServerConfiguration:

    Import-IscsiTargetServerConfiguration [-Filename] <string>

    [[-ComputerName] <string>] [[-Credential] <string>] [-Force]

    [<CommonParameters>]

    Note that, as with the previous PS script, Export/Import operations do not migrate CHAP settings, for security reasons – we do not want to export passwords in the clear. So you will need to note the passwords through some other means, and re-configure them on the destination computer after finishing the Import operation. Expect a complete migration guide to be available on TechNet in the very near future that will go into a lot more detail on this topic.
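Based on the syntax above, a minimal migration run might look like this (file and computer names are illustrative):

```powershell
# On the new Windows Server 2012 R2 computer: export the configuration of
# the down-level server, then import it locally:
Export-IscsiTargetServerConfiguration -Filename "C:\Migrate\tgt.xml" -ComputerName "OLDTARGET1"
Import-IscsiTargetServerConfiguration -Filename "C:\Migrate\tgt.xml"

# Remember: CHAP secrets are not carried over - re-enter them afterwards.
```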

    You should be able to:
    • Discover iSCSI Target in-box SMI-S provider through SCVMM’s storage automation
    • Carve out storage virtual disks on iSCSI Target, and provision them to a Hyper-V host
    • Create long-running asynchronous jobs for large (>= ~5TB) virtual disk creation, and monitor the status
    • Create multiple Continuously Available (CA) iSCSI target server instances in a single failover cluster using Failover Cluster Manager, and manage them all using a single instance of SCVMM
    • Migrate your existing Windows Server 2008 R2 or Windows Server 2012-based iSCSI Target Server to Windows Server 2012 R2

    Wrap-up

    iSCSI Target Server ships with a lot of exciting new functionality in Windows Server 2012 R2. Here are a couple of other useful pointers:

    iSCSI Target Block Storage

    What’s New for iSCSI Target Server in Windows Server 2012 R2

    iSCSI Target Cmdlets in Windows Server 2012

    To conclude, hope this blog post gave you some insight and familiarity to start using the new iSCSI Target Server right away! Drop me an offline note using the blue “Contact” button on this page with your feedback on how it’s going.

    To see all of the posts in this series, check out the What’s New in Windows Server & System Center 2012 R2 archive.

    DFS Replication in Windows Server 2012 R2: Revenge of the Sync


    This post is a part of the nine-part “What’s New in Windows Server & System Center 2012 R2” series that is featured on Brad Anderson’s In the Cloud blog.  Today’s blog post covers DFSR and how it applies to the larger topic of “Transform the Datacenter.”  To read that post and see the other technologies discussed, read today’s post: “What’s New in 2012 R2: IaaS Innovations.”

    Hi folks, Ned here again. You might have read my post on DFSR changes in Windows Server 2012 back in November and thought to yourself, “This is ok, but come on… this took three years? I expected more.”

    We agreed.

    Background

    Windows Server 2012 R2 adds substantial features to DFSR in order to bring it in line with modern file replication scenarios for IT pros and information workers on enterprise networks. These include database cloning in lieu of initial sync, management through Windows PowerShell, file and folder restoration from conflicts and preexisting stores, substantial performance tuning options, database recovery content merging, and huge scalability limit changes.


    Today I’ll talk at a high level about how your business can benefit from this improved architecture. This post assumes that you have a previous working knowledge of DFSR, including basic replication concepts and administration using the previous tools DfsMgmt.msc or DfsrAdmin.exe and Dfsrdiag.exe. Everything I discuss below you can do right now with the Windows Server 2012 R2 Preview.

    I have a series of deeper articles in the pipeline as well to get you rolling with more walkthroughs and architecture, as well as plenty of TechNet for the blog-a-phobic.

    Database Cloning

    DFSR Database Cloning is an optional alternative to the classic initial sync process introduced in Windows Server 2003 R2. DFSR spends most of its time in initial sync—even when administrators preseed files on the peer servers—examining metadata, staging files, and exchanging version vectors. This can make setup, disaster recovery, and hardware replacement very slow. Multi-terabyte data sets are typically infeasible due to the extended setup times; the estimate for a 100TB dataset is 159 days to complete initial sync on a LAN, if performance is linear (spoiler alert: it’s not).

    DB cloning bypasses this process. At a high level, you:

    1. Build a primary server with no partners (or use an existing server with partners)

    2. Clone its database

    3. Preseed the data on N servers

    4. Build N servers using that database clone

    The existing initial sync portion of DFSR is now instantaneous if there are no differences. If there are differences, DFSR only has to catch up the real delta of changes as part of a shortened initial sync process.

    Cloning provides three levels of file validation during the export and import processing. These ensure that if you are allowing users to alter data on the upstream server while cloning is occurring, files are later reconciled on the downstream.

    • None - No validation of files on source or destination server. Fastest and most optimistic. Requires that you preseed data perfectly and do not allow any modification of data during the clone processing on either server.
    • Basic - (Default behavior). Hash of ACL stored in the database record for each file. File size and last modified date-time stored in the database record for each file. Good mix of fidelity and performance.
    • Full - Same hashing mechanism used by DFSR during normal operations. Hash stored in the database record for each file. Slowest, but highest fidelity (and still faster than initial sync).

    Some early test results

    What does this mean in real terms? Let’s look at a test run with 10 terabytes of data in a single volume comprising 14,000,000 files:

    “Classic” initial sync:

        Method         Time to convergence
        ---------      -------------------
        Preseeded      ~24 days

    Now, with DB cloning:

        Validation Level    Time to export         Time to import         Improvement %
        ----------------    --------------         --------------         -------------
        2 – Full            9 days, 0 hours        5 days, 10 hours       40%
        1 – Basic           7 hours, 35 minutes    3 hours, 14 minutes    98%
        0 – None            1 hour, 19 minutes     2 hours, 49 minutes    99%

    With the recommended Basic validation, we’re down to 11 hours! Our 64TB tests with 70 million files only take a few days! Our 500GB/100,000 file small-scale tests finish in 4 minutes! I like exclamation points!

    The Export-DfsrClone cmdlet provides a sample robocopy command line at export time. You are free to preseed data any way you see fit (backup and restore, robocopy, removable storage, snapshot, etc.) as long as the hashes match and the file security, data stream, and alternate data streams copy intact between servers.
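For illustration, a commonly used shape of such a preseeding command (these switches are typical guidance, not the exact string Export-DfsrClone emits; server and path names are hypothetical):

```powershell
# /B copies with backup semantics, /COPYALL preserves security and streams,
# /XD skips DFSR's private folder so it is not replicated by the copy:
robocopy.exe "E:\RF01" "\\SRV02\E$\RF01" /E /B /COPYALL /R:6 /W:5 /MT:64 /XD DfsrPrivate /LOG:C:\preseed.log
```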

    You manage this feature using Windows PowerShell. The cmdlets are:

    Export-DfsrClone
    Import-DfsrClone
    Get-DfsrCloneState
    Reset-DfsrCloneState

    I have a separate post coming with a nice walk through on this feature.

    Wait – did I say DFSR Windows PowerShell? Oh yeah.

    Windows PowerShell and WMIv2

    In Windows Server 2012 and prior versions, file server administrators did not have modern object-oriented Windows PowerShell cmdlets to create, configure and manage DFS Replication. While many of the existing command line tools provide the ability to administer a DFS Replication server and a single replication group, building advanced scripting solutions for multiple servers often involved complex output-file parsing and looping.

    Windows Server 2012 R2 adds a suite of 42 Windows PowerShell cmdlets built on a new WMIv2 provider. Businesses benefit from a complete set of DFSR Windows PowerShell cmdlets in the following ways:

    1. Allows the switch to modern Windows PowerShell cmdlets as your “common language” for managing enterprise deployments.

    2. Can develop and deploy complex automation workflows for all stages of the DFSR life cycle, including provisioning, configuring, reporting and troubleshooting.

    3. Allows creation of new graphical or script-based wrappers around Windows PowerShell to replace use of the legacy DfsMgmt snap-in, without the need for complex API manipulation.

    List all DFSR cmdlets

    To examine the 42 new cmdlets available for DFSR:

    PS C:\> Get-Command –Module DFSR

    For further output and explanation, use:

    PS C:\> Get-Command –Module DFSR | Get-Help | Select-Object Name, Synopsis | Format-Table -Auto

    We made sure to document every single DFSR Windows PowerShell cmdlet online with more than 80 sweet examples, before RTM!

    Create a new two-server replication group and infrastructure

    Just to get your juices flowing, you can use DFSR Windows PowerShell to create a simple two-server replication configuration using the F: drive, with my two sample servers SRV01 and SRV02:

    PS C:\> New-DfsReplicationGroup -GroupName "RG01" | New-DfsReplicatedFolder -FolderName "RF01" | Add-DfsrMember -GroupName "RG01" -ComputerName SRV01,SRV02

    PS C:\> Add-DfsrConnection -GroupName "RG01" -SourceComputerName SRV01 -DestinationComputerName SRV02

    PS C:\> Set-DfsrMembership -GroupName "RG01" -FolderName "RF01" -ContentPath "F:\RF01" -ComputerName SRV01 -PrimaryMember $True

    PS C:\> Set-DfsrMembership -GroupName "RG01" -FolderName "RF01" -ContentPath "F:\RF01" -ComputerName SRV02

    PS C:\> Get-DfsrMember | Update-DfsrConfigurationFromAD

    Some slick things are happening here, such as creating the RG, RF, and members all in a single step, only having to run one command to create connections in both directions, and even polling AD on all computers at once! I have a lot more to talk about here – things like wildcarding, collections, mass edits, and multiple file hashing; this is just a taste.

    Performance Tuning

    Microsoft designed DFSR initial sync and ongoing replication behaviors in Windows Server 2003 R2 for the enterprises of 2005: smaller files, slower networks, and smaller data sets. Eight years later, much more data in larger files over wider networks have become the norm.

    Windows Server 2012 R2 modifies two aspects of DFSR to allow new performance configuration options:

    • Cross-file RDC toggling
    • Staging minimum file size

    Cross-File RDC Toggling

    Remote Differential Compression (RDC) takes a staged and compressed copy of a file and creates MD-4 signatures based on “chunks” of files.


    Mark, I stole your pretty diagram and owe you one beer.

    When a user alters a file (even in the middle), DFSR can efficiently see which signatures changed and then send along the matching data blocks. E.g., a 50MB document edited to change one paragraph only replicates a few KB.

    Cross-file RDC takes this further by using special hidden sparse files (located in <drive>:\system volume information\dfsr\similaritytable_x and idrecordtable_x) to track all these signatures. With them, DFSR can use other similar files that the server already has to build a copy of a new file locally. DFSR can use up to five of these similar files. So if an upstream server decides “I have file X and here are its RDC signatures”, the downstream server can decide “I don’t have file X. But I do have files Y and Z that have some of the same signatures, so I’ll grab data from them locally and save having to request all of file X.” Since files are often just copies of other files with a little modification, DFSR gains considerable over-the-wire efficiency and minimizes bandwidth usage on slower, narrower WAN links.

    The downside to cross-file RDC is that over time, with many millions of updates and signatures, DFSR may see increased CPU and disk IO while processing similarity needs. Additionally, when replicating on low-latency, high-bandwidth networks like LANs and high-end WANs, it may be faster to disable RDC and cross-file RDC and simply replicate file changes without the chunking operations. Windows Server 2012 R2 offers this option through the Set-DfsrConnection and Add-DfsrConnection Windows PowerShell cmdlets with the –DisableCrossFileRdc parameter.
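As a sketch – the parameter names come from the Set-DfsrConnection cmdlet, and the group and server names are reused from the earlier examples – turning off RDC entirely on a fast LAN connection might look like this:

```powershell
# Disable RDC and cross-file RDC on a LAN connection
# (RG01/SRV01/SRV02 assumed from the earlier examples in this post)
Set-DfsrConnection -GroupName "RG01" -SourceComputerName SRV01 -DestinationComputerName SRV02 `
    -DisableRDC $True -DisableCrossFileRDC $True

# Tell both members to pick up the change from AD immediately
Get-DfsrMember -GroupName "RG01" | Update-DfsrConfigurationFromAD
```

Leave -DisableRDC off if you only want to skip the cross-file similarity lookups but keep in-file differential replication.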

    Staging File Size Configuration

    DFSR creates a staging folder for each replicated folder. This staging folder contains the marshalled files sent between servers, and allows replication without risk of interruption from subsequent handles to the file. By default, files over 256KB stage during replication, unless RDC is enabled and using its default minimum file size, in which case files over 64KB are staged.

    When replicating on low-latency, high-bandwidth networks like LANs and high-end WANs, it may be faster to allow certain files to replicate without first staging. If users do not frequently reopen files after modification or addition to a content set – such as during batch processing that dumps files onto a DFSR server for replication out to hundreds of nodes without any later modification – skipping RDC and the staging process can lead to significant performance boosts. You configure this using Set-DfsrMembership and the –MinimumFileStagingSize parameter.
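A minimal sketch, raising the staging threshold so that files below it replicate without staging first – note that the enumeration value name here is an assumption; check Get-Help Set-DfsrMembership -Detailed for the exact set of accepted values:

```powershell
# Only stage files at or above 256 MB; smaller files skip the staging step
# (RG01/RF01/SRV01/SRV02 from the earlier examples; enum value name is an assumption)
Set-DfsrMembership -GroupName "RG01" -FolderName "RF01" -ComputerName SRV01,SRV02 `
    -MinimumFileStagingSize Size256MB -Force
```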

    Database Recovery

    DFSR can suffer database corruption when the underlying hardware fails to write the database to disk. Hardware problems, controller issues, or write-caching preventing flushing of data to the storage medium can cause corruption. Furthermore, when the DFSR service does not stop gracefully – such as during power loss to the underlying operating system – the database becomes “dirty”.

    DB Corruption Merge Recovery

    When DFSR on Windows Server 2012 and older operating systems detects corruption, it deletes the database and recreates it without contents. DFSR then walks the file system and repopulates the database with each file fenced FRS_FENCE_INITIAL_SYNC (1). Then it triggers non-authoritative initial sync inbound from a partner server. Any file changes made on that server that had not replicated outbound prior to the corruption move to the ConflictAndDeleted or PreExisting folders, and end-users will perceive this as data loss, leading to help desk calls. If multiple servers experienced corruption – such as when they were all on the same hypervisor host or all using the same malfunctioning storage array – all servers may stop replicating, as they are all waiting on each other to return to a normal state. If the writable server with corruption was replicating with a read-only server, the writable server will not be able to return to a normal state.

    In Windows Server 2012 R2, DFSR changes its DB corruption recovery behavior. It deletes and recreates the database, then walks the file system and populates the DB with all file records. All files are fenced with the FRS_FENCE_DEFAULT (3) flag though, marking them as normal. The service then triggers initial sync. When subsequent version vector sync reconciles the new DB content with a remote partner, DFSR handles conflicts in the usual way (last writer/creator wins) – since most (if not all) records are marked normal already though, there is no need for conflict handling on matching records. If the remote partner is read-only, DFSR skips attempting to pull changes from the remote partner (since none can come), and goes back to a healthy state.

    DB Dirty Shutdown Recovery

    DFSR on Windows Server 2012 and Windows Server 2008 R2 detects dirty database shutdown and pauses replication on that volume, then writes DFSR event log warning 2213:

    Warning    5/1/2013 13:15    DFSR    2213    None

    The DFS Replication service stopped replication on volume C:. This occurs when a DFSR JET database is not shut down cleanly and Auto Recovery is disabled. To resolve this issue, back up the files in the affected replicated folders, and then use the ResumeReplication WMI method to resume replication.

     

    Additional Information:

    Volume: C:

    GUID: <some GUID> 

     

    Recovery Steps

    1. Back up the files in all replicated folders on the volume. Failure to do so may result in data loss due to unexpected conflict resolution during the recovery of the replicated folders.

    2. To resume the replication for this volume, use the WMI method ResumeReplication of the DfsrVolumeConfig class. For example, from an elevated command prompt, type the following command:

    wmic /namespace:\\root\microsoftdfs path dfsrVolumeConfig where volumeGuid="<some GUID>" call ResumeReplication

    Until you manually resume replication via WMI or disable this functionality via the registry, DFSR does not resume. When resumed, DFSR performs operations similar to DB corruption recovery, marking the files normal and synchronizing differences. The main problem with this strategy was that far too many people missed the event and never noticed that replication was no longer running – another good reason to monitor your DFSR servers.
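The same WMI method is reachable from Windows PowerShell, if you prefer it to WMIC; a sketch (the volume GUID placeholder comes straight from the event text above):

```powershell
# Resume replication on the paused volume via WMI
# (substitute the GUID reported in event 2213)
$vol = Get-WmiObject -Namespace "root\microsoftdfs" -Class DfsrVolumeConfig |
    Where-Object { $_.VolumeGuid -eq "<some GUID>" }
$vol.ResumeReplication()
```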

    In Windows Server 2012 R2, DFSR role installation sets the following registry value by default (and if the value is absent, the service behaves as if it were set to 0):

    Key: HKey_Local_Machine\System\CurrentControlSet\Services\DFSR\Parameters

    Value [DWORD]: StopReplicationOnAutoRecovery

    Data: 0
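If you prefer the old behavior – pause replication after a dirty shutdown and wait for an administrator – a sketch of flipping the value back:

```powershell
# Opt out of automatic recovery: stop replication after a dirty shutdown until manually resumed
Set-ItemProperty -Path "HKLM:\System\CurrentControlSet\Services\DFSR\Parameters" `
    -Name StopReplicationOnAutoRecovery -Value 1 -Type DWord
Restart-Service DFSR
```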

    We performed code reviews to ensure that no issues with dirty shutdown recovery would lead to data loss; we released one hotfix for previous operating systems based on this work (see KB: http://support.microsoft.com/kb/2780453) but found no further issues here.

    Furthermore, the domain controller SYSVOL replica was special-cased so that if it is the only replica on a specific volume and that volume suffers a dirty shutdown, SYSVOL always automatically recovers regardless of the registry setting. The AD admins who don’t know or care that DFSR is replicating their SYSVOL no longer have to worry about such things.

    Preserved File Recovery (The Big Finish!)

    DFSR uses a set of conflict-handling algorithms during initial sync and ongoing replication to ensure that the appropriate files replicate between servers.

    1. During non-authoritative initial sync, cloning, or ongoing replication: files with the same name and path modified on multiple servers move to the following folder on the losing server: <rf>\Dfsrprivate\ConflictAndDeleted

    2. Initial sync or cloning: files with the same name and path that exist only on the downstream server go to <rf>\Dfsrprivate\PreExisting

    3. During ongoing replication: files deleted on a server move to the following folder on all other servers: <rf>\Dfsrprivate\ConflictAndDeleted

    The ConflictAndDeleted folder has a 4GB first in/first out quota in Windows Server 2012 R2 (660MB in older operating systems). The PreExisting folder has no quota. When content moves to these folders, DFSR tracks it in the ConflictAndDeletedManifest.xml and PreExistingManifest.xml. DFSR deliberately mangles all files and folders in the ConflictAndDeleted folder with version vector information to preserve uniqueness. DFSR deliberately mangles the top-level files and folders in the PreExisting folder with version vector information to preserve uniqueness. Previous operating systems did not provide a method to recover data from these folders, and required use of out-of-band script options like RestoreDfsr.vbs (I am rather embarrassed to admit that I wrote that script; my excuse is that it was supposed to be a quick fix for a late night critsit and was never meant to live on for years. Oh well).

    Windows Server 2012 R2 now includes Windows PowerShell cmdlets to recover this data. These cmdlets offer the option to either move or copy files, restore to original or a new location, restore all versions of a file or just the latest, as well as perform inventory operations.

    A few samples

    To see conflicted and deleted files on the H:\rf04 replicated folder:

    PS C:\> Get-DfsrPreservedFiles –Path h:\rf04\DfsrPrivate\ConflictAndDeletedManifest.xml

    Let’s get fancier. To see only the conflicted and deleted DOCX files and their preservation times:

    PS C:\> Get-DfsrPreservedFiles –Path H:\rf04\DfsrPrivate\ConflictAndDeletedManifest.xml | Where-Object path -like *.docx | Format-Table path,preservedtime -auto -wrap

    How about if we restore all files from the PreExisting folder, moving them rather than copying them, placing them back in their original location, super-fast:

    PS C:\> Restore-DfsrPreservedFiles –Path H:\rf04\DfsrPrivate\PreExistingManifest.xml -RestoreToOrigin

    Slick!

    Summary

    We did a ton of work in DFSR in the past few months in order to address many of your long running concerns and bring DFSR into the next decade of file replication; I consider the DB cloning feature to be truly state of the art for file replication or synchronization technologies. We hope you find all these interesting and useful. Stand by for more new blog posts on cloning, Windows PowerShell, reliability, and more – coming soon.

    - Ned “there were no prequels” Pyle

    To see all of the posts in this series, check out the What’s New in Windows Server & System Center 2012 R2 archive.

    Work Folders Certificate Management


    In the last blog post, I provided a step-by-step guide to deploying Work Folders in a test lab environment where the client uses unsecured connections to the server, which means the server doesn't need an SSL certificate. While using an unsecured connection simplifies testing, it is not recommended for production deployments. In this blog post, I will provide a step-by-step guide to configuring the SSL certificate on the Work Folders server, so that the client can establish a secure connection to the server using SSL.

    Prerequisites

    You will need the following setup:

    1. A Work Folders server set up and configured with at least one sync share (let's name it SyncSvr.Contoso.com)
    2. One client device running Windows 8.1

    How does Work Folders use certificates?

    Work Folders client-server communication is built on secure HTTP 1.1. According to RFC 2818:

    “In general, HTTP/TLS requests are generated by dereferencing a URI. As a consequence, the hostname for the server is known to the client. If the hostname is available, the client MUST check it against the server's identity as presented in the server's Certificate message, in order to prevent man-in-the-middle attacks.”

    With Work Folders, when a client initiates an SSL negotiation, the server sends its certificate to the client; the client evaluates the certificate and continues only if the certificate is valid and can be trusted. You can find more details on the SSL handshake here.
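If you want to see for yourself which certificate the server presents during that handshake, here is a quick sketch using .NET classes from Windows PowerShell (SyncSvr.Contoso.com is the sample server name from this walkthrough):

```powershell
# Connect to the sync server on 443 and inspect the certificate it presents.
# Note: AuthenticateAsClient throws if the certificate isn't trusted,
# which is itself a useful signal when testing self-signed certificates.
$tcp = New-Object Net.Sockets.TcpClient("SyncSvr.Contoso.com", 443)
$ssl = New-Object Net.Security.SslStream($tcp.GetStream())
$ssl.AuthenticateAsClient("SyncSvr.Contoso.com")
$ssl.RemoteCertificate | Format-List Subject, Issuer
$tcp.Close()
```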

    Acquiring a certificate

    In a production deployment, you will need to acquire a certificate from a known certificate authority (CA), such as VeriSign, or obtain a certificate from an online CA in your intranet domain, such as your enterprise CA. There are a few things that Work Folders will verify in a server certificate, for example:

    • Expired? : That the current date and time is within the "Valid from" and "Valid to" date range on the certificate.
    • Server identity: That the certificate's "Common Name" (CN) or "Subject Alternative Name" (SAN) matches the host header in the request. For example, if the client is making a request to https://syncsvr.contoso.com, then the certificate must cover the hostname syncsvr.contoso.com.
    • Trusted CA? : That the issuer of the certificate is a known and trusted CA.
    • Revoked? : That the certificate has not been revoked.

    This TechNet page has more information on certificate validation: http://technet.microsoft.com/en-us/library/cc700843.aspx

    There are also a few different types of certificates you could acquire:

    • One certificate with one hostname
    • One certificate with multiple hostnames (SAN certificate)
    • Wild card certificate for a domain (Wild card certificate)

    You can evaluate the options based on:

    1. The number of sync servers you are deploying.
    2. Whether you are using Work Folders discovery. If you plan to use discovery based on the user's email address, you will need to publish https://WorkFolders.<domainname>, such as https://WorkFolders.contoso.com. All sync servers that act as discovery servers (by default, every sync server can perform discovery) will need a multiple-hostname (SAN) certificate that contains both the WorkFolders.contoso.com hostname and the name used to access the sync server.
    3. Ease of management. Certificates are generally valid for 1 or 2 years — the more certificates you have, the more you need to monitor and renew.
    4. If you only deploy one sync server, you just need one certificate with one hostname on it.

    For example, if you are planning to deploy one sync server, you will need to request a certificate with the hostname WorkFolders.contoso.com on it.

    Getting certificate from a trusted CA

    There are many commercial CAs from which you can purchase the certificate from. You can find the CAs trusted by Microsoft on this page: http://support.microsoft.com/kb/931125

    To purchase a certificate, visit the CA's website and follow the instructions there. Each CA has a different certificate purchase procedure.

    Testing using Self-signed certificate

    For testing, you can also use a self-signed certificate. To generate a self-signed certificate, type the following in a Windows PowerShell session:

    PS C:\> New-SelfSignedCertificate -DnsName <server dns names> -CertStoreLocation cert:Localmachine\My

    For example:

    PS C:\> New-SelfSignedCertificate -DnsName "SyncSvr.Contoso.com","WorkFolders.Contoso.com" -CertStoreLocation cert:Localmachine\My

    Certificates from Enterprise CA

    Windows Server Active Directory Certificate Services (AD CS) can issue certificates in a domain environment, so you may consider getting the sync server certificate from AD CS. However, the limitation with certificates issued by AD CS is that they will not be trusted by devices that are not domain joined. Work Folders is designed to support BYOD, where devices are typically not domain joined, so the user needs to go through some manual steps to trust the enterprise CA. The instructions are available on this TechNet page: http://technet.microsoft.com/en-us/library/cc750534.aspx.

    Installing a certificate

    Once you get the certificate, you need to install it on both the Work Folders server and the reverse proxy server, since both servers will handle sync requests. Copy the certificate to each server and import it:

    PS C:\> Import-PfxCertificate –FilePath <certFile.pfx> -CertStoreLocation cert:LocalMachine\My

    Note that if you are testing with a self-signed certificate on the Work Folders server, you don't need to import the certificate again, as the self-signed certificate is already installed on the local server.

    Configure SSL certificate binding

    To bind the SSL certificate, open an elevated command prompt (NOT a Windows PowerShell window), and run:

    netsh http add sslcert ipport=0.0.0.0:443 certhash=<Cert thumbprint> appid={CE66697B-3AA0-49D1-BDBD-A25C8359FD5D} certstorename=MY

    Note:

    • The command above binds the certificate to the root of the server (all hostnames on the server) on port 443.
    • You can get the certificate thumbprint by running the following cmdlet:

    PS C:\>Get-ChildItem –Path cert:\LocalMachine\My
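To avoid copy-pasting the thumbprint by hand, you could feed it to netsh directly; a sketch (the subject filter is an assumption – adjust it to match your certificate):

```powershell
# Look up the sync server certificate's thumbprint and bind it in one pass
$thumb = (Get-ChildItem -Path Cert:\LocalMachine\My |
    Where-Object { $_.Subject -like "*WorkFolders*" }).Thumbprint
netsh http add sslcert ipport=0.0.0.0:443 certhash=$thumb appid='{CE66697B-3AA0-49D1-BDBD-A25C8359FD5D}' certstorename=MY
```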

    Testing using self-signed certificate

    The steps below are for testing purposes only, using the self-signed certificate, because self-signed certificates are not trusted by clients by default. These steps are not necessary if the certificates were purchased from a well-known trusted CA.

    On the server:

    1. Export the self-signed certificate from the server. Run the following cmdlets on the sync server:

    PS C:\>$cert = Get-ChildItem –Path cert:\LocalMachine\My\<thumbprint>

    PS C:\>Export-Certificate –Cert $cert –Filepath servercert.p7b –Type P7B

    Note: you can change the –FilePath parameter input to specify the location where you want the certificate exported. Remember the location so that in the next step you can copy the file to the client devices.

    2. Copy the certificate file to the testing client device.

    On the client:

    3. On the client machine, install the certificate by right-clicking the certificate file and selecting "Install Certificate".

    4. Follow the wizard to install the certificate in the “Trusted Root Certification Authorities”.


    5. Complete the installation

    Client setup

    1. Open the Control Panel -> System and Security->Work Folders
    2. Provide the email address of the user (for example, U1@contoso.com), or enter the URL directly if Work Folders discovery is not configured in the deployment.
    3. Verify that setup completes and that the client is able to sync files afterwards.

    Certificate monitoring

    Once the certificate is in place, you need to monitor it for events such as certificate renewal.

    This TechNet article provides a good overview and cmdlets to manage certificates:

    http://social.technet.microsoft.com/wiki/contents/articles/14250.certificate-services-lifecycle-notifications.aspx
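For a quick manual check, you can also list certificates that are close to expiring; a minimal sketch, assuming the certificate lives in the local machine's My store:

```powershell
# Show any computer certificates expiring within the next 30 days
Get-ChildItem -Path Cert:\LocalMachine\My |
    Where-Object { $_.NotAfter -lt (Get-Date).AddDays(30) } |
    Select-Object Subject, NotAfter
```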

    Cluster consideration

    Work Folders supports running in failover cluster mode. Below is a quick check list on configuring certificates in clusters:

    1. Acquire the certificate, and make sure the hostname matches the Work Folders VCO name, not the node name. The VCO is the virtual computer object for Work Folders, which can be moved to different nodes in the cluster.
    2. Import the certificate to all the cluster nodes, as described in the "Installing a certificate" section.
    3. Bind the certificate on each of the cluster nodes, as described in the "Configure SSL certificate binding" section.
    4. Configure certificate notifications on each of the cluster nodes, as described in the "Certificate monitoring" section.

    If you are deploying multiple VCOs in a cluster, you need to make sure the certificate contains all the VCO hostnames as well as the discovery hostname, or use a wildcard certificate.

    For example, in the following setup:

    1. A 2-node cluster, with N1 and N2 as the machine names.
    2. Two VCOs (WF1.contoso.com and WF2.contoso.com) created to support Work Folders.
    3. When acquiring certificates, you need to make sure the certificate contains both hostnames: WF1.contoso.com and WF2.contoso.com.
    4. The certificate must be installed and bound on both nodes of the cluster (machines N1 and N2) and on the reverse proxy server.
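For lab testing of that layout, a single self-signed certificate can carry all three names (the VCO names and the discovery hostname from this example):

```powershell
# One test certificate covering both VCOs plus the discovery hostname
New-SelfSignedCertificate -DnsName "WF1.contoso.com","WF2.contoso.com","WorkFolders.contoso.com" `
    -CertStoreLocation Cert:\LocalMachine\My
```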

    Note: clustering support is not in the Preview release of Windows Server 2012 R2; please make sure to test clustering with the RTM builds.

    Conclusion

    By default, the Work Folders server requires an SSL certificate for clients to connect, and this is highly recommended for production deployments. I hope this blog post gives you the details you need to configure and manage SSL certificates on Work Folders servers. If you have questions not covered here, please raise them in the comments so that I can address them in upcoming posts.

    DFS Replication in Windows Server 2012 R2: If You Only Knew the Power of the Dark Shell


    Hi folks, Ned here again. By now, you know that DFS Replication has some major new features in Windows Server 2012 R2. Today we dig into the most comprehensive new feature, DFSR Windows PowerShell.

    Yes, your thoughts betray you

    “IT pros have strong feelings about Windows PowerShell, but if they can be turned, they’d be a powerful ally.”

    - Darth Ned

    It’s not surprising if you’re wary. We’ve been beating the Windows PowerShell drum for years now, but sometimes, new cmdlets don’t offer better ways to do things, only different ways. If you were already comfortable with the old command-line tools or attached to the GUI, why bother learning more of the same? I spent many years in the field before I came to Redmond and I’ve felt this pain.

    As the DFSR development team, we wanted to be part of the solution. It led to a charter for our Windows PowerShell design process:

    1. The old admin tools work against one node at a time – DFSR Windows PowerShell should scale without extensive scripting.

    2. Not everyone is a DFSR expert – DFSR Windows PowerShell should default to the recommended configuration.

    3. Parity with old tools is not enough – DFSR Windows PowerShell should bring new capabilities and solve old problems.

    We then devoted ourselves to this, sometimes arguing late into the night about a PowerShell experience that you would actually want to use.

    Today we walk through all of these new capabilities and show you how, with our combined strength, we can end this destructive conflict and bring order to the galaxy.

    Join me, and I will complete your training

    Let’s start with the simple case of creating a replication topology with two servers that will be used to synchronize a single folder. In the old DFSR tools, you would have two options here:

    1. Run DFSMGMT.MSC, browsing and clicking your way through adding the servers and their local configurations.

    2. Run the DFSRADMIN.EXE command-line tool N times, or run N arguments as part of the BULK command-line option.

    To setup only two servers with DFSMGMT, I have to go through all these dialogs:


    To setup a simple hub and two-spoke environment with DFSRADMIN, I need to run these 12 commands:

    dfsradmin rg new /rgname:"software"

    dfsradmin rf new /rgname:software /rfname:rf01

    dfsradmin mem new /rgname:software /memname:srv01

    dfsradmin mem new /rgname:software /memname:srv02

    dfsradmin mem new /rgname:software /memname:srv03

    dfsradmin conn new /rgname:software /sendmem:srv01 /recvmem:srv02

    dfsradmin conn new /rgname:software /sendmem:srv02 /recvmem:srv01

    dfsradmin conn new /rgname:software /sendmem:srv01 /recvmem:srv03

    dfsradmin conn new /rgname:software /sendmem:srv03 /recvmem:srv01

    dfsradmin membership set /rgname:software /rfname:rf01 /memname:srv01 /localpath:c:\rf01 /isprimary:true

    dfsradmin membership set /rgname:software /rfname:rf01 /memname:srv02 /localpath:c:\rf01

    dfsradmin membership set /rgname:software /rfname:rf01 /memname:srv03 /localpath:c:\rf01

    Facepalm. Instead of making bulk operations easier, the DFSRADMIN command-line has given me nearly as many steps as the GUI!

    Worse, I have to understand that the options presented by these old tools are not always optimal – for instance, DFS Management creates the memberships disabled by default, so that there is no replication. The DFSRADMIN tool requires remembering to create connections in both directions; if I don’t, I have created an unsupported and disconnected topology that may eventually cause data loss problems. These are major pitfalls to DFSR administrators, especially when first learning the product.

    Now watch this with DFSR Windows PowerShell:

    New-DfsReplicationGroup -GroupName "RG01" | New-DfsReplicatedFolder -FolderName "RF01" | Add-DfsrMember -ComputerName SRV01,SRV02,SRV03

    I just added RG, RF, and members with one pipelined command with minimal repeated parameters, instead of five individual commands with repeated parameters. If you are really new to Windows PowerShell, I suggest you start here to understand pipelining. Now:

    Add-DfsrConnection -GroupName "rg01" -SourceComputerName srv01 -DestinationComputerName srv02

    Add-DfsrConnection -GroupName "rg01" -SourceComputerName srv01 -DestinationComputerName srv03

    I just added the hub and spoke connections here with a pair of commands instead of four, as the PowerShell creates bi-directionally by default instead of one-way only. Now:

    Set-DfsrMembership -GroupName "rg01" -FolderName "rf01" -ComputerName srv01 -ContentPath c:\rf01 –PrimaryMember $true

    Set-DfsrMembership -GroupName "rg01" -FolderName "rf01" -ComputerName srv02,srv03 -ContentPath c:\rf01

    Finally, I added the memberships that enable replication and specify the content to replicate, using only two commands instead of three.

    Out of the gate, DFSR Windows PowerShell saves you a significant amount of code generation and navigation. Better yet, it defaults to recommended configurations. It supports collections of servers, not just one at a time. With tabbed autocomplete, parameters always in the same order, mandatory parameters where required, and everything else opt-in, it is very easy to pick up and start working right away. We even added multiple aliases with shortened parameters, including duplicates of the DFSRADMIN parameter names.

    Watch here as Windows PowerShell autocompletes all my typing and guides me through the minimum required commands to setup my RG:

    Let’s scale this up - maybe I want to create a 100-server, read-only, hub-and-spoke configuration for distributing software. I can create a simple one-server-per-line text file named “spokes.txt” containing all my spoke servers – perhaps exported from AD with Get-ADComputer – then create my topology with DFSR Windows PowerShell. It’s as simple as this:

    $rg = "RG01"
    $rf = "RF01"
    $hub = "SRV01"
    $spokes = Get-content c:\temp\spokes.txt

    New-DfsReplicationGroup -RG $rg | New-DfsReplicatedFolder -RF $rf | Add-DfsrMember -MemList ($spokes + $hub)

    $spokes | % {Add-DfsrConnection -GroupName $rg -SMem $hub -RMem $_}

    Set-DfsrMembership -RG $rg -RF $rf -ComputerName $hub -ContentPath c:\rf01 –PrimaryMember $true -Force

    Set-DfsrMembership -RG $rg -RF $rf -ComputerName $spokes -ContentPath c:\rf01 –Force -ReadOnly $true

    Done! 100 read-only servers added in a hub and spoke, using four commands, a text file, and some variables and aliases used to save my poor little nubbin fingers. Not impressed? If this were DFSRADMIN.EXE, it would take 406 commands to generate the same configuration. And if you used DFSMGMT.MSC, you’d have to navigate through this:

    That was not fun

    With the underlying DFSR Windows PowerShell, you now have very easy scripting options to tie together cmdlets into basic “do everything for me with one command” functions, if you prefer. Here’s a simple example put together by our Windows PowerShell developer, Daniel Ong, that shows this off:

    Configure DFSR using Windows PowerShell

    It’s pretty nifty, check out this short demo video.

    The sample is useable for simpler setup cases and also demonstrates (with plenty of comments!) exactly how to write your very own DFSR scripts.

    I find your lack of faith disturbing

    Still not convinced, eh? Ok, we’ve talked topology creation – now let’s see the ongoing management story.

    Let’s say I’m the owner of an existing set of replication groups and replicated folders scattered across dozens or hundreds of DFSR nodes throughout the domain. This is old stuff, first set up years ago when bandwidth was low and latency high. Consequently, there are custom DFSR replication schedules all over the connections and RGs. Now I finally have brand new modern circuits to all my branch offices and the need for weird schedules is past. I start to poke around in DFSMGMT and see that undoing all these little nuggets is going to be a real pain in the tuchus, as there are hundreds of customizations.

    What would DFSR Windows PowerShell do?

    Get-DfsrConnection -GroupName * | Set-DfsrConnectionSchedule -ScheduleType UseGroupSchedule

    Set-DfsrGroupSchedule -GroupName * -ScheduleType Always

    With those two simple lines, I just told DFSR to:

    1. Set all connections in all replication groups to use the replication group schedule instead of their custom connection schedules

    2. Then set all the replication group schedules to full bandwidth, open 24 hours a day, 7 days a week.

    Now that I have an updated schedule, I must wait for all the DFSR servers to poll Active Directory individually and pick up these changes, right? No! That can take up to an hour, and I have things to do. I want them all to update right now:

    Get-DfsrMember -GroupName * | Update-DfsrConfigurationFromAD

    Oh baby! If I were still using DFSRDIAG.EXE POLLAD, I’d be on server 8 of 100 by the time that command returned from doing all of them.

    Since things are going so well, I think I’ll kick back and read some DFSR best practices info from Warren Williams. Hmmm. I should configure a larger staging quota in my software distribution environment, as these ISO and EXE files are huge and causing performance bottlenecks. According to the math, I need at least 32 GB of staging space on this replicated folder. Let’s make that happen:

    Get-DfsrMember -GroupName "rg01" | Set-DfsrMembership -FolderName "rf01" -StagingPathQuotaInMB (1024 * 32) -force

    That was painless – I don’t have to figure out the server names and I don’t have to whip out Calc to figure out that 32GB is 32,768 megabytes. This wildcarding and pipelining capability is powerful stuff in the right hands.

    It’s not all AD here, by the way – we greatly extended the ease of operations without the need for WMIC.EXE, DFSRDIAG.EXE, or crappy scripts made by Ned years ago. For instance, if you’re troubleshooting with Microsoft Support and they say, “I want you to turn up the DFSR debug logging verbosity and number of logs on all your servers”, you can now do this with a single easy command:

    Get-DfsrMember -GroupName * | Set-DfsrServiceConfiguration -DebugLogSeverity 5 -MaximumDebugLogFiles 1250

    Or say I just set up replication and accidentally chose the empty folder as the primary copy, resulting in all my files moving into the hidden PreExisting folder – I can now easily move them back:

    Restore-DfsrPreservedFiles -Path "C:\RF01\DfsrPrivate\PreExistingManifest.xml" -RestoreToOrigin

    Dang, that hauls tail! Restore-DfsrPreservedFiles is so cool that it rates its own blog post (coming soon).

    I sense something; a presence I've not felt since…

    This new setup should be humming now – no schedule issues, big staging, no bottlenecks. Let’s see just how fast it is – I’ll create a series of propagation reports for all replicated folders in an RG, let it fan out overnight on all nodes, and then look at it in the morning:

    Start-DfsrPropagationTest -GroupName "rg01" -FolderName * -ReferenceComputerName srv01

    <snore, snore, snore>

    Write-DfsrPropagationReport -GroupName "rg01" -FolderName * -ReferenceComputerName srv01 -verbose

    Now I have as many propagation reports as I have RFs. I can schedule this easily too, which means I can have an ongoing, lightweight, and easily understood view of replication performance in my environment. If I change -GroupName to use “*”, and I had a reference computer that lived everywhere (probably a hub), I could easily create propagation tests for the entire environment.
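    As a sketch of that scheduling idea – assuming the in-box PSScheduledJob module and my example rg01/srv01 names; the -Path report output folder is my own assumption, so check Get-Help Write-DfsrPropagationReport first – something like this fans the test out nightly and writes the reports each morning:

    ```powershell
    # Hypothetical nightly propagation test plus a morning report run.
    Register-ScheduledJob -Name "DfsrPropTest" -Trigger (New-JobTrigger -Daily -At "10:00 PM") -ScriptBlock {
        Start-DfsrPropagationTest -GroupName "rg01" -FolderName * -ReferenceComputerName srv01
    }

    Register-ScheduledJob -Name "DfsrPropReport" -Trigger (New-JobTrigger -Daily -At "6:00 AM") -ScriptBlock {
        Write-DfsrPropagationReport -GroupName "rg01" -FolderName * -ReferenceComputerName srv01 -Path "C:\Reports"
    }
    ```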

    While we’re on the subject of ongoing replication:

    Tell me the first 100 backlogged files and the count, for all RFs on this server, with crazy levels of detail:

    Get-DfsrBacklog -GroupName rg01 -FolderName * -SourceComputerName srv02 -DestinationComputerName srv01 -verbose

    image

    Whoa, less detail please:

    Get-DfsrBacklog -GroupName rg01 -FolderName * -SourceComputerName srv02 -DestinationComputerName srv01 -verbose | ft FullPathName

    image

    Seriously, I just want the count!

    (Get-DfsrBacklog -GroupName "RG01" -FolderName "RF01" -SourceComputerName SRV02 -DestinationComputerName SRV01 -Verbose 4>&1).Message.Split(':')[2]

    image
    Boing boing boing

    Tell me the files currently replicating or immediately queued on this server, sorted with on-the-wire files first:

    Get-DfsrState -ComputerName srv01 | Sort UpdateState -descending | ft path,inbound,UpdateState,SourceComputerName -auto -wrap

    image

    Compare a folder on two servers and tell me if all their immediate file and folder contents are identical and they are synchronized:

    net use x: \\Srv01\c$\Rf01

    Get-DfsrFileHash x:\* | Out-File c:\Srv01.txt

    net use x: /d

    net use x: \\Srv02\c$\Rf01

    Get-DfsrFileHash x:\* | Out-File c:\Srv02.txt

    net use x: /d

    Compare-Object -ReferenceObject (Get-Content C:\Srv01.txt) -DifferenceObject (Get-Content C:\Srv02.txt)

    image

    Tell me all the deleted or conflicted files on this server for this RF:

    Get-DfsrPreservedFiles -Path C:\rf01\DfsrPrivate\ConflictAndDeletedManifest.xml | ft preservedreason,path,PreservedName -auto

    image

    Wait, I meant for all RFs on that computer:

    Get-DfsrMembership -GroupName * -ComputerName srv01 | sort path | % { Get-DfsrPreservedFiles -Path ($_.contentpath + "\dfsrprivate\conflictanddeletedmanifest.xml") } | ft path,PreservedReason

    image

    Tell me every replicated folder for every server in every replication group in the whole domain with all their details, and I don’t want to type more than one command or parameter or use any pipelines or input files or anything! TELL ME!!!

    image

    I guess I got a bit excited there. You know how it is.

    These are the cmdlets you’re looking for

    For experienced DFSR administrators, here’s a breakout of the Dfsradmin.exe and Dfsrdiag.exe console commands and their new Windows PowerShell cmdlet equivalents. Look for the numbered notes for those that don’t have a direct line-up.

    Legacy Tool Commands             Windows PowerShell Cmdlet
    -------------------------------  --------------------------------------------------------
    DFSRADMIN BULK                   No direct equivalent; implicit to Windows PowerShell
    DFSRADMIN CONN NEW               Add-DfsrConnection
    DFSRADMIN CONN DELETE            Remove-DfsrConnection
    DFSRADMIN CONN EXPORT            No direct equivalent; use Get-DfsrConnectionSchedule (1)
    DFSRADMIN CONN IMPORT            No direct equivalent; use Set-DfsrConnectionSchedule (1)
    DFSRADMIN CONN LIST              Get-DfsrConnection
    DFSRADMIN CONN LIST SCHEDULE     Get-DfsrConnectionSchedule
    DFSRADMIN CONN SET               Set-DfsrConnection
    DFSRADMIN CONN SET SCHEDULE      Set-DfsrConnectionSchedule
    DFSRADMIN HEALTH NEW             Write-DfsrHealthReport
    DFSRADMIN MEM DELETE             Remove-DfsrMember
    DFSRADMIN MEM LIST               Get-DfsrMember
    DFSRADMIN MEM NEW                Add-DfsrMember
    DFSRADMIN MEM SET                Set-DfsrMember
    DFSRADMIN MEMBERSHIP DELETE      No direct equivalent (2)
    DFSRADMIN MEMBERSHIP LIST        Get-DfsrMembership
    DFSRADMIN MEMBERSHIP SET         Set-DfsrMembership
    DFSRADMIN MEMBERSHIP NEW         No direct equivalent; use Set-DfsrMembership (3)
    DFSRADMIN PROPREP NEW            Write-DfsrPropagationReport
    DFSRADMIN PROPTEST CLEAN         Remove-DfsrPropagationTestFile
    DFSRADMIN PROPTEST NEW           Start-DfsrPropagationTest
    DFSRADMIN RF DELETE              Remove-DfsReplicatedFolder
    DFSRADMIN RF LIST                Get-DfsReplicatedFolder
    DFSRADMIN RF NEW                 New-DfsReplicatedFolder
    DFSRADMIN RF SET                 Set-DfsReplicatedFolder
    DFSRADMIN RG DELETE              Remove-DfsReplicationGroup
    DFSRADMIN RG IMPORT              No direct equivalent; use Set-DfsrGroupSchedule (1)
    DFSRADMIN RG EXPORT              No direct equivalent; use Get-DfsrGroupSchedule (1)
    DFSRADMIN RG LIST                Get-DfsReplicationGroup
    DFSRADMIN RG LIST SCHEDULE       Get-DfsrGroupSchedule
    DFSRADMIN RG NEW                 New-DfsReplicationGroup
    DFSRADMIN RG SET                 Set-DfsReplicationGroup
    DFSRADMIN RG SET SCHEDULE        Set-DfsrGroupSchedule
    DFSRADMIN RG DELEGATE            No direct equivalent; use Set-Acl (4)
    DFSRADMIN SUB LIST               No direct equivalent (5)
    DFSRADMIN SUB DELETE             No direct equivalent (5)
    DFSRDIAG BACKLOG                 Get-DfsrBacklog
    DFSRDIAG DUMPADCFG               No direct equivalent; use Get-AdObject (6)
    DFSRDIAG DUMPMACHINECONFIG       Get-DfsrServiceConfiguration
    DFSRDIAG FILEHASH                Get-DfsrFileHash
    DFSRDIAG GUID2NAME               ConvertFrom-DfsrGuid
    DFSRDIAG IDRECORD                Get-DfsrIdRecord
    DFSRDIAG POLLAD                  Update-DfsrConfigurationFromAD
    DFSRDIAG PROPAGATIONREPORT       Write-DfsrPropagationReport
    DFSRDIAG PROPAGATIONTEST         Start-DfsrPropagationTest
    DFSRDIAG REPLICATIONSTATE        Get-DfsrState
    DFSRDIAG STATICRPC               Set-DfsrServiceConfiguration
    DFSRDIAG STOPNOW                 Suspend-DfsReplicationGroup
    DFSRDIAG SYNCNOW                 Sync-DfsReplicationGroup
    No equivalent (7)                Get-DfsrPreservedFiles
    No equivalent (7)                Restore-DfsrPreservedFiles
    No equivalent (8)                Get-DfsrCloneState
    No equivalent (8)                Import-DfsrClone
    No equivalent (8)                Reset-DfsrCloneState
    No equivalent (8)                Export-DfsrClone
    No equivalent (9)                Set-DfsrServiceConfiguration

    1 Mainly because they were pretty dumb and we found no one using them. However, you can export the values using Get-DfsrConnectionSchedule or Get-DfsrGroupSchedule and pipe them to Out-File or Export-Csv. Then you can use Get-Content or Import-Csv to import them with Set-DfsrConnectionSchedule or Set-DfsrGroupSchedule.
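    For instance, a rough sketch of that export/import round trip for one connection schedule. Treat the property-to-parameter pairing (ScheduleType) as an assumption about what survives a CSV round trip; inspect the actual output of Get-DfsrConnectionSchedule before relying on it:

    ```powershell
    # Hypothetical schedule export/import via CSV (stand-in for DFSRADMIN CONN EXPORT/IMPORT).
    Get-DfsrConnectionSchedule -GroupName "rg01" -SourceComputerName srv01 -DestinationComputerName srv02 |
        Export-Csv "C:\connsched.csv" -NoTypeInformation

    Import-Csv "C:\connsched.csv" | ForEach-Object {
        Set-DfsrConnectionSchedule -GroupName $_.GroupName -SourceComputerName $_.SourceComputerName `
            -DestinationComputerName $_.DestinationComputerName -ScheduleType $_.ScheduleType
    }
    ```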

    2 Paradoxically, these old commands leave servers in a non-recommended state. To remove memberships from replication altogether in an RG, use Remove-DfsrMember (this is the preferred method). To remove a server from a specific membership but leave it in an RG, set its membership state to disabled using Set-DfsrMembership –DisableMembership $true.
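    To put both options side by side, using my example rg01/rf01/srv02 names:

    ```powershell
    # Preferred: remove the member from the replication group entirely.
    Remove-DfsrMember -GroupName "rg01" -ComputerName srv02

    # Alternative: keep the server in the RG but disable one RF membership.
    Set-DfsrMembership -GroupName "rg01" -FolderName "rf01" -ComputerName srv02 -DisableMembership $true
    ```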

    3 DFSR Windows PowerShell implements DFSRADMIN MEMBERSHIP NEW implicitly via the New-DfsReplicatedFolder cmdlet, which removes the need to create a new membership then populate it.

    4 You can use the Get-Acl and Set-Acl cmdlets in tandem with the Get-AdObject Active Directory cmdlet to configure delegation on the RG objects. Or just keep using the old tool, I suppose.
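    Here is a hedged sketch of that Get-Acl/Set-Acl approach via the AD: drive. The search base, group name, and rights level below are placeholders for illustration only; verify where the RG objects live in your own domain before trying this:

    ```powershell
    # Hypothetical delegation of full control on an RG's AD object.
    Import-Module ActiveDirectory
    $rg  = Get-ADObject -Filter 'Name -eq "rg01"' `
            -SearchBase "CN=DFSR-GlobalSettings,CN=System,DC=corp,DC=contoso,DC=com"
    $acl = Get-Acl -Path "AD:\$($rg.DistinguishedName)"
    $sid = (Get-ADGroup "DFSR Admins").SID
    $ace = New-Object System.DirectoryServices.ActiveDirectoryAccessRule($sid, "GenericAll", "Allow")
    $acl.AddAccessRule($ace)
    Set-Acl -Path "AD:\$($rg.DistinguishedName)" -AclObject $acl
    ```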

    5 The DFSRADMIN SUB DELETE command was only necessary because of the non-recommended DFSRADMIN MEMBERSHIP DELETE command. To remove DFSR memberships in a supported and recommended fashion, see note 2 above.

    6 Use the Get-AdObject Active Directory cmdlet against the DFSR objects in AD to retrieve this information (with considerably more details).

    7 The legacy DFSR administration tools do not have the capability to list or restore preserved files from the ConflictAndDeleted folder and the PreExisting folder. Windows Server 2012 R2 introduced these capabilities for the first time as in-box options via Windows PowerShell.

    8 The legacy DFSR administration tools do not have the capability to clone databases. Windows Server 2012 R2 introduced these capabilities for the first time as in-box options via Windows PowerShell.

    9 The legacy DFSR administration tools do not have the full capabilities of Set-DfsrServiceConfiguration. Administrators instead had to make direct WMI calls via WMIC or Get-WmiObject/Invoke-WmiMethod. These included the options to configure debug logging on or off, maximum debug log files, debug log verbosity, maximum debug log messages, dirty shutdown autorecovery behavior, staging folder high and low watermarks, conflict folder high and low watermarks, and purging the ConflictAndDeleted folder. These are all now implemented directly in the new cmdlet.
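    For example, a sketch combining several of those settings in a single call. The watermark parameter names here are from memory, so confirm them with Get-Help Set-DfsrServiceConfiguration before relying on them:

    ```powershell
    # Hypothetical one-shot service configuration (verify parameter names).
    Set-DfsrServiceConfiguration -ComputerName srv01 `
        -DebugLogSeverity 5 -MaximumDebugLogFiles 1250 `
        -StagingHighWatermarkPercentage 90 -StagingLowWatermarkPercentage 60
    ```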

    Give yourself to the Dark Side

    It’s the age of Windows PowerShell, folks. The old DFSR tools are relics of a bygone era, and the main limit now is your imagination. Once you look through the DFSR Windows PowerShell online or downloadable help, you’ll find that we gave you 82 examples just to get your juices flowing. From those, I hope you end up creating perfectly tailored solutions to all your day-to-day DFSR administrative needs.

    - Ned “No. I am your father!” Pyle

    DFS Replication Initial Sync in Windows Server 2012 R2: Attack of the Clones


    Hi folks, Ned here again. By now, you know that DFS Replication has some major new features in Windows Server 2012 R2. Today I talk about one of the most radical: DFSR database cloning.

    Prepare for a long post, this has a walkthrough…

    The old ways are not always the best

    DFSR – or any proper file replication technology - spends a great deal of time validating that servers have the same knowledge. This is critical to safe and reliable replication; if a server doesn’t know everything about a file, it can’t tell its partner about that file. In DFSR, we often refer to this “initial build” and “initial sync” processing as “initial replication”. DFSR needs to grovel files and folders, record their information in a database on that volume, exchange that information between nodes, stage files and create hashes, then transmit that data over the network. Even if you preseed the files on each server before configuring replication, the metadata transmissions are still necessary. Each server performs this initial build process locally, and then the non-authoritative server checks his work against an authoritative copy and reconciles the differences.

    This process is necessarily very expensive. Heaps of local IO, oodles of network conversation, tons of serialized exchanges based on directory structures. As you add bigger and more complex datasets, initial replication gets slower. A replicated folder that contains tens of millions of preseeded files can take weeks to synchronize the databases, even with preseeding stopping the need to send the actual files.

    Furthermore, there are times when you need to recreate replication of previously synchronized data, such as when:

    1. Upgrading operating systems

    2. Replacing computer hardware

    3. Recovering from a disaster

    4. Redesigning the replication topology

    Any one of these requires re-running initial replication on at least one node. This has been the way of DFSR since Microsoft introduced it in Windows Server 2003 R2.

    Cutting out the middle man

    DFSR database cloning is an optional alternative to so-called classic initial replication. By providing each downstream server with an exported copy of the upstream server’s database and preseeded files, DFSR reduces or eliminates the need for over-the-wire metadata exchange. DFSR database cloning also provides multiple file validation levels to ensure reconciliation of files added, modified, or deleted after the database export but before the database import. After file validation, initial sync is now instantaneous if there are no differences. If there are differences, DFSR only has to synchronize the delta of changes as part of a shortened initial sync process.

    We are talking about fundamental, state of the art performance improvements here, folks. To steal from my previous post, let’s compare a test run with ~10 terabytes of data in a single volume comprising 14,000,000 preseeded files:

    “Classic” initial sync    Time to convergence
    ------------------------  --------------------
    Preseeded                 ~24 days

    Now, with DB cloning:

    Validation Level    Time to export         Time to import         Improvement %
    ------------------  ---------------------  ---------------------  -------------
    2 – Full            9 days, 0 hours        5 days, 10 hours       40%
    1 – Basic           2 hours, 48 minutes    9 hours, 37 minutes    98%
    0 – None            1 hour, 13 minutes     6 hours, 8 minutes     99%

    I think we can actually do better than this – we found out recently that we’re having some CPU underperformance in our test hardware. I may be able to re-post even better numbers someday.

    For instance, here I created exactly one million files and cloned that volume, using VMs running on a 3 year old test server.

    The export:

    image

    The import:

    image

    That’s just over 12 minutes total. It’s awesome, like a grizzly bear that shoots lasers from its eyeballs. Yes, I own one of these shirts and I am not ashamed.

    At a high level

    Let’s examine the mainline case of creating a new replication topology using DB cloning:

    1. You create a replication group and a replicated folder, then add a server as a member of that topology (but no partners yet). This will be the “upstream” (source) server.

    2. You let the initial build complete.

    3. You export the cloned database from the upstream server.

    4. You preseed the files to the downstream (destination) server and copy in the exported clone DB files.

    5. You import the cloned database on the downstream server.

    6. You add the downstream server to the replication group and RF membership, just like classic DFSR.

    7. You let the initial sync validation complete.

    If you did everything right, step 7 is done instantly, and the server is now replicating normally for all further file additions, modifications, and deletions. It’s straightforward stuff, with only a handful of steps.

    Walkthrough

    Let’s get some hands-on with DB cloning. Below is a walkthrough using the new DFSR Windows PowerShell module and the mainstream “setting up a new replication topology” scenario.

    Requirements and sample setup

    • Active Directory domain with at least one domain controller (does not need to run Windows Server 2012 R2)
    • AD schema updated to at least Windows Server 2008 R2 (there are no forest or domain functional level requirements)
    • Two file servers running Windows Server 2012 R2 and joined to the domain (Windows Server 2012 and earlier file servers cannot participate in cloning scenarios, but do support replication with Windows Server 2012 R2)

    You can use virtualized DFSR servers or physical ones; it makes no difference. This walkthrough uses the following domain environment as an example:

    • One domain controller
    • Two member servers, named SRV01 and SRV02

    Configure DFSR

    To configure the DFSR role on SRV01 and SRV02 using Windows PowerShell, run the following command on each server:

    Install-WindowsFeature -Name FS-DFS-Replication -IncludeManagementTools

    Alternatively, to configure the DFSR role using Server Manager:

    1. Start Server Manager.

    2. Click Manage, and then click Add Roles and Features.

    3. Proceed to the Server Roles page, then select DFS Replication, leave the default option to install the Remote Server Administration Tools selected, and continue to the end.

    Configure volumes

    On SRV01 and SRV02, configure an F:, G:, and H: drive with NTFS. Each drive should be at least 2GB in size. If your test servers do not already have these drives configured or don’t have additional disks, you can shrink the existing C: volume with Resize-Partition, DiskMgmt.Msc, or Diskpart.exe, and then format the new volumes. Multiple drives allow you to test cloning multiple times without starting over too often – remember, DFSR databases are per-volume, and therefore cloning is as well.

    image

    For example, using Windows PowerShell with a virtual machine that has one 40GB disk and C: volume:

    Get-Partition | Format-Table -auto

    Resize-Partition -DiskNumber 0 -PartitionNumber 1 -Size 33GB

    New-Partition -DiskNumber 0 -Size 2GB -DriveLetter f | Format-Volume

    New-Partition -DiskNumber 0 -Size 2GB -DriveLetter g | Format-Volume

    New-Partition -DiskNumber 0 -Size 2GB -DriveLetter h | Format-Volume

     

    Clone a DFSR database

    1. On the upstream server SRV01 only, create H:\RF01 and create or copy in some test files (such as the 2,000 largest files from the root of the C:\Windows\SysWow64 folder).

    Important: Windows Server 2012 R2 Preview contains a bug that restricts cloning to under 3,100 files and folders – if you add more files, cloning export never completes. Ehhh, sorry: we fixed this issue before Preview shipped, but even then it was too late due to the build’s age. Do not attempt to clone more than 3,100 files with Basic validation while using the Preview version of Windows Server 2012 R2. If you want to use more files, use -Validation None. The RTM version of DFSR DB cloning will not have this limitation.

    Use the New-DfsReplicationGroup, New-DfsReplicatedFolder, Add-DfsrMember, and Set-DfsrMembership cmdlets to create a replicated folder and membership for SRV01 only, using H:\RF01 as the replicated folder path. You must specify PrimaryMember as $True for this membership, so that the server performs the initial build with no need for partners. You can run these commands on any server.

    Note: Do not add SRV02 as a member nor create a connection between the servers in this new RG. We don’t want that server starting classic replication.

    Sample:

    New-DfsReplicationGroup -GroupName "RG01"

    New-DfsReplicatedFolder -GroupName "RG01" -FolderName "RF01"

    Add-DfsrMember -GroupName "RG01" -ComputerName SRV01

    Set-DfsrMembership -GroupName "RG01" -FolderName "RF01" -ContentPath "H:\RF01" -ComputerName SRV01 -PrimaryMember $True

    Update-DfsrConfigurationFromAD -ComputerName SRV01

    Note the sample output below and how I used the built-in -Verbose parameter to see more AD polling details:

    image

    image

    2. Wait for a DFS Replication Event 4112 in the DFS Replication Event Log, which indicates that the replicated folder initialized successfully as primary.

    Note below in the sample output how I have a 6020 event; in a cloning scenario, it is expected and supported, despite what the event text implies.

    image

    3. Export the cloned database and volume config XML for the H: drive. Export requires that the output folder for the database and configuration XML file already exists. It also requires that no replicated folder on that volume be in an initial build or initial sync phase of processing.

    Sample:

    New-Item -Path "H:\Dfsrclone" -Type Directory

    Export-DfsrClone -Volume H: -Path "H:\Dfsrclone"

    Note the use of the -Validation parameter in the sample output below. Cloning provides three levels of file validation during the export and import processing. These ensure that if you are allowing users to alter data on the upstream server while cloning is occurring, files are later reconciled on the downstream.

    • None - No validation of files on source or destination server. Fastest and most optimistic. Requires that you preseed data perfectly. Any modification of data during the clone processing on the servers will not be detected or replicated until it is later modified after cloning.
    • Basic - (Default behavior, Microsoft recommended). Each file’s existing database record is updated with a hash of the ACL, the file size, and the last modified time. Good mix of fidelity and performance. This is the recommended validation level, and the maximum one you should use if you are replicating more than 10TB of data. Yes, we are going to support much more than 10TB and 11M files in WS2012 R2 as long as you use cloning; we’ll give you an official number at RTM.
    • Full - Same hashing mechanism used by DFSR during normal operations. Hash stored in database record for each file. Slowest but highest fidelity. If you exceed 10TB, we do not recommend using this value due to the comparatively poor performance.

    We recommend that you do not allow users to add, modify, or delete files on the source server as this makes cloning less effective, but we realize you live in the real world. Hence, the validation code.
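    For example, to request a level explicitly on the export from step 3 (per the bullets above, Basic is the default when you omit -Validation):

    ```powershell
    # Explicitly choose full validation - slowest, highest fidelity.
    Export-DfsrClone -Volume H: -Path "H:\Dfsrclone" -Validation Full
    ```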

    Important: You should not let users modify or access files on the downstream (destination) server until cloning completes end-to-end and replication is working. This is no different from our normal “classic” initial sync replication best practice for the past 8 years of DFSR, as there is a high likelihood that users will lose their changes through conflict resolution or movement to the preexisting files store. If users need to access these files and make changes, only allow them to access the original source server from which you exported.

    image

    Note the hint outputs above. The export cmdlet shows a suggested copy command for the database export folder. It also suggests preseeding hints for any replicated folders on that volume that will clone. All you have to do is fill in your destination server name and RF path.

    4. Wait for a DFS Replication Event 2402 in the DFS Replication Event Log, which indicates that the export completed successfully. As you can see from the sample outputs, there are four event IDs of note when exporting: 2406, 2410 (there may be many of these, they are progress indicators), 2402, and finally 2002 (which brings the volume back online for normal replication).

    image

    As you can see from my example, I cloned more than 3,100 files. I told you we fixed it already!

    5. Preseed the file and folder data from the source computer to the new downstream computer that will clone the DFS Replication database.

    Important: There should be no existing replicated folder content (folders, files, or database) on the downstream server's volume that will perform cloning – let the preseeding fill it all in, in this mainstream scenario. Microsoft recommends that you do not create network shares to the data until completion of cloning and do not allow users to add, modify, or change files on the downstream server until post-initial replication is operational.

    Sample preseeding command hint:

    Robocopy.exe "H:\RF01" "\\SRV02\H$\RF01" /E /B /COPYALL /R:6 /W:5 /MT:64 /XD DfsrPrivate /TEE /LOG+:preseed.log

    Important: Do not use the robocopy /MIR option on the root of a volume, do not manually create the replicated folder on the downstream server, and do not re-run robocopy over files that you already copied previously (i.e. if you have to start over, delete the destination folder and file structure and really start over). Let robocopy create all folders and copy all contents to the downstream server, via the /e /b /copyall options, every time you run it. Otherwise, you are very likely to end up with hash mismatches.

    Robocopy can be a bit… finicky.

    clip_image001

    clip_image002

    6. Copy the contents of the exported folder, both the database and xml, to the downstream server and save them in a temporary folder on the volume that contains the populated file data.

    Sample database file copy command:

    Robocopy.exe "H:\Dfsrclone" "\\SRV02\h$\dfsrclone" /B

    image

    7. On the downstream server SRV02, ensure that you correctly performed preseeding by using the Get-DfsrFileHash cmdlet to spot-check folders and files, and then compare to the upstream copies.

    This sample shows hashes for all the files beginning with “pri”:

    PS C:\> Get-DfsrFileHash \\SRV01\H$\RF01\pri*

    PS C:\> Get-DfsrFileHash "\\SRV02\H$\RF01\pri*"

    Sample output showing an easy “eyeball comparison”:

    image

    I recommend you run this on multiple small file subsets and at a few subfolder levels. There are many other examples of using the new Get-DfsrFileHash cmdlet here on TechNet already, including using the compare-object cmdlet to get fancy-schmancy.

    8. Ensure that the System Volume Information\DFSR folder does not exist on this downstream SRV02 server H: drive.

    Important: Naturally, this server must not already be participating in replication on that volume and if it is, you cannot clone into it.

    Sample (note: you may need to stop the DFSR service, run this, and then start the DFSR service):

    RD 'H:\System Volume Information\DFSR' -Recurse -Force

    Note: When re-using existing files that were previously replicated, you are likely to run into some benign errors when running this command due to the MAX_PATH limitations of RD, where some of the Staging folder contents will be too long to delete. You can ignore those warnings, or if you want to clean out the folder completely, you can use this workaround:

    A. Create an empty folder on H: called "H:\empty"

    B. Run the following command:

    robocopy h:\empty "h:\system volume information\dfsr" /MIR

    C. Delete the now empty “system volume information\DFSR” folder after the robocopy command completes.

    9. Import the cloned database on SRV02. For example:

    Import-DfsrClone -Volume H: -Path "H:\Dfsrclone"

    image

    10. Wait for a DFSR Informational Event 2404 in the DFS Replication Event Log, which indicates that the import completed successfully. As you can see from the sample outputs, there are four event IDs of note when importing: 2412, 2416 (there may be many of these, they are progress indicators), 2418, and finally 2404.

    image

    11. Add the downstream SRV02 server as a member of the replication group using Add-DfsrMember, set its membership using Set-DfsrMembership for the -ContentPath matching H:\rf01, and create bi-directional replication connections between the upstream and downstream servers using Add-DfsrConnection.

    Add-DfsrMember -GroupName "rg01" -ComputerName srv02

    Add-DfsrConnection -GroupName "rg01" -SourceComputerName srv01 -DestinationComputerName srv02

    Set-DfsrMembership -GroupName "rg01" -FolderName "rf01" -ComputerName srv02 -ContentPath "h:\rf01" 

    Update-DfsrConfigurationFromAD srv01,srv02

    Note in the sample output how I use Get-DfsrMember in a pipeline to force AD polling operations on all members in the RG01 replication group, instead of having to run for each server. Imagine how much easier this will make administering environments with dozens or hundreds of DFSR nodes.

    image

    image

    12. Wait for the DFSR informational event 4104, which indicates that the server is now normally replicating files. Unlike your previous experience, there will not be a preceding 4102 event when enabling replication of a cloned volume. If any files changed on the upstream server after you performed the cloning export, those files will replicate inbound to the downstream server authoritatively and you will see 4412 conflict events. If you allowed users to modify data on the downstream server while cloning operations were ongoing – and again, you shouldn’t – those files will conflict (and lose) or move to the preexisting folder, and any files the users had deleted will replicate back in again from the upstream. This is identical to classic initial sync behavior.

    Cheat sheet

    Now that you have tried the controlled scenario once, here is a cut-down “quick steps” version you can use for further testing with those F: and G: drives on your own. Once you use those up, you will need to remove the server from replication for those volumes in order to experiment further with things like a third server or cloning from an existing replicated folder.

    In this case, I am using the F: drive with its RF02 replicated folder in the RG02 replication group. Keep in mind – you don’t have to keep creating new RGs and we support cloning multiple custom writable RFs on a volume. These are just simplified walkthroughs, after all.

    On the upstream SRV01 server:

    New-DfsReplicationGroup "RG02" | New-DfsReplicatedFolder -FolderName "RF02" | Add-DfsrMember -ComputerName SRV01

    Set-DfsrMembership -GroupName "RG02" -ComputerName SRV01 -ContentPath F:\Rf02 -PrimaryMember $True -FolderName "RF02"

    Update-DfsrConfigurationFromAD

    Get-WinEvent "DFS Replication" -MaxEvents 10 | fl

    New-Item -Path "f:\Dfsrclone" -Type Directory

    Export-DfsrClone -Volume f: -Path "f:\Dfsrclone"

    Robocopy.exe "F:\RF02" "\\SRV02\F$\RF02" /E /B /COPYALL /R:6 /W:5 /MT:64 /XD DfsrPrivate /TEE /LOG+:preseed.log

    Robocopy.exe f:\Dfsrclone \\srv02\f$\Dfsrclone /B

    On the downstream SRV02 server (note: you may need to stop the DFSR service to perform the first step; be sure to start it up again so that you can run the import)

    RD "F:\System Volume Information\DFSR" -Recurse -Force

    Import-DfsrClone -Volume F: -Path "f:\Dfsrclone"

    Get-WinEvent "DFS Replication" -MaxEvents 10 | fl

    Add-DfsrMember -GroupName "RG02" -ComputerName "SRV02" | Set-DfsrMembership -FolderName "RF02" -ContentPath "f:\Rf02"

    Add-DfsrConnection -GroupName "RG02" -SourceComputerName "SRV01" -DestinationComputerName "SRV02"

    Update-DfsrConfigurationFromAD SRV01,SRV02

    Get-WinEvent "DFS Replication" -MaxEvents 10 | fl

    Some simple troubleshooting

    While the future TechNet content on DB cloning contains a complete troubleshooting section, here are some common issues seen by first-time users of this new feature:

    Symptom

    Export-DfsrClone does not show RootFolderPath or PreseedingHint output for SYSVOL or read-only replicated folders. After running Import-DfsrClone, SYSVOL and read-only replicated folders are not imported.

    Cause

    DFSR cloning does not support SYSVOL or read-only replicated folders in Windows Server 2012 R2. Those folders are skipped by cloning. This behavior is by design.

    Resolution

    Configure replication of read-only replicated folders using classic initial sync. Configure SYSVOL by promoting domain controllers normally.

     

    Symptom

    Export-DfsrClone does not show RootFolderPath and PreseedingHint output for one or more custom replicated folders. After running Import-DfsrClone, not all custom replicated folders are imported.

    Cause

    DFSR cloning does not support replicated folders that are currently in initial sync or initial building. Those replicated folders are skipped by cloning.

    Resolution

    Ensure that all replicated folders on a volume are in a normal, non-initial building, non-initial synchronizing state. Any replicated folders that did not get DFSR event 4112 (primary server) after initial build started, or event 4104 (non-primary server) after initial sync completed, are not capable of cloning yet. If your event logs have wrapped, you can use WMI to determine if a replicated folder is ready to clone:

     

    PS C:\> Get-WmiObject -Namespace "root\Microsoft\Windows\DFSR" -Class msft_dfsrreplicatedfolderinfo -ComputerName <some server> | ft replicatedfoldername,state -auto -wrap

     

    Symptom

    Import-DfsrClone fails with errors: “Import-DfsrClone : Could not import the database clone for the volume h: to "H:\dfsrclone". Confirm that you are running in an elevated Windows PowerShell session, the DFSR service is running, and that you are a member of the local Administrators group. Error code: 0x80131500. Details: The WS-Management service cannot process the request. The WMI service or the WMI provider returned an unknown error: HRESULT 0x80041001”

    Cause

    You did not preseed the replicated folders onto the destination volume with the same name and relative path.

    Resolution

    Ensure that you preseed the source replicated folders onto the destination volume using the same folder names and relative paths (i.e. if the source replicated folder was on “d:\dfsr\rf01”, the destination volume must contain “<volume>:\dfsr\rf01”).
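A common way to preseed while keeping names and relative paths identical is Robocopy in backup mode. The server and path names below are placeholders; tune the retry and threading switches to your environment:

```powershell
# Copy the source replicated folder to the same relative path on the
# destination volume, preserving security, timestamps, and attributes.
# DfsrPrivate is excluded because DFSR maintains it per-server.
Robocopy.exe D:\dfsr\rf01 \\SRV02\H$\dfsr\rf01 /E /B /COPYALL /R:6 /W:5 /MT:64 /XD DfsrPrivate /LOG:C:\preseed-rf01.log
```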

     

    Symptom

    DFSR event 2418 shows a significant mismatch count. Cloning takes as long as classic non-preseeded initial sync.

    Cause

    Files were not preseeded onto the destination server correctly or at all.

    Resolution

    Validate your preseeding technique and results. Reattempt the export and import process.
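One way to validate preseeding is to spot-check DFSR file hashes on both servers with the Get-DfsrFileHash cmdlet that ships in the Windows Server 2012 R2 DFSR module. The file paths are placeholders, and the FileHash property name is an assumption to verify with Get-Help:

```powershell
# Compare the DFSR hash of the same file on the source and destination.
# A mismatch means the file will not be treated as preseeded.
$src = Get-DfsrFileHash -Path 'D:\dfsr\rf01\report.docx'
$dst = Get-DfsrFileHash -Path '\\SRV02\H$\dfsr\rf01\report.docx'
if ($src.FileHash -eq $dst.FileHash) {
    'Hashes match - file is correctly preseeded'
} else {
    'Hash mismatch - revisit your preseeding technique'
}
```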

     

    Symptom

    Export-DfsrClone never completes or returns any output when using -Validation Basic or when -Validation is not specified.

    Cause

    A code defect in the Windows Server 2012 R2 Preview build only, triggered when cloning more than 3,100 files on a volume.

    Resolution

    This is a known issue in the Preview build. This was resolved in later builds. As a workaround, limit the number of files replicated with basic validation to under 3,100 per volume. If you wish to see the cloning performance with a larger dataset, use 3,100 much larger sample files (such as ISO, CAB, MSI, VHD, or VHDX files). Alternatively, use validation level none (0) instead of basic.

    Where you can learn more

    We have a comprehensive set of cloning and preseeding TechNet content on the way, as well as updates to the DFSR FAQ. These include steps on cloning an existing replica, dealing with hub servers that have many unique replicated folders from branch offices, using cloning to recover a corrupted database, and replacing or upgrading servers. Not to mention the new supported DFSR size limits!

    Once those are public, I will update this post with live links.

    - Ned “hmmm, I didn’t make any Star Wars references after all” Pyle


    DFS Replication in Windows Server 2012 R2: Restoring Conflicted, Deleted and PreExisting files with Windows PowerShell


    Hi folks, Ned here again. Today I talk about a new feature in Windows Server 2012 R2 DFSR: restoring preserved files. DFSR has had conflict, deletion, and preexisting file handling since its release, but until now, recovering those files required out-of-band tools. Those days are done: we now have Get-DfsrPreservedFiles and Restore-DfsrPreservedFiles.

    Before we begin

    Readers of my posts on AskDS and FileCab know I like to take the edge off. I figure everyone should learn we aren’t a corporate hive mind, just a collection of humans striving to make great software. For now though, I’ll keep the cheap laughs to a minimum, as when you’re restoring data it’s usually causing some hair-pulling and tears.

    Let’s get to it!

    What are preserved files?

    DFSR uses a set of conflict-handling algorithms during initial sync and ongoing replication to ensure that the appropriate files replicate between servers, as well as to preserve remote deletions.

    1. Conflicts - During non-authoritative initial sync, cloning, or ongoing replication, the losing copies of files with the same name and path that were modified on multiple servers move to: <rf>\DfsrPrivate\ConflictAndDeleted
    2. PreExisting - During initial sync or cloning, files with the same name and path that exist only on the downstream server move to: <rf>\DfsrPrivate\PreExisting
    3. Deletions - During ongoing replication, files deleted on one server move to the following folder on all other servers: <rf>\DfsrPrivate\ConflictAndDeleted

    image

    The ConflictAndDeleted folder has a 4GB “first in/first out” quota in Windows Server 2012 R2 (it’s 660MB in older operating systems). The PreExisting folder has no quota.

    When content moves to these folders, DFSR tracks it in the ConflictAndDeletedManifest.xml and PreExistingManifest.xml. DFSR mangles all files and folders in the ConflictAndDeleted folder with version vector information to preserve uniqueness. DFSR also mangles the top-level files and folders in the PreExisting folder, but leaves all the subfolder contents unaltered.

    The result is that it can be difficult to recover data, because much of it has heavily obfuscated names and paths. While you can use the XML files to recover the data on an individual basis, this doesn’t scale. Moreover, if you just set up replication and accidentally chose the empty replicated folder as primary, you want all those files back out of preexisting immediately with a minimum amount of fuss.
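For reference, the manifests are plain XML, so you can peek at them directly. This sketch assumes a root ConflictAndDeletedManifest element containing Resource entries with Path, NewName, and Time children; verify the element names against your own manifest before relying on them:

```powershell
# Load the conflict manifest and list each file's original path,
# mangled on-disk name, and preservation time.
[xml]$manifest = Get-Content 'C:\RF01\DfsrPrivate\ConflictAndDeletedManifest.xml'
$manifest.ConflictAndDeletedManifest.Resource |
    Select-Object Path, NewName, Time |
    Format-Table -AutoSize
```

As the rest of this post shows, the new cmdlets do this work for you; the raw XML is only worth touching for one-off spelunking.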

    How to create some preserved files

    In this walkthrough, I create a replicated folder that intentionally contains conflicted, deleted, and preexisting content for restoration testing. To follow along, you need a couple of Windows Server 2012 R2 computers with DFSR installed: let’s call them SRV01 and SRV02.

    1. On server SRV02 only, create C:\RF01 and add some test files (such as all the contents of the C:\Windows\SYSWOW64 folder). Make sure it has some subfolders with files in it.

    2. Create a new replication group named RG01 with a single replicated folder named RF01. Add SRV01 and SRV02 as members to the RG and replicate the C:\RF01 directory. You must specify SRV01 as the primary (authoritative) server in this case. I recommend the new DFSR Windows PowerShell cmdlets for this, so all my examples below go that route.

    Important: Choosing SRV01 to be primary is an “intentional mistake” in this scenario, as I want to create preexisting files on the downstream server. Ordinarily, you would make SRV02 primary, as it contains all the data to replicate and SRV01 contains none.

    New-DfsReplicationGroup -GroupName "RG01" | New-DfsReplicatedFolder -FolderName "RF01" | Add-DfsrMember -ComputerName SRV01,SRV02

    Add-DfsrConnection -GroupName "RG01" -SourceComputerName srv01 -DestinationComputerName srv02

    Set-DfsrMembership -GroupName "rg01" -FolderName "rf01" -ComputerName srv01 -ContentPath c:\rf01 –PrimaryMember $true

    Set-DfsrMembership -GroupName "rg01" -FolderName "rf01" -ComputerName srv02 -ContentPath c:\rf01

    Update-DfsrConfigurationFromAD srv01,srv02

    3. Verify that replication completes – in this case, SRV01 logs DFSR event 4112 and SRV02 logs DFSR event 4104. All files in c:\rf01 will appear to vanish from SRV02.

    4. Create a test BMP and RTF file on SRV01 in C:\RF01. Create a subfolder, and then create another BMP and RTF test file in that subfolder. Make sure those files replicate.
    Note: BMP and RTF are convenient choices because they are default file creation options in the Windows Explorer shell. In addition, unlike Notepad with TXT files, their editors follow standard conventions for opening and closing files.

    image

    5. Pause replication between SRV01 and SRV02 using the Suspend-DfsReplicationGroup cmdlet. For instance, to pause for 5 minutes:

    Suspend-DfsReplicationGroup -GroupName rg01 -SourceComputerName srv01 -DestinationComputerName srv02 -DurationInMinutes 5

    Suspend-DfsReplicationGroup -GroupName rg01 -SourceComputerName srv02 -DestinationComputerName srv01 -DurationInMinutes 5

    6. Modify the top-level BMP file on both servers (the same file), making sure to modify the SRV02 copy first (i.e. earlier, so that it will lose the conflict).

    7. Let replication resume from the suspension in step 5.

    8. Create another BMP file on SRV01, and then delete that file and the subfolder you created in step 4, along with its contents.

    9. Validate that DFSR deletes the files from both servers, and that the file you modified on both servers now holds the version that came from SRV01.

    10. On SRV01, manually recreate the same-named file that you previously deleted in step 8, ensure it replicates to SRV02, then delete it from SRV01 and verify that the deletion replicates. This creates a scenario with multiple deleted file versions.

    11. Copy the following folders and files elsewhere as a backup on SRV02:

    C:\RF01\DfsrPrivate\ConflictAndDeleted

    C:\RF01\DfsrPrivate\ConflictAndDeletedManifest.xml

    C:\RF01\DfsrPrivate\PreExisting

    C:\RF01\DfsrPrivate\PreExistingManifest.xml

    For example:

    Robocopy.exe c:\rf01\dfsrprivate c:\rf01backup\dfsrprivate /mt:64 /b /e /xd staging /copy:dt

    Note: Restoring preserved files does not require running the DFSR service or an active RG/RF. You can operate on the preserved data independently on Windows Server 2012 R2, including locally on a Windows 8.1 Preview computer running RSAT with a local copy of the DfsrPrivate folder. Backing up the preserved files prior to restoration is a best practice, since restore operations are destructive by default (they move files instead of copying).

    Now I have some conflicts, some deletions, and some preexisting files and folders, all on SRV02 in the c:\rf01\dfsrprivate folder. Let’s go to work.


    image
       image

    image

    How to inventory preserved files

    The Get-DfsrPreservedFiles cmdlet tells you everything you want to know about files and folders in the ConflictAndDeleted and PreExisting stores, according to the XML manifests. Let’s inventory the preserved files, in order to see which files DFSR saved and some details about them. All it needs is a single parameter called -Path that points to the manifest.

    1. On SRV02, see the conflicted and deleted files:

    Get-DfsrPreservedFiles -Path C:\RF01\DfsrPrivate\ConflictAndDeletedManifest.xml

    image 

    2. On SRV02, see the preexisting files:

    Get-DfsrPreservedFiles -Path C:\RF01\DfsrPrivate\PreExistingManifest.xml

    image

    3. On SRV02, retrieve only RTF files with their original path and new names:

    Get-DfsrPreservedFiles -Path C:\RF01\DfsrPrivate\ConflictAndDeletedManifest.xml | Where-Object path -like *.rtf | ft path,preservedname

    image

    4. On SRV02, see only conflicted files, when the conflict occurred, and what server originated the conflict:

    Get-DfsrPreservedFiles -Path C:\RF01\DfsrPrivate\ConflictAndDeletedManifest.xml | Where-Object PreservedReason -like UpdateConflict | ft path,preservedtime,UID -auto

    ConvertFrom-DfsrGuid -Guid 40A4EEBF-110B-4F40-990C-B5ADBCA97725 -GroupName rg01

    image

    No longer are you left wondering when a user deleted a file or from which server, nor if a particular file is still in the ConflictAndDeleted cache on the other nodes. It’s all there at your fingertips.
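Because the cmdlet emits ordinary objects, the usual pipeline tricks apply. For example, to summarize why files were preserved (property names here match the ones used in the examples above):

```powershell
# Count preserved files grouped by the reason DFSR preserved them
# (e.g., UpdateConflict versus deletion).
Get-DfsrPreservedFiles -Path 'C:\RF01\DfsrPrivate\ConflictAndDeletedManifest.xml' |
    Group-Object PreservedReason |
    Select-Object Name, Count
```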

    How to restore preserved files

    The Restore-DfsrPreservedFiles cmdlet can restore one or more files and folders in the ConflictAndDeleted and PreExisting stores, according to the XML manifests. All it needs is the manifest and a decision about how to recover the files and where to put them, but there are some additional options:

    • -Path – the path of a manifest XML file in the replicated folder
    • -RestoreToPath or -RestoreToOrigin – restore the data to a new arbitrary path, or to its original path
    • -RestoreAllVersions (optional) – restore all versions of conflicted and deleted files with an appended date-time stamp, or just the latest version. Default is FALSE, so only the latest files restore
    • -CopyFiles (optional) – copy the files instead of moving them. Default is FALSE, so files move
    • -AllowClobber (optional) – allow the restore to overwrite existing files in the destination

    Important: By default, this cmdlet moves all preserved files from PreExisting. It also moves the latest version of files from ConflictAndDeleted and removes the remaining ones (this is intentional; otherwise, every time you ran this cmdlet, you would restore an increasingly older version of conflicted files). We strongly recommend backing up the ConflictAndDeleted and PreExisting folders before you use this cmdlet!

    Let’s look at some examples of recovery.

    Note: some testing can “break” the test environment we created above, because there will no longer be any files to restore once you move them. I recommend you make a few backup copies of your saved DfsrPrivate folder so you can go through this a few times.

    1. On SRV02, move all preexisting files to their original location:

    Restore-DfsrPreservedFiles -Path "C:\RF01\DfsrPrivate\PreExistingManifest.xml" -RestoreToOrigin

    image

    This is very quick, as I’m moving the data locally on the same volume. Look here when I restore ~1GB and 4,500 files and folders of SYSWOW64:

    image

    I am back in business 2.3 seconds later.

    2. On SRV02, copy all conflicted and deleted files to the original location, preserving all versions of the files so that users can decide which ones to keep:

    Restore-DfsrPreservedFiles -Path "C:\RF01\DfsrPrivate\ConflictAndDeletedManifest.xml" -RestoreToOrigin -RestoreAllVersions -CopyFiles

    image

    3. On SRV02, move all versions of the preserved files verbosely to an alternate location for later analysis and manual restore:

    Restore-DfsrPreservedFiles -Path "C:\RF01\DfsrPrivate\ConflictAndDeletedManifest.xml" -RestoreToPath <alternate path> -RestoreAllVersions -Force -Verbose

    image

    Note: an unintended artifact of this type of “alternate location” operation is the creation of an empty set of folders based on the RF itself, but at one level deeper. You can ignore or delete that top folder named after the RF itself; it will contain only empty folders. This is something we might fix later, but it is totally benign and cosmetic.

    4. On SRV02, move the newest version of all conflicted and deleted files to the original location, removing all versions of moved files from the ConflictAndDeleted folder, skipping any files that already exist and leaving them in the ConflictAndDeleted folder:

    Restore-DfsrPreservedFiles -Path "C:\RF01\DfsrPrivate\ConflictAndDeletedManifest.xml" -RestoreToOrigin

    image

    As you can tell, these cmdlets are very simple to use and can meet most data restoration needs. Now if Tim from Sales accidentally deletes his shared document that he started this morning – and is therefore not in the nightly backup – you have one more way to get it back.
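If you only need Tim’s one file back, you can pair the inventory cmdlet with a plain copy rather than restoring the whole store. This is a sketch; the file name and destination are placeholders, and it assumes PreservedName holds the mangled on-disk name inside ConflictAndDeleted, as shown in the inventory examples above:

```powershell
# Find the newest preserved copy of one file, then copy just that file back.
$hit = Get-DfsrPreservedFiles -Path 'C:\RF01\DfsrPrivate\ConflictAndDeletedManifest.xml' |
    Where-Object Path -like '*SalesForecast.rtf' |
    Sort-Object PreservedTime -Descending |
    Select-Object -First 1

# PreservedName is the version-vector-mangled name on disk.
Copy-Item -Path (Join-Path 'C:\RF01\DfsrPrivate\ConflictAndDeleted' $hit.PreservedName) `
    -Destination 'C:\Restored\SalesForecast.rtf'
```

Using Copy-Item leaves the preserved copy in place, so the manifest stays consistent with the on-disk store.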

    I hope this is another good example of why you should move to Windows Server 2012 R2.

    - Ned “self-preservation” Pyle

    Going to TechEd Australia?

    Dell PowerEdge VRTX: 4-Node Cluster-in-a-Box That Can Be Deployed in 45 Minutes


    Hi Folks –

    Dell’s latest server, the PowerEdge VRTX Shared Infrastructure Platform is an incredible cluster-in-a-box that delivers highly-available services and storage in one beautiful package. There are some great innovations in this chassis, for sure; the shared PCI bus and virtualized PERC8 storage controllers are really groundbreaking stuff. However, what really has me impressed is the price. On the Dell website today, they have this chassis and two blades available for under $10,000, which is about as low a price as you can find for a Windows Server 2012 cluster with hardware RAID!

     

    image

    Perfect for Office Environments

    Dell bills the PowerEdge VRTX as a solution that “redefines office IT.” And I’m inclined to agree: its innovative, converged design is based on a modular chassis that supports 4 compute nodes and up to 48TB of storage in a 5U rack-mount enclosure—or you can turn it sideways and it’s a tower that fits under your desk. It’s great for small businesses, which often face space constraints, hardware sprawl, low tolerance for heat and noise, and a lack of datacenter-class power or cooling. The PowerEdge VRTX is built to address all these issues, making it easy to deploy compute and storage resources wherever you need them.

    A Four-Node Cluster-in-a-Box That Can Be Deployed in 45 Minutes

    The speed with which the PowerEdge VRTX can be deployed is just as impressive as its physical design. Last year, I blogged about how the OEM Appliance OOBE (out-of-the-box experience) originally developed for Windows Storage Server was being included in Windows Server 2012 (Standard and Datacenter Editions) as well as both editions of Windows Storage Server 2012. At that time, it supported the deployment of standalone servers or 2-node failover clusters. This spring, the Windows team released an update that adds support for 4-node clusters. During its development, we worked closely with Dell using original prototypes of the hardware.

    Dell is including that update with the PowerEdge VRTX to deliver a four-node cluster-in-a-box that can be deployed in 45 minutes. After powering on the system, the user can simply follow the sequence of tasks provided by Dell, which include configuring the network, joining a domain, provisioning some storage, and creating the cluster. Here is a screenshot that shows the Initial Configuration Tasks window:

    image

    And here’s a great shot of the Dell booth at TechEd, where they highlighted the PowerEdge VRTX and how quickly it can be deployed.

    image

    Virtualized Storage Controller

    Dell’s innovative PCI-e virtualization technology and PERC8 storage controller enable simultaneous access to storage from each server node. This reduces the overhead in accessing shared storage and also reduces the total storage needed by allowing each server node to share the same physical storage resources.

    Such innovations make the PowerEdge VRTX a great Hyper-V host. You can easily scale-up RAM, add more CPUs, and install additional PCI-e add-in cards (including several 10GbE options from Intel and Broadcom) to run numerous VMs at once, handle huge SQL instances, or support thousands of IIS workloads—all while migrating running VMs from one node to another at lightning speed. You can easily assign the shared PCI-e slots to any of the nodes using the system’s Chassis Management Controller (CMC).

    Dell recently released a reference architecture to help customers deploy virtualized desktop infrastructure (VDI) workloads on the PowerEdge VRTX using Hyper-V. The reference architecture includes sizing guidance for two default configurations: two M620 blades and 15 disks, which is designed to support 250 virtual Windows 8 desktops; and four M620 blades and 25 disks, which is designed to support 500 virtual Windows 8 desktops. You can configure and price these solutions using Dell’s Solutions Configurator.

    System Specifications

    Detailed system specifications for the PowerEdge VRTX can be found on the Dell website or the downloadable data sheet. Here are a few highlights:

    image


    Recap

    Put simply, the PowerEdge VRTX has the potential to be a game-changer when it comes to office IT. I’m hard-pressed to think of another system that can match all the benefits that the PowerEdge VRTX has to offer, including:

    • Optimized for office environments. The PowerEdge VRTX is a compact, shared infrastructure platform with office-level acoustics and 110V power support. It provides front-access KVM, USB, LCD display, and optional DVD-RW, and can be deployed as a standalone tower or a 5U rack enclosure.
    • Converged servers, storage and networking. The PowerEdge VRTX combines servers, storage, and networking to provide impressive performance and capacity. A single chassis supports up to four dual-socket PowerEdge M520 or M620 server blades, up to 8 PCI devices, and up to 48 TB storage capacity. It is very easy to add additional blades to scale the system if you start with 2 or 3 blades.
    • Flexible shared storage. All four server nodes have access to low-latency, shared internal storage, making the PowerEdge VRTX ideal for virtualization and clustering, highly economical, and easier to manage than traditional SANs. A single chassis can support up to 12 3.5” HDDs (48TB max) to scale for capacity or up to 25 2.5” HDDs (30TB max) to scale for performance.
    • Integrated networking and flexible I/O. The PowerEdge VRTX includes an embedded gigabit Ethernet switch, eliminating the need to purchase a separate networking device. An optional pass-through Ethernet module option with eight GbE ports supports up to 8Gb aggregate bandwidth. PCIe resources are shared across the compute nodes within the chassis.
    • Simple, efficient systems management. A rich, unified system management console reduces administration time and effort, enabling you to deploy, monitor, update, and maintain the system through a single pane-of-glass that covers servers, storage and networking. Dell OpenManage 1.2 with Chassis Management Controller (CMC) and GeoView enable you to monitor all PowerEdge VRTX systems, anywhere on your network.
    • Seamless management integration. PowerEdge VRTX systems management integrates with major third-party management tools, allowing you to use what you already know and own. This makes it easy to deploy VRTX into infrastructures already managed by Dell OpenManage or third-party management solutions, such as Microsoft System Center and VMware vCenter.

    If the above has you interested in Dell’s PowerEdge VRTX, the Shared Infrastructure page on the Dell website is a good place to start. I think you’ll be impressed… I know that I am.

    Cheers,
    Scott M Johnson
    Senior Program Manager
    Windows Server OEM Appliance OOBE

    Managing iSCSI Target Server through Storage Cmdlets


    Context

    Windows Server 2012 R2 ships with a rich set of standards-compliant storage management functionality. This functionality was originally introduced in Windows Server 2012, and you should reference Jeff’s excellent blog that introduced related concepts.

    iSCSI Target Server on its part shipped its SMI-S provider as part of the OS distribution for the first time in Windows Server 2012 R2. For more details on its design and features, see my previous blog post iSCSI Target Server in Windows Server 2012 R2.

    I will briefly summarize here how the iSCSI Target Server SMI-S provider fits into the storage management ecosystem; please refer to Jeff’s blog post for the related deep-dive discussion. Storage cmdlets are logically part of the Storage Management API (SM-API) layer, and the ‘Windows Standards-based Storage Management Service’ plumbs the Storage cmdlet management interactions through to the new iSCSI Target Server SMI-S provider. This blog post is all about how you can use the new SMI-S provider hands-on, using the SM-API Storage cmdlets. Note that iSCSI Target Server can alternatively be managed through its native iSCSI-specific cmdlets, and related WMI provider APIs. While the native approach allows you to comprehensively manage all iSCSI-specific configuration aspects, the Storage cmdlet approach helps you normalize on a standard set of cmdlets across different technologies (e.g. Storage Spaces, 3rd party storage arrays).

    Before we jump into the discussion of Storage cmdlet-based management, you should keep two critical caveats in mind:

    • Jeff’s caution about having multiple management clients potentially tripping up each other; to quote –

    “You should think carefully about where you want to install this service – in a datacenter you would centralize the management of your resources as much as practicable, and you don’t want to have too many points of device management all competing to control the storage devices. This can result in conflicting changes and possibly even data loss or corruption if too many users can manage the same arrays.”

    • You must decide how you want to manage a Windows-based iSCSI Target Server starting from the time the feature is installed on Windows Server. To begin with, you are free to choose from either WMI/PowerShell, or SM-API/SMI-S. But once you start using one management approach, you must stick with that approach until the iSCSI Target Server is decommissioned. Switching between the management approaches could leave the iSCSI Target Server in an inconsistent and unmanageable state, and potentially could even cause data loss.

    For most users, the compelling reason for managing an iSCSI Target Server through SMI-S is usually one of the following:

    1. They have existing scripts that already are written to the Windows Storage cmdlets and the Windows SMI-S cmdlets, and want to use the same scripts to manage Windows Server-based iSCSI Target Server, or,
    2. They use 3rd party storage management products that consume the Storage Management API (SM-API) and they plan to manage Windows Server-based iSCSI Target Server using that same 3rd party software, or,
    3. (Perhaps most likely) They use SCVMM to manage iSCSI Target Server-based storage through SMI-S. This can be accomplished using SCVMM-specific cmdlets and UI, and is covered in detail elsewhere, so it will not be the focus of this blog post; we will focus only on the non-SCVMM management approach here.

    Relating Terminology

    Let us do a quick review of SM-API/SMI-S concepts so you can intuitively relate them to native Windows terminology as we move into hands-on discussion:

    Each entry below gives the SM-API or SMI-S concept, the corresponding iSCSI Target Server implementation term, and more detail:

    • SMI-S Provider → WMI provider. Manageability end point for the iSCSI Target Server; the iSCSI Target Server SMI-S provider is actually also built off the WMI architecture under the covers.

    • Storage Pool → Hosting volume where the VHD files are stored. Storage pools in SMI-S subdivide the total available capacity in the system into groups as desired by the administrator. In the iSCSI Target Server design, each virtual disk is persisted on a file system hosted on a Windows volume.

    • Storage Volume → iSCSI Virtual Disk (SCSI Logical Unit). An SMI-S storage volume is the allocation of storage capacity exposed by the storage system; a storage volume is provisioned out of a storage pool. The Windows Server Storage Service implementation calls this a ‘Virtual Disk’. The iSCSI Target Server design calls this an iSCSI virtual disk; see New-IscsiVirtualDisk.

    • Masking operation → Removal of a mapping. Masking of a storage volume removes access to that SCSI LU from an initiator. In the iSCSI Target Server design, it is not possible to selectively mask a single LU from an individual initiator, although a single LU can be removed from the SCSI target (Remove-IscsiVirtualDiskTargetMapping). The access privilege can then be removed from an initiator at the target scope.

    • Unmasking operation → Adding a mapping. Unmasking of a storage volume grants access to that SCSI LU to an initiator. In the iSCSI Target Server design, it is not possible to selectively unmask a single LU to an individual initiator, although a single LU can be added to a SCSI target (Add-IscsiVirtualDiskTargetMapping). The access privilege can then be granted to an initiator at the target scope.

    • SCSI Protocol Controller (SPC) → SCSI Target. SCSI protocol controller refers to the initiator view of the target. In the Windows Server Storage Service implementation, this is logically equivalent to a masking set, which iSCSI Target Server realizes as a SCSI target; see New-IscsiServerTarget.

    • Snapshots → Snapshots. The terminology is the same on this one, but there are a couple of critical differences to keep in mind between the volsnap-based iSCSI virtual disk snapshots that you can create with Checkpoint-IscsiVirtualDisk, versus the diff-VHD-based snapshot on the original VHD that you can create with New-VirtualDiskSnapshot. The former is a read-only snapshot, whereas the latter is a writable snapshot. And be aware that you cannot manage a snapshot taken in one management approach (say, WMI) via the tools in the other approach (say, SMI-S).

    • Storage Subsystem → iSCSI Target Server. This is a straightforward mapping for standalone iSCSI Target Servers, where the iSCSI SMI-S provider implementation is simply an embedded SMI-S provider just for that target server. In the case of a clustered iSCSI Target Server, however, the SMI-S provider at the client access point reports not only the storage subsystems (iSCSI Target Server resource groups) owned by that cluster node, but also any additional iSCSI Target Server resource groups owned by the rest of the failover cluster nodes – reporting each as a storage subsystem. Put differently, the SMI-S provider then acts like an embedded provider for that cluster node, and as a proxy SMI-S provider for the rest of that cluster.

    Register SMI-S Provider on a Management Client

    To register an SMI-S provider, you need to know the provider’s URI – it is the machine name for a standalone target and the cluster resource group name in the case of a clustered iSCSI Target Server – and credentials for a user account that is in that target server’s local administrators security group. In the following example, the SMI-S provider can be accessed on the machine “fsf-7809-09” and the user account with administrative privileges is “contoso\user1”. Get-Credential cmdlet prompts for the password at run time (you can reference Get-Credential page for other more scripting-friendly, albeit less secure, options to accomplish the same).

    image
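The registration shown in the screenshot boils down to two commands. This sketch uses the example machine and account names from above; the exact Register-SmisProvider parameter set may vary by build, so confirm with Get-Help before running:

```powershell
# Prompt for the password of an account in the target server's local
# Administrators group, then register the provider by machine name.
$cred = Get-Credential 'contoso\user1'
Register-SmisProvider -ConnectionUri 'fsf-7809-09' -Credential $cred
```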

    Discover the Storage Objects

    After registering the provider, you can now update storage provider cache to get an inventory of all the manageable storage objects through this SMI-S provider:

    image
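The cache refresh in the screenshot is a single cmdlet; Full discovery walks everything the registered providers expose:

```powershell
# Refresh the storage provider cache so pools, volumes, and masking sets
# reported by the newly registered provider become visible.
Update-StorageProviderCache -DiscoveryLevel Full
```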

    You can then list the storage subsystem and related details. Note that although the following screen shots show items related to Storage Spaces, they are unrelated to iSCSI Target Server SMI-S provider. iSCSI Target Server SMI-S provider items are highlighted in green.

    image

    You can inspect the available storage pools and filter by the friendly name, as shown in the example. Notice that the hosting volumes, which you already know iSCSI Target Server reports as storage pools, carry friendly names that include respective drive letters on the target server. Also notice the Primordial storage pool, which effectively represents entire capacity available on the iSCSI Target Server. However, keep in mind that you can create SCSI Logical Units only out of what SMI-S calls “concrete storage pools”, i.e. only from pools which have the ‘IsPrimordial’ attribute set to false in the following screen shot.

    image
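A sketch of the pool inspection described above, filtering to the iSCSI Target Server pools by their friendly-name prefix (the prefix is taken from the example names in this post; adjust it to what your provider reports):

```powershell
# List only the concrete (non-primordial) pools from the iSCSI Target
# Server provider - these are the pools you can provision LUs from.
Get-StoragePool |
    Where-Object { $_.FriendlyName -like 'iSCSITarget*' -and -not $_.IsPrimordial } |
    Format-Table FriendlyName, Size, AllocatedSize -AutoSize
```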

    Create Storage Objects

    The first operation we show carves out a new logical disk out of an existing concrete storage pool. You can use one of two cmdlets to create a new virtual disk: New-VirtualDisk and New-StorageSubsystemVirtualDisk. Technically, iSCSI Target Server SMI-S provider works fine with either cmdlet and we will show examples for both, although you probably want to use the New-VirtualDisk so you can intentionally select the storage pool to provision the storage volume from. New-StorageSubsystemVirtualDisk in contrast auto-selects the storage pool to provision the capacity from.

    image
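A minimal New-VirtualDisk sketch matching the screenshot; the pool friendly name comes from the examples above and the LUN name and size are placeholders:

```powershell
# Carve a 10 GB fixed-provisioned virtual disk out of a specific
# concrete storage pool on the iSCSI Target Server.
Get-StoragePool -FriendlyName 'iSCSITarget: FSF-7809-09: C:' |
    New-VirtualDisk -FriendlyName 'LUN01' -Size 10GB -ProvisioningType Fixed
```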

    Or you can provision the possible maximum-sized virtual disk by using “-UseMaximumSize” parameter as shown:

    image

    If you prefer to use the New-StorageSubsystemVirtualDisk cmdlet, you need to specify the storage subsystem parameter, and in the example below, you can see it auto-selected a storage pool in the selected subsystem - the “iSCSITarget: FSF-7809-09: C:” pool.

    image

    With the New-VirtualDiskSnapshot cmdlet, you can take a snapshot of a virtual disk.

    image
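In sketch form, with 'LUN01' as a placeholder disk name (pipeline input to New-VirtualDiskSnapshot is an assumption to verify with Get-Help):

```powershell
# Take a writable, diff-VHD-based snapshot of an existing virtual disk.
Get-VirtualDisk -FriendlyName 'LUN01' | New-VirtualDiskSnapshot
```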

    To create a masking set for a storage subsystem, you can use New-MaskingSet. You must include at least one initiator in the new masking set. You can also map one or more virtual disks to the new masking set. An iSCSI initiator is identified by its IQN, and the virtual disks by their names. The script below creates a new masking set and adds one initiator and two virtual disks, and then queries the new masking set to confirm the details.

    image
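The masking-set creation from the screenshot looks roughly like the sketch below. The subsystem pattern, masking set name, disk names, and initiator IQN are all placeholders:

```powershell
# Create a masking set exposing two virtual disks to a single initiator,
# then query it back to confirm the details.
$ss = Get-StorageSubSystem -FriendlyName '*FSF-7809-09*'
$ss | New-MaskingSet -FriendlyName 'MaskingSet01' `
    -VirtualDiskNames 'LUN01','LUN02' `
    -InitiatorAddresses 'iqn.1991-05.com.microsoft:app01.contoso.com'

Get-MaskingSet -FriendlyName 'MaskingSet01' | Format-List *
```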

    Modify and Remove the Storage Objects

    With the Resize-VirtualDisk cmdlet, you can expand an existing virtual disk as shown in the following example. Note however that you will still need to extend the partition and the volume to be able to make the additional capacity usable.

    [Screenshot: Resize-VirtualDisk example]
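    A sketch, with an illustrative disk name and target size:

    ```powershell
    # Grow the LU; the partition and volume on the initiator still need extending afterwards
    Resize-VirtualDisk -FriendlyName "TestLU1" -Size 20GB
    ```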

    You can also modify the masking set that you’ve just created by adding additional virtual disks or initiators to it, as the following examples show. Note, however, that you do not want to share a virtual disk with a file system across multiple initiators unless the initiators (hosts) are clustered; otherwise, the setup will inevitably cause data corruption sooner or later!

    [Screenshot: adding additional virtual disks and initiators to the masking set]
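    A sketch of both modifications; the names and IQN are illustrative (verify the initiator-add parameters with Get-Help Add-InitiatorIdToMaskingSet):

    ```powershell
    $ms = Get-MaskingSet -FriendlyName "LibMaskingSet"

    # Map one more LU into the existing masking set
    Add-VirtualDiskToMaskingSet -MaskingSetUniqueId $ms.UniqueId `
                                -VirtualDiskNames "TestLU3" -DeviceAccesses ReadWrite

    # Grant one more initiator access to the set
    Add-InitiatorIdToMaskingSet -MaskingSetUniqueId $ms.UniqueId `
                                -InitiatorAddresses "iqn.1991-05.com.microsoft:host2.contoso.com"
    ```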

    You can of course also remove the masking set and virtual disks that you just created, as the following examples illustrate. Note the order of operations: you must first remove a virtual disk from any masking sets (or remove the masking sets themselves), and only then delete the virtual disk.

    [Screenshot: removing the masking set and deleting the virtual disks]
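    A sketch of the teardown, respecting that ordering (names illustrative):

    ```powershell
    $ms = Get-MaskingSet -FriendlyName "LibMaskingSet"

    # Unmap the LU first (or remove the whole masking set) ...
    Remove-VirtualDiskFromMaskingSet -MaskingSetUniqueId $ms.UniqueId -VirtualDiskNames "TestLU1"
    $ms | Remove-MaskingSet

    # ... and only then delete the virtual disk itself
    Remove-VirtualDisk -FriendlyName "TestLU1"
    ```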

    Finally, when you no longer need to manage the iSCSI Target Server from this management client, you can unregister the SMI-S provider as shown in the following example.

    [Screenshot: unregistering the SMI-S provider]
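    A sketch, assuming the provider was registered with Register-SmisProvider against the same URI (the host name here is illustrative):

    ```powershell
    Unregister-SmisProvider -ConnectionUri https://fsf-7809-09.contoso.com:5989
    ```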

    I want to acknowledge my colleague Juan Tian, who helped me with all the preceding Windows PowerShell examples.

    Finally, I sincerely hope that you now have the tools you need to work with an iSCSI Target Server SMI-S provider. Give it a try and let me know how it’s working for you!

    iSCSI Target Server in Windows Server 2012 R2 for VMM Rapid Provisioning


    Context

    iSCSI Target Server shipped its SMI-S provider as part of the OS distribution for the first time in Windows Server 2012 R2. For more details on its design and features, see my previous blog post iSCSI Target Server in Windows Server 2012 R2.

    System Center 2012 SP1 Virtual Machine Manager (VMM) and later versions manage the storage presented from an iSCSI Target Server for provisioning block storage to Hyper-V hosts. VMM configuration guidance for managing the iSCSI Target Server running on Windows Server 2012 is available in this TechNet page. This guidance is still accurate for Windows Server 2012 R2, with the following two exceptions.

    1. The iSCSI Target Server SMI-S provider is now included in the OS distribution, so you no longer need to install it from the VMM media. In fact, the SMI-S provider included in the Windows Server 2012 R2 distribution is the only compatible, supported provider. Further, when you install the iSCSI Target Server feature, the right SMI-S provider is transparently installed.
    2. The VMM SAN-based rapid provisioning scenario requires one additional step to work with iSCSI Target Server.

    The rest of this blog post is all about #2.

    VMM SAN-based Rapid Provisioning

    VMM SAN-based rapid provisioning, as the name suggests, helps an administrator rapidly provision new Hyper-V virtual machines. The key to this fast provisioning is copying the VHD files for the new virtual machine in the most efficient manner possible. In this case, VMM relies on the iSCSI Target Server snapshot functionality to accomplish this. Specifically, the iSCSI Target Server SMI-S provider exposes this snapshot functionality to the SM-API storage management framework, which VMM then uses to create iSCSI virtual disk snapshots. As a brief aside, check out my previous blog post for examples of how the same iSCSI SMI-S snapshot functionality can be used by a storage administrator directly via the SM-API Storage cmdlets, outside of VMM.

    Let’s focus back on VMM though, especially on the snapshot-related VMM rapid provisioning workflow and what each of these steps means to the iSCSI Target Server:

    1. The administrator creates, customizes, and generalizes (syspreps) the desired VM OS image on a storage volume hosted on iSCSI Target Server storage.
      • iSCSI Target Server perspective: it simply exposes a VHDX-based virtual disk as a SCSI disk to the connecting initiator. All the creation, customization, and sysprep actions are simply I/Os on that SCSI Logical Unit (LU).
    2. The administrator mounts that SCSI LU, hosting the storage volume, on the VMM Library Server – let’s call it Disk-G, for Golden. The administrator also makes sure to mask the LU from any other initiators.
      • iSCSI Target Server perspective: Disk-G is persisted as a VHDX-format file on the hosting volume, but the initiator (Library Server) does not know or care about this server-side implementation detail.
    3. The administrator creates a VM template and associates the generalized, SAN copy-capable OS image VHD files with this template, which makes the template a SAN copy-capable VM template.
      • iSCSI Target Server perspective: this action is transparent to the iSCSI Target Server; it does not participate unless there are specific related I/Os to Disk-G.
    4. From this point on, VMM can rapidly provision each new VM by creating a snapshot of Disk-G (say Disk-S1, Disk-S2, and so on) and assigning it to the appropriate Hyper-V host that will host the new VM guest being instantiated.
      • iSCSI Target Server perspective: for each disk snapshot taken via SMI-S, iSCSI Target Server creates a diff VHDX file to store its content, so effectively:
        • Disk-G: parent VHDX file
        • Disk-S1: diff VHDX (Disk-G is its parent)
        • Disk-S2: diff VHDX (Disk-G is its parent)

    For a more detailed discussion of SAN-based VMM rapid provisioning concepts, see this TechNet Library article.

    The entire scenario works flawlessly in both Windows Server 2012 and Windows Server 2012 R2. However, in Windows Server 2012 R2 the storage administrator needs to take one additional step between Steps #3 and #4 in the preceding list – let’s call it “Step 3.5”. Let’s discuss what exactly changed in Windows Server 2012 R2 and what the additional step is.

    iSCSI Target Server SMI-S Snapshots in Windows Server 2012 R2

    On each successful SMI-S snapshot request, iSCSI Target Server creates a diff VHDX-based iSCSI virtual disk. In Windows Server 2012 R2, iSCSI Target Server realizes this through native Hyper-V APIs. In contrast, iSCSI Target Server had its own implementation for creating diff VHD files back in Windows Server 2012 – see my discussion of the redesigned persistence layer in Windows Server 2012 R2 in one of my earlier blog posts for more detail. The Hyper-V APIs enforce that the parent VHDX file must not be open in read/write mode while the new diff VHDX is being created. This ensures that the parent VHDX can no longer be written to once the diff VHDX is created. Thus, while creating the Disk-S1/S2 iSCSI Target Server SMI-S snapshots in our example, Disk-G cannot stay mounted for read/write by the Library Server. Disk-G must be unmounted and re-mounted as a read-only disk first; otherwise, creation of snapshots Disk-S1 and Disk-S2 will fail.

    Now you might be wondering why this wasn’t an issue with the Windows Server 2012-based iSCSI Target Server. iSCSI Target Server’s private implementation of diff VHD creation in Windows Server 2012 did not enforce the read-only requirement on the parent VHD file, but the VMM Library Server (initiator) always ensures that no more writes are performed on Disk-G once the sysprep process is complete. So the overall solution worked just fine in Windows Server 2012. In Windows Server 2012 R2, in addition to the same initiator behavior, iSCSI Target Server effectively adds an extra layer of safety on the target (server) side to ensure that writes are simply not possible at all on Disk-G. This is additional goodness.

    I have briefly alluded to the nature of the additional process step required for rapid provisioning; here’s the complete list of actions within that additional step:

    “Step 3.5”:

    • Save the volume mount point (and the drive letter, if applicable) and offline the iSCSI LU on the Library Server.
    • Unmap the LU from its current masking set (Remove-VirtualDiskFromMaskingSet). This ensures that the LU can no longer be accessed by any initiator under the previous read/write access permissions.
    • Re-add the same SCSI LU to the same masking set (Add-VirtualDiskToMaskingSet), albeit this time as read-only through the “-DeviceAccesses ReadOnly” parameter. This sets the disk access to read-only.
      • Note: only one Library Server should have the volume off that SCSI LU mounted. Even if the Library Server is configured as a highly available failover cluster, only one of the cluster nodes should have the disk mounted at a time.
    • Online the SCSI LU on the Library Server and restore its previous mount point (and drive letter, if applicable).

    Here is the good news: my colleague Juan Tian has written a sample Windows PowerShell script that takes a single VMM template name parameter and performs all of the “Step 3.5” actions in one sweep. The script should work without any changes if you run it with VMM administration credentials. Check it out at the bottom of this blog post, customize it for your deployment if necessary, and be sure to run it as “Step 3.5” in the VMM rapid provisioning workflow summarized above.

    Finally, let me wrap up this blog post with a quick version compatibility reference:

    VMM Version  | VMM Runs on | Manages            | Compatible?
    -------------|-------------|--------------------|------------
    VMM 2012 SP1 | WS2012      | iSCSI on WS2012    | Yes
    VMM 2012 SP1 | WS2012      | iSCSI on WS2012 R2 | Yes
    VMM 2012 R2  | WS2012 R2   | iSCSI on WS2012 R2 | Yes
    VMM 2012 R2  | WS2012      | iSCSI on WS2012 R2 | Yes
    VMM 2012 SP1 | WS2012 R2   | <Any>              | No

    Hope this blog post provided all the details you need to move to production with your new Windows Server 2012 R2-based iSCSI Target Server and VMM. Give it a try and let me know how it’s working for you!

     

    >>>>>>>>>SetLibraryServerLUToReadOnly.ps1 Windows PowerShell script>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

    # Description:
    # Sets a VMM Library Server's access to a SCSI Logical Unit (LU) to read-only.
    # The Library Server and the disk (SCSI LU) are identified from the VM template name parameter.
    # The script offlines the disk, removes it from its masking set, re-adds the disk as read-only,
    # and finally brings the newly-read-only disk back online on the Library Server.
    # Must be run with VMM administration credentials.
    #
    param([string] $VMTemplate = "")

    if (!$VMTemplate)
    {
        $VMTemplate = Read-Host "Enter the name of your Template: "
    }
    Write-Host "Get Template $VMTemplate"

    $libShare = (Get-SCLibraryShare -ID ((Get-SCVMTemplate -Name $VMTemplate | Get-SCVirtualHardDisk).LibraryShareId))
    $share = $libShare.Path
    if ($share.Count -lt 1)
    {
        Write-Host "Cannot find libraryshare!"
        exit 1
    }
    Write-Host "Get libraryshare $share"

    $path = (Get-SCVMTemplate -Name $VMTemplate | Get-SCVirtualHardDisk).Directory
    if ($path.Count -lt 1)
    {
        Write-Host "Cannot find SCVirtualHardDisk!"
        exit 1
    }
    Write-Host "Get virtualdisk directory $path"

    $path2 = $path.Replace($share, "")
    $key = "*" + $path2 + "*"
    Write-Host "Get Key $key"

    $lib = ($libShare.LibraryServer).FQDN
    if ($lib.Count -lt 1)
    {
        Write-Host "Cannot find libraryserver!"
        exit 1
    }
    Write-Host "Get libraryserver $lib"

    $partition = Invoke-Command -ComputerName $lib -ScriptBlock { Get-Partition } |
        Where-Object {$_.AccessPaths -like $key}
    if (!$partition)
    {
        Write-Host "Cannot find disk partition!"
        exit 1
    }

    # Resolve the disk backing the partition. Pass the disk number explicitly, because objects
    # piped into Invoke-Command are not forwarded to the remote pipeline automatically.
    $disk = Invoke-Command -ComputerName $lib -ScriptBlock { param($n) Get-Disk -Number $n } `
        -ArgumentList $partition.DiskNumber
    if (!$disk)
    {
        Write-Host "Cannot find disk!"
        exit 1
    }

    # Offline disk
    Write-Host "Offline disk ..."
    Invoke-Command -ComputerName $lib -ScriptBlock { param($n) Set-Disk -Number $n -IsOffline $true } `
        -ArgumentList $disk.Number
    Write-Host "Offline disk completed!"

    Write-Host "Looking for disk.uniqueid - $($disk.UniqueId)"
    $vdisk = Get-VirtualDisk | Where-Object {$_.UniqueId -match $disk.UniqueId}
    if (!$vdisk)
    {
        Write-Host "Cannot find virtual disk!"
        exit 1
    }

    $ms = $vdisk | Get-MaskingSet
    if (!$ms)
    {
        Write-Host "Cannot find maskingset!"
        exit 1
    }

    # Remove virtual disk from masking set
    Write-Host "Call Remove-VirtualDiskFromMaskingSet ..."
    Remove-VirtualDiskFromMaskingSet -MaskingSetUniqueId $ms.UniqueId -VirtualDiskNames $vdisk.Name
    Write-Host "Call Remove-VirtualDiskFromMaskingSet completed!"

    # Add virtual disk back into the masking set, this time as read-only
    Write-Host "Call Add-VirtualDiskToMaskingSet ..."
    Add-VirtualDiskToMaskingSet -MaskingSetUniqueId $ms.UniqueId -VirtualDiskNames $vdisk.Name -DeviceAccesses ReadOnly
    Write-Host "Call Add-VirtualDiskToMaskingSet completed!"

    # Online disk
    Write-Host "Online disk ..."
    Invoke-Command -ComputerName $lib -ScriptBlock { param($n) Set-Disk -Number $n -IsOffline $false } `
        -ArgumentList $disk.Number
    Write-Host "Online disk completed!"
