
VMware Setup for Server for NFS Cluster


In this tutorial, I’ll discuss how to set up VMware vSphere virtual machines with a Windows Server 2012 Server for NFS cluster as backend storage.

Background

VMware has long supported running virtual machines hosted by its ESX server over the NFS v3 protocol. In this configuration, virtual machines running on VMware ESX have their VMDKs (virtual disks) located on a remote file share that is exported over the NFS protocol (Figure 1).

Figure 1

In this tutorial, I will cover two topics: setting up a Windows Server 2012 Server for NFS cluster, and configuring VMware vSphere to use that cluster as storage. Configuring vSphere virtual machines with a Server for NFS cluster is best explained by way of a simple example. In this tutorial we'll consider the following infrastructure scenario:

  • Windows Server cluster nodes running Server for NFS: zzqclustertest1, zzqclustertest2
  • Windows Server storage server running the iSCSI Target Server: zzqclusterstor
  • Mount point for the iSCSI storage: E:\
  • Windows cluster role network name: nfsclusterap
  • Name of the Server for NFS share on the cluster: /share
  • vSphere vCenter IP address: 172.30.182.110

A volume is created on the iSCSI target server and mounted by the cluster nodes as shared cluster storage. Information about the iSCSI Target Server role and how to manage it can be found here: http://technet.microsoft.com/en-us/library/cc726015. Setting up vSphere vCenter is out of scope for this document. Please refer to the following article for best practices for running vSphere on NFS storage: http://www.vmware.com/resources/techresources/10096
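Connecting the cluster nodes to that iSCSI target can be sketched in PowerShell. The portal address comes from the scenario above; the discovery pattern and persistence flag are assumptions, so verify them against your own iSCSI Target Server configuration.

# Run on each cluster node (zzqclustertest1, zzqclustertest2).
# Register the storage server as a target portal, then connect persistently.
New-IscsiTargetPortal -TargetPortalAddress "zzqclusterstor"
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true
# The shared disk can then be brought online and mounted (E:\ in this scenario).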

For the purposes of configuration, we assume the vSphere server has already been set up with version 5.0. The Windows Server cluster nodes run Windows Server 2012 with the Server for NFS role installed. The storage server runs Windows Server 2012 with the iSCSI Target Server role installed. The two cluster nodes form a failover cluster whose shared storage is provided by the iSCSI target server.

Server for NFS Cluster Setup

Creating the cluster itself is beyond the scope of this tutorial. Please refer to the blog post Creating a Windows Server 2012 Failover Cluster.

Let’s assume we have already established the cluster (named nfsclusterap in our case). Now pick one of the cluster nodes (in our case zzqclustertest1), open Failover Cluster Manager, and connect to our cluster (Figure 2)


Figure 2

If the cluster is found, Failover Cluster Manager will show its details. We can use either the UI or PowerShell cmdlets to create a Server for NFS share for this cluster (a PowerShell sketch appears below). Note that the share should be created under the “Shares” folder of the cluster’s shared storage mount point (in our case, E:\Shares\). To allow vSphere to mount this share, make sure the following options are chosen:

  • Allow root access
  • Enable unmapped access
  • Set the share permission for the ESX host (the NFS client) to read-write

Details of how to create a share on a Windows Server 2012 Server for NFS cluster can be found in the blog post “Server for Network File System First Share End-to-End” at http://blogs.technet.com/b/filecab/archive/2012/10/08/server-for-network-file-system-first-share-end-to-end.aspx. The New Share Wizard must be launched by clicking the “Add File Share” button in the Roles view. (Figure 3)

Figure 3
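For reference, here is a minimal PowerShell sketch of creating the clustered share with the options listed above. The share name, path, and network name come from our scenario; the ESX host address (172.30.182.200) is a hypothetical placeholder for your ESX server, and the exact parameter set should be verified on your build.

# Create the clustered NFS share with root and unmapped access enabled.
New-NfsShare -Name "share" -Path "E:\Shares\share" -NetworkName "nfsclusterap" -EnableUnmappedAccess $true -AllowRootAccess $true
# Grant the ESX host (hypothetical address) read-write access to the share.
Grant-NfsSharePermission -Name "share" -ClientName "172.30.182.200" -ClientType host -Permission readwrite -AllowRootAccess $true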

We also need to add the NFS client to the outgoing firewall exceptions on the ESX server to allow NFS traffic.

VMware vSphere Configuration

First go to your vSphere vCenter (in our case: https://172.30.182.110/) and download the “vSphere Client” (Figure 4). 


Figure 4

Once you have downloaded and installed it, open the application and connect to the vCenter through its IP address or host name (in our case: vSphereHost). User/password authentication is required (Figure 5); use the ESX server credentials.


Figure 5

Now we are going to add the Server for NFS share set up in the previous step as a datastore for vSphere. Select the “Configuration” tab from the horizontal bar and click “Storage” on the vertical bar. (Figure 6)


Figure 6

Now let’s create the datastore. Click the “Add Storage…” button in the upper-right corner of the UI and select Network File System as the storage type. (Figure 7)


Figure 7

After that, we enter the name of our Server for NFS cluster and the share we created before (in our case, the cluster name is “nfsclusterap” and the share is “/share”), then pick a name for the datastore and create it. (Figure 8)


Figure 8
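The same datastore can also be added from the ESXi command line. A sketch, assuming the scenario names above; the datastore label “NfsDatastore” is a hypothetical placeholder:

# Run in the ESXi shell (or via vCLI) on the host that should mount the share.
esxcli storage nfs add --host=nfsclusterap --share=/share --volume-name=NfsDatastore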

Now we can create a virtual machine using the vSphere “Create New Virtual Machine” wizard. More details can be found in the vSphere manual in the vSphere Documentation Center:

http://pubs.vmware.com/vsphere-50/index.jsp?topic=%2Fcom.vmware.vsphere.vm_admin.doc_50%2FGUID-55238059-912E-411F-A0E9-A7A536972A91.html

 

Feedback

 

Please send any feedback you might have to nfsfeed@microsoft.com

 


DFS Replication Improvements in Windows Server 2012


Hi folks, Ned Pyle here. As promised when I left AskDS and MS Support for greener pastures, I’m still in the blogging game – I told you I’d be back! Let’s start things off talking about improvements in Windows Server 2012 and DFS Replication (DFSR).

Windows Server 2012 DFSR focuses on reliability and supportability changes based on direct field and MS Support feedback. This release doesn’t contain many new features but is much easier to troubleshoot and is more resilient to environmental issues. In the end, that makes your life easier. And every IT department could use some easier…

If this is your daily routine, we can help

I can only assume you already know DFSR from all of my old write-ups, so let’s dive into the details.

Unexpected shutdown worker progress

DFSR uses a per-volume ESE (aka “Jet”) database to track all file changes in replicated folders on their individual volumes. DFSR contains code to attempt graceful and dirty recovery of the database after an unexpected shutdown. Mallikarjun Chadalapaka has a great write-up on dirty shutdown recovery here.

Previous OS behavior

On detecting a dirty shutdown, DFSR begins a recovery process. This starts with logging event 2212:

Event ID=2212
Severity=Warning
The DFS Replication service has detected an unexpected shutdown on volume %2. This can occur if the service terminated abnormally (due to a power loss, for example) or an error occurred on the volume. The service has automatically initiated a recovery process. The service will rebuild the database if it determines it cannot reliably recover. No user action is required.

Additional Information:
Volume: %2
GUID: %1

If the recovery is successful, DFSR logs event 2214:

Event ID=2214
Severity=Informational
The DFS Replication service successfully recovered from an unexpected shutdown on volume %2. This can occur if the service terminated abnormally (due to a power loss, for example) or an error occurred on the volume. No user action is required.

Additional Information:
Volume: %2
GUID: %1

If the recovery is unsuccessful, DFSR logs event 2216:

Event ID=2216
Severity=Error
The DFS Replication service failed to recover from an unexpected shutdown on volume %2. This can occur if the service terminated abnormally (due to a power loss, for example) or an error occurred on the volume. Recovery will be attempted periodically in %3 seconds. No user action is required.

Additional Information:
Error: %4 (%5)
Volume: %2
Guid: %1

DFSR didn’t log how a recovery was progressing, though. This made troubleshooting tricky, and we found that sometimes customers would think the recovery had hung or halted and would start trying to fix things (perhaps making things worse).

Windows Server 2012 behavior

Two new event log messages now appear that describe where the internal repair process stands. You now know that DFSR has moved past the detection phase and into the consistency checking and rebuilding phase.

Event ID=2218
Severity=Informational
Message=
The DFS Replication service is in the second step of replication database consistency checks after an unexpected shutdown. The database will be rebuilt if it cannot be recovered. No user action is required.

Additional Information:
Volume: %2
GUID: %1

 

Event ID=2220
Severity=Informational
Message=
The DFS Replication service is in the third step of replication database consistency checks after an unexpected shutdown. Database recovery is currently in progress. No user action is required.

Additional Information:
Volume: %2
GUID: %1

Just be patient – it will complete. If in doubt, contact Microsoft Support – don’t try to get out and push.

Performance registry defaults

DFSR contains registry overrides to control behaviors like the number of files to replicate simultaneously, stage simultaneously, etc.

Previous OS behavior

The default settings in Windows Server 2008 R2 were a bit too conservative. After release, we tested tweaked registry settings that resulted in roughly double the performance of the defaults.

Windows Server 2012 behavior

These more aggressive settings are now the defaults in Windows Server 2012 (unless you have overridden them in the registry):

  • AsyncIoMaxBufferSizeBytes: new default value 8388608
  • RpcFileBufferSize: new default value 524288
  • StagingThreadCount: new default value 8
  • TotalCreditsMaxCount: new default value 4096
  • UpdateWorkerThreadCount: new default value 32

The allowed ranges are unchanged except for UpdateWorkerThreadCount (see below).
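To check whether a server is running with overrides, here is a minimal PowerShell sketch. The registry path is an assumption based on where DFSR keeps its tuning parameters; verify it on your own systems.

# Assumed location of DFSR tuning overrides; missing values mean the built-in defaults apply.
$key = 'HKLM:\SYSTEM\CurrentControlSet\Services\DFSR\Parameters\Settings'
if (Test-Path $key) {
    Get-ItemProperty -Path $key |
        Select-Object AsyncIoMaxBufferSizeBytes, RpcFileBufferSize, StagingThreadCount, TotalCreditsMaxCount, UpdateWorkerThreadCount
} else {
    'No override key found; the Windows Server 2012 defaults listed above apply.'
}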

UpdateWorkerThreadCount max

UpdateWorkerThreadCount controls the number of simultaneously inbound-replicating files to a DFSR server.

Previous OS behavior

The maximum configurable value in Windows Server 2008 R2 is 64. If you set UpdateWorkerThreadCount to that maximum of 64, it is possible to see intermittent DFSR service deadlocks. This manifests as a hung service, which for customers is nearly impossible to troubleshoot (you need a debugger and private symbols). Because the issue may not happen for days or weeks, there is no easy way to correlate cause and effect.

Windows Server 2012 behavior

The maximum value is now 63. Voila!

Read Only Domain Controller support for DFS Management

Administrators use the DFS Management snap-in (Dfsmgmt.msc) for all graphical configuration of DFSR.

Previous OS behavior

DFS Management was introduced in Windows Server 2003 Service Pack 1, long before read-only domain controllers (RODCs) existed. It expected all domain controllers to be writable when creating a replication group or any other AD objects. When DFS Management tries to write to an RODC, it fails with an access denied error. This issue has existed since Windows Server 2008, but since RODC usage was lower and RODCs tend to exist mainly in branch offices, we never saw it until much later. Now that RODCs are everywhere, well…

Windows Server 2012 behavior

DFS Management now requests only writable domain controllers when making DC queries.

Read-only disconnected topology detection

DFS Management contains a topology checking routine to alert administrators when they have created an incomplete (aka "disconnected") DFS replication topology. A disconnected topology prevents eventual replication of data, leading to divergence, user confusion, and potential data loss.

Previous OS behavior

A bridged topology of A <-> B <-> C is not flagged as disconnected when B is a read-only replicated folder. Because there is no outbound replication on a read-only member, any files created on A or C will not replicate further than B, so users on A and C will potentially see different versions of files, or no files at all.


Windows Server 2012 behavior

The topology checker code now understands the bridged read-only replicated folder scenario and appropriately warns you when detected.

4412 conflict event data

DFSR uses a series of conflict resolution algorithms to detect file collisions and appropriately handle a winning and losing file. DFSR notes these in a per-collision 4412 informational event log entry.

Previous OS behavior

The 4412 event did not contain quite enough information to easily troubleshoot unexpected collisions. For example:

Message=
The DFS Replication service detected that a file was changed on multiple servers. A conflict resolution algorithm was used to determine the winning file. The losing file was moved to the Conflict and Deleted folder.

Additional Information:
Original File Path: D:\Windows\SYSVOL\domain\Policies\{E75E8CC5-27B3-483F-AA79-FFF726236A0A}\Adm
New Name in Conflict Folder: Adm-{EE271589-88F7-4E8C-A057-013CF75B352B}-v294528
Replicated Folder Root: D:\Windows\SYSVOL\domain
File ID: {3351DB9B-9DAF-4273-90C1-FC347266BBD2}-v29180999
Replicated Folder Name: SYSVOL Share
Replicated Folder ID: 29578A90-233A-48B7-B8C3-1BB0A05873EC
Replication Group Name: Domain System Volume
Replication Group ID: 70AC3FC4-60FC-4D15-964D-AE0F96098E60
Member ID: C6D34675-591E-4FC9-B88E-06AFC659CAED

Windows Server 2012 behavior

The 4412 event message now contains an additional field of Partner Member ID that lists the winning server's identity.

Message=
The DFS Replication service detected that a file was changed on multiple servers. A conflict resolution algorithm was used to determine the winning file. The losing file was moved to the Conflict and Deleted folder.

Additional Information:
Original File Path: D:\Windows\SYSVOL\domain\Policies\{E75E8CC5-27B3-483F-AA79-FFF726236A0A}\Adm
New Name in Conflict Folder: Adm-{EE271589-88F7-4E8C-A057-013CF75B352B}-v294528
Replicated Folder Root: D:\Windows\SYSVOL\domain
File ID: {3351DB9B-9DAF-4273-90C1-FC347266BBD2}-v29180999
Replicated Folder Name: SYSVOL Share
Replicated Folder ID: 29578A90-233A-48B7-B8C3-1BB0A05873EC
Replication Group Name: Domain System Volume
Replication Group ID: 70AC3FC4-60FC-4D15-964D-AE0F96098E60
Member ID: C6D34675-591E-4FC9-B88E-06AFC659CAED
Partner Member ID: 2716E4E2-ED01-4285-9137-FACB4EE84C4A

You can use DFSRDIAG GUID2NAME to translate that partner GUID into a human-friendly name. For example:

Aha! FSF-02 won.
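A sketch of that lookup, using the Partner Member ID from the event above and assuming the “Domain System Volume” replication group:

Dfsrdiag.exe GUID2Name /Guid:2716E4E2-ED01-4285-9137-FACB4EE84C4A /RGName:"Domain System Volume"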

Editions restrictions removed

There is no Windows Server 2012 Enterprise Edition; instead, you can purchase Windows Server 2012 Standard or Windows Server 2012 Datacenter. Datacenter is no longer an OEM-only SKU, and exists to provide unlimited virtualization licenses.

Previous OS behavior

DFSR cross-file Remote Differential Compression (RDC) support was tied, through internal checks, to the server edition being Enterprise or Datacenter, and so was DFSR cluster support. Implicitly, DFSR cluster support required Enterprise or higher anyway, because the Failover Clustering feature only existed on those editions.

Windows Server 2012 behavior

All edition checks are removed and Windows Server 2012 has full DFSR capabilities even in Windows Server 2012 Standard.

Initial sync to read-only replicated folders with preexisting data

Read-only (RO) replicated folders are always non-authoritative and do not allow local changes by use of an IO-blocking filter driver named dfsrro.sys. You are encouraged to pre-seed data before initial sync, meaning that data can already exist when DFSR is configured on two or more servers.

Previous OS behavior

Windows Server 2008 R2 SP1 introduced a regression (which we recently fixed) where initial sync from a read-write (RW) member to an RO member did not overwrite file differences on the RO. This leads to data inconsistencies in the replication group, as these differing files will never be right on RO servers unless they are later modified again on the RW. That rather defeats the purpose of pre-seeding.

Windows Server 2012 behavior

This is fixed. :)

DC port 5722

DFSR uses TCP/IP and RPC to replicate files, and we finally fixed an old scenario where domain controllers differed in port usage from member servers.

Previous OS behavior

In Windows Server 2008 and Windows Server 2008 R2, a domain controller replicating SYSVOL and/or custom replicated folders with DFSR used TCP port 5722. This was due to a bug I discussed back on AskDS.

Windows Server 2012 behavior

This is also fixed. Now DCs will operate consistently like member servers, listening on a dynamic port in the 49152 – 65535 range unless you choose to hard code a port. If you have gotten used to 5722 and reaaaaally like using hard-coded ports, you can return to the old behavior with command:

Dfsrdiag.exe staticrpc /port:5722

I doubt the person who takes over your job someday will thank you for it though…

Fixed missing DFSR migration event 6806

When using DFSRMIG.EXE to migrate your SYSVOL from using FRS to DFSR, event log entries tell you how things are proceeding and if there are any problems you need to investigate before moving to the next phase.

Previous OS behavior

In Windows Server 2008 R2, a timing issue could give you an expected warning 6804 with the rather scary message:

The DFS Replication service has detected that no connections are configured for replication group Domain System Volume. No data is being replicated for this replication group.

Once AD replication and the migration caught up, we should have logged a 6806 event saying everything was fine. But we forgot to. Errp.

Windows Server 2012 behavior

Now we log that missing 6806 event letting you know that all is well and migration is working.

Replicated folder removal and replication

Replicated folders are the base of replication and the top level of a content set in DFSR database terms.

Previous OS behavior

In Windows Server 2008 R2, removing a replicated folder stopped replication of all other RFs until the removal completed.

Windows Server 2012 behavior

Now you can remove a replicated folder (thereby causing DFSR to update its DFSR database and stop replicating that content set) and not see other replicated folders pause replication. This keeps a hub server working efficiently when you decide to decommission a branch node. Faster also implicitly means increased reliability, as we are not spending large amounts of time with replication halted.

Staging messaging

Windows Server 2008 R2 SP1 introduced a little-known hotfix that updated the Dfsmgmt.msc new replication group and new replicated folder wizards. This provides further guidance around configuring the staging folder quota to prevent performance bottlenecks.


This capability is now native to Windows Server 2012.

Added support for Dedup, FCI, and DAC file modifications

Data Deduplication support

We modified the DFSR allowed reparse point replication rules to support replicating the new IO_REPARSE_TAG_DEDUP tag. This type of reparse point tag is part of the new file deduplication system. This isn’t truly reparse point replication; the file is “rehydrated” and replicated as a normal file, then put back into its deduplicated state on the downstream server. Slick.

File Classification Infrastructure support

We modified File Classification Infrastructure (FCI) to prevent re-writing unchanged data to the alternate data stream on files during classification passes. This previously caused replication storms in Windows Server 2008 R2.

Dynamic Access Control Support

Changes made to the APIs used to access the new NTFS data structures for auditing and conditional ACE security required updates to DFSR in Windows Server 2012. Because Windows Server 2008 R2 and older operating systems do not implement these APIs (and therefore cannot use or display these ACLs), they did not require changes. Therefore, no back port is required to configure replication between a Windows Server 2008 R2 and a Windows Server 2012 replicated folder.

But!

Microsoft strongly discourages mixed Windows Server 2012 and legacy operating system DFSR.

There are significant NTFS security data differences between Windows Server 2012 and earlier operating systems, often to facilitate Dynamic Access Control features. Moreover, any claims-based access configuration will not work consistently in a design that allows users to connect to Windows Server 2008 R2 and Windows Server 2012 versions of a replicated file; one server might grant more or less access than the other.

For example, if someone modifies the security of a file on a Win2008 R2 server, DFSR packages the security up with the file (this is called “marshalling”) and sends it along as-is. When a user then attempts to access the file on the Win2012 server, the claims-based security elements no longer exist, and the user is denied access. More troubling, if you were letting users access the data from multiple DFSN-provided shares, they would be calling you with the infamous “it sometimes works and sometimes fails” symptom that drives IT pros batty.

However!

Central Access Policies modify individual files and folders to contain a special SID in the tail of the SACL structure when the CAP rules are first added. This means that applying a CAP for the first time triggers replication of all folders and files replicated under the auspices of the CAP structure, just as any other security change to the classic DACL would.

Subsequent changes to the rules of an already-added CAP do not alter the files, however – this is the beauty of Central Access Policy. This means that once replication completes, you can change the security on files without triggering further replication. This is a seriously cool feature if you are a DFSR administrator, and it means once you deploy CAP, further security changes to an existing policy are completely non-intrusive to replication!

Ideally, configure CAP and File Classification Infrastructure on the file structure before configuring DFSR; that way you only pay the replication price once during DFSR initial sync. And to reiterate, use Windows Server 2012 on all nodes before deploying DAC. If you need help migrating existing DFSR environments, I recommend this series.

ReFS

DFSR does not support ReFS volumes, as this new file system removes many critical data types used or supported by DFSR, such as streams, sparse files, compressed files, 8.3 names, extended attributes, etc.

DFSR does not allow you to replicate ReFS volumes: the service checks that the volume is NTFS and, if it is not, fails gracefully.

Dfsmgmt.msc prevents an administrator from accidentally configuring a ReFS volume. Even if you pre-create the folder and use DFSRADMIN to bypass the check, DFSR prevents replication with event 6404 (“The local path is not the fully qualified path name of an existing, accessible local folder.”). The debug log will show error 9225 (“volume was not found”).

No ReFS allowed!
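If you want to verify a volume’s file system before configuring replication, here is a quick PowerShell sketch, assuming drive D as the candidate volume:

# Confirm the candidate volume is NTFS; DFSR will refuse anything else, including ReFS.
Get-Volume -DriveLetter D | Format-Table DriveLetter, FileSystem, FileSystemLabel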

CSV

Just like Windows Server 2008 R2, DFSR in Windows Server 2012 does not support Cluster Shared Volumes (CSV).

Autorecovery Disabled

Just like Windows Server 2008 R2, DFSR in Windows Server 2012 includes the database autorecovery change:

  • KB 2663685 - Changes that are not replicated to a downstream server are lost on the upstream server after an automatic recovery process occurs in a DFS Replication environment in Windows Server 2008 R2 - http://support.microsoft.com/kb/2663685

Complex nested folder creation-deletion-replication fix

Just like Windows Server 2008 R2, DFSR in Windows Server 2012 includes the latest reliability changes for handling complex nested file and folder creation and deletion on partner nodes:

  • KB 2450944 - Some folders or files are unexpectedly deleted on the upstream server after you restart the DFS Replication service in Windows Server 2003 R2, in Windows Server 2008 or in Windows Server 2008 R2 - http://support.microsoft.com/kb/2450944

File creation conflict algorithm

Windows Server 2012 changes the conflict resolution algorithm used for disparate file creations from “first creator wins” to “last creator wins,” in order to be more consistent. For more information about this topic, see this article.

Keep alive support added for huge files

Windows Server 2012 now correctly allows very large files to complete computation of their RDC signatures before the RPC server connection times out. In prior OSes such a file would never replicate because of this timing constraint; it mainly happened with files that were hundreds of GB.

But!

64GB files are still the supported maximum, so this is us being nice and helping you in a scenario that is, technically, still unsupported.

As a final note: I didn’t include all the fixes released as updates to Windows Server 2008 R2 that are also part of Windows Server 2012, just the more interesting ones. So as a rule of thumb, if you got a hotfix for Win2008 R2 before Win2012 RTM’ed, the latter has the update built-in.

And that’s it. Nice, eh?

- Ned “it’s all good” Pyle

Server for NFS Diagnostics


In this post, we will discuss the instrumentation available in Server for NFS in Windows Server 2012 and how it can be used to detect and diagnose any deployment and operational issues.

Event Viewer

There are quite a few changes in the Server for NFS event model for Windows Server 2012. In previous releases of Windows Server, Server for NFS logged events into the System channel. In Windows Server 2012, Server for NFS logs events into its own channels. The event IDs are unchanged; however, the event channels and provider GUIDs are different. The following figure displays the layout of the event channels where Server for NFS logs events.



Activity Logging

Server for NFS logs events for some of the NFS operations into the Operational channel, including:

  • Read and Write
  • Lock and Unlock
  • Mount and Unmount
  • Create and Delete

The activity logging can be enabled using the PowerShell cmdlet Set-NfsServerConfiguration.

For example, the following command enables the activity logging for mount, read, and write operations.


PS C:\> Set-NfsServerConfiguration -LogActivity mount,read,write

Activity logging can also be enabled through the Services for Network File System management snap-in. Follow these steps to enable activity logging in Server for NFS:
  1. Open Server Manager and then click Services for Network File System (NFS) from the Tools menu.
  2. In Services for Network File System, right-click Server for NFS and select Properties.
  3. Switch to the Activity Logging tab and select the activities you want logged.
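To read these channels from PowerShell, here is a minimal sketch; the wildcard avoids hard-coding channel names, which you can confirm in Event Viewer.

# Discover the Server for NFS event channels, then pull the latest operational events.
Get-WinEvent -ListLog "*NFS*" | Format-Table LogName, RecordCount
Get-WinEvent -ListLog "*NFS*" |
    Where-Object { $_.LogName -like "*Operational*" } |
    ForEach-Object { Get-WinEvent -LogName $_.LogName -MaxEvents 10 -ErrorAction SilentlyContinue }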

 Identity Mapping Events

Server for NFS logs identity mapping related events into the IdentityMapping channel. The following are some of the critical events to watch for when local file-based identity mapping is configured as the identity mapping source.

Event 4025 (Error): “A duplicate ID value <UID/GID number> was found when loading <FileName>. The file will not be used as a mapping source.”
Resolution: Server for NFS performs validations against the passwd and group files. This event is logged if multiple user accounts in the passwd file have the same user identifier (UID), or multiple group accounts in the group file have the same group identifier (GID). To resolve this issue, edit the passwd/group file to change the UID/GID on the conflicting account. Use the Get-NfsMappedIdentity PowerShell cmdlet to retrieve the list of users/groups having the UID/GID mentioned in the event.

Event 4026 (Error): “A duplicate name <AccountName> was found when loading <FileName>. The file will not be used as a mapping source.”
Resolution: Edit the file specified in the event and remove the duplicate account name.

Event 4027 (Error): “A syntax error was found on line <LineNumber> when loading <FileName>. The file will not be used as a mapping source.”
Resolution: The passwd/group file specified in the event does not follow the syntax required by Server for NFS. Edit the file and check for errors at the line number mentioned.

Event 4029 (Warning): “Mapping source update detected. File <FileName> not found.”
Resolution: Server for NFS looks for the passwd and group files at %windir%\system32\drivers\etc. Make sure the files are present at this location and that the NFS service has permission to read them.

Event 4030 (Error): “<FileName> has no data. The file will not be used as a mapping source.”
Resolution: The passwd/group file is empty. Make sure the right files are stored at %windir%\system32\drivers\etc, or remove the files from this location if you did not intend to use mapping files as the identity mapping source.

Event 4032 (Error): “<FileName>. Memory allocation failed when processing the file. It will not be used as a mapping source.”
Resolution: The system is overloaded and there is not enough memory available to process the request. Close applications that are not required, to free memory.

Event 4033 (Error): “<FileName>. Failed to process the file. The file will not be used as a mapping source.”
Resolution: An unexpected error was encountered while opening the file specified in the event. Check the file for correct syntax.

 

The following are some of the critical events when the identity mapping store is Active Directory, Active Directory Lightweight Directory Services, or another RFC 2307-compliant LDAP store.

Event 4012 (Error): “Active Directory Domain Services(R) contains multiple users which match attribute <AttributeName>. Only one Windows(R) user should be associated with each UNIX UID. With multiple Windows users associated with one UNIX UID, Server for NFS cannot determine which Windows user is requesting access to files. No Windows users associated with the same UNIX UID will be able to access files through Server for NFS. Try removing the duplicate UNIX UID entries.”
Resolution: Event 4012 indicates that the configured identity mapping store contains multiple user accounts that have an identical value for the uidNumber attribute (the value is given in the event message text). Run the following PowerShell command to find the user accounts with an identical uidNumber:

Get-NfsMappedIdentity -AccountType user -Uid <UIdNumber>

Then correct the uidNumber attribute of the user accounts using the following PowerShell command:

Set-NfsMappedIdentity -UserName <sAMAccountName> -UId <UidNumber>

Event 4013 (Error): “Active Directory Domain Services(R) contains multiple groups which match attribute <AttributeName>. Only one Windows(R) group should be associated with each UNIX GID. With multiple Windows groups associated with one UNIX GID, Server for NFS cannot determine which Windows group to use to grant access to files. Try removing the duplicate UNIX GID entries.”
Resolution: Event 4013 indicates that the configured identity mapping store contains multiple group accounts that have an identical value for the gidNumber attribute (the value is given in the event message text). Run the following PowerShell command to find the group accounts with an identical gidNumber:

Get-NfsMappedIdentity -AccountType group -Gid <GIdNumber>

Then correct the gidNumber attribute of the group account using the following PowerShell command:

Set-NfsMappedIdentity -GroupName <sAMAccountName> -GId <GidNumber>

Event 4014 (Error): “Active Directory Domain Services(R) contains multiple users which match attribute <AttributeName>. Only one Windows(R) user should be associated with each sAMAccountName. With multiple Windows users associated with one sAMAccountName, Server for NFS cannot determine which Windows user is requesting access to files. No Windows users associated with the same sAMAccountName will be able to access files through Server for NFS. Try removing the duplicate sAMAccountName entries.”
Resolution: Event 4014 indicates that the configured identity mapping store contains multiple users that have an identical value for the sAMAccountName attribute (the value is given in the event message text). Try removing the duplicate user accounts having an identical sAMAccountName.

Event 4015 (Error): “Active Directory Domain Services(R) contains multiple groups which match attribute <AttributeName>. Only one Windows(R) group should be associated with each sAMAccountName. With multiple Windows groups associated with one sAMAccountName, Server for NFS cannot determine which Windows group to use to grant access to files. Try removing the duplicate sAMAccountName entries.”
Resolution: Event 4015 indicates that the configured identity mapping store contains multiple groups that have an identical value for the sAMAccountName attribute (the value is given in the event message text). Try removing the duplicate group accounts having an identical sAMAccountName.

Event 4016 (Error): “Server for NFS could not connect to the Lightweight Directory Access Protocol (LDAP) server for domain <DomainName>. Without a connection to the LDAP server, Server for NFS cannot query for Windows-to-UNIX user account mappings and cannot grant file access to any user. Verify that Server for NFS is configured to use the appropriate LDAP server using the Nfsadmin command-line tool.”
Resolution: Event 4016 indicates that Server for NFS is not configured to use Active Directory Domain Services (AD DS), another LDAP store, or User Name Mapping as a Windows-UNIX identity mapping source. Use the Set-NfsMappingStore PowerShell cmdlet to set the identity mapping store for Server for NFS.

Event 4017 (Error): “Server for NFS could not find any Lightweight Directory Access Protocol (LDAP) accounts which match attribute <AttributeName>. Without attribute <AttributeName>, Server for NFS does not know the corresponding Windows user account for the UNIX user and cannot grant file access to the UNIX user. Verify that the LDAP server is configured with the appropriate attributes.”
Resolution: Event 4017 indicates that Server for NFS could not find any LDAP accounts that match the attribute specified in the event message text. Add the necessary account information to the LDAP store by using the New-NfsMappedIdentity or Set-NfsMappedIdentity cmdlet, then use the Resolve-NfsMappedIdentity cmdlet to verify that Server for NFS can resolve the user account using the attribute specified in the event text.

 

The following are some of the critical events when the identity mapping store is a User Name Mapping (UNM) server.

Event 1005 (Error): “Server for NFS could not obtain mapping information from User Name Mapping. Server for NFS will make another attempt after <Duration> minutes. Without any mapping information, Server for NFS will not be able to grant file access to users. Verify the User Name Mapping service is started on the User Name Mapping server, and User Name Mapping ports are open on firewalls.”
Resolution: Event 1005 indicates that Server for NFS cannot obtain mapping information from the User Name Mapping (UNM) server. Incorrect settings in the User Name Mapping source could cause this. Use the Set-NfsMappingStore PowerShell cmdlet to configure the User Name Mapping server; the Get-NfsMappingStore cmdlet retrieves the current configuration. Use the Resolve-NfsMappedIdentity cmdlet to verify that Server for NFS can obtain mapping information from the UNM server.

Event 1006 (Error): “Server for NFS is not configured for either Active Directory Lookup or User Name Mapping. Without either Active Directory Lookup or User Name Mapping configured for the server, or Unmapped UNIX User Access configured on all shares, Server for NFS cannot grant file access to users. Configure Server for NFS for either Active Directory Lookup or User Name Mapping using the Nfsadmin command-line tool, or Unmapped UNIX User Access using the Nfsshare command-line tool.”
Resolution: Event 1006 indicates that Server for NFS is not configured for either Active Directory Lookup or User Name Mapping. If you have configured shares on Server for NFS to use “Unmapped UNIX User Access” mode, you may ignore this event. Otherwise, configure Server for NFS to use an identity mapping source using the Set-NfsMappingStore PowerShell cmdlet. To verify that the mapping store is configured correctly, use the Get-NfsMappingStore cmdlet.

Event 1056 (Error): “Server for NFS could not obtain updated mapping information from User Name Mapping. Server for NFS will continue to use the mapping information it has and make another attempt after <Duration> minutes. If this problem persists, Server for NFS mapping information may become significantly out-of-date and may not be able to grant file access to users. Verify that the User Name Mapping service is started either locally or on the remote server, and that User Name Mapping ports are open on firewalls.”
Resolution: As with event 1005, use Set-NfsMappingStore to configure the User Name Mapping server, Get-NfsMappingStore to retrieve the current mapping store configuration, and Resolve-NfsMappedIdentity to verify that Server for NFS can obtain mapping information from the UNM server.
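Whichever mapping store you use, the same verification loop applies. A minimal sketch, with “root” as a placeholder account name:

# Show the configured mapping store, then confirm the server can resolve a mapping.
Get-NfsMappingStore
Resolve-NfsMappedIdentity -AccountName "root" -AccountType User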

 

Admin Channel Events

Server for NFS logs critical events that need an administrator's intervention into the Admin channel. The following are some of the critical events and recommended resolution steps.

Event 1059 (Error): “Server for NFS could not register with RPC Port Mapper on all requested port/protocol combinations. Server for NFS will attempt to continue but some NFS clients may not function properly. Network File System (NFS) clients discover NFS servers by querying the port mapper for a remote server (also known as Portmap and Rpcbind). NFS clients may not be able to discover and communicate with Server for NFS on this computer.”
Resolution: This event indicates that other programs might be using some of the TCP/IP ports required by Server for NFS. First, determine whether Server for NFS has registered all protocols: at an elevated command prompt on the affected server, type rpcinfo. Server for NFS registers port 2049 for udp, tcp, udp6, and tcp6. Then make TCP/IP port 2049 available and restart Server for NFS, using the following procedure:
 1. At an elevated command prompt, type “netstat -a -b -o” to display all connections with their associated executables and processes.
 2. Resolve port allocations conflicting with the NFS ports identified above by stopping the conflicting services or programs.
 3. Type “nfsadmin server stop”.
 4. Type “nfsadmin server start”.
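On Windows Server 2012 you can also identify the owner of the NFS port directly from PowerShell. A sketch, assuming the default NFS port 2049:

# Identify the process listening on TCP port 2049 before stopping anything.
Get-NetTCPConnection -LocalPort 2049 -ErrorAction SilentlyContinue |
    Select-Object -ExpandProperty OwningProcess -Unique |
    ForEach-Object { Get-Process -Id $_ } |
    Format-Table Id, ProcessName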

 

Event 1060 (Error): “Server for NFS could not register the Network File System (NFS) protocol on the specified port (%5). Status: %6. Server for NFS will attempt to continue. At least one successful NFS port registration is required to start Server for NFS, but some NFS clients may not function properly without this specific port registration. Verify that no other programs have registered with RPC Port Mapper with the following parameters. Program Name: %1, Program Number: %2, Version: %3, Protocol: %4, Port: %5”
Resolution: As with event 1059, resolve the port conflict and restart Server for NFS.

Event 1064 (Warning): “Server for NFS cannot initialize the volume with drive letter %1 for sharing. Network File System (NFS) shared resources on the volume will not be available to NFS clients. Windows(R) may be low on system resources. Try increasing available system resources by closing programs, then restart Server for NFS manually.”
Resolution: Event 1064 indicates that Server for NFS cannot provision the volume for sharing; shared resources on the volume will not be available to NFS clients. The likely cause is that the computer is short of resources. To resolve this issue, increase available system resources using the following procedure:
 1. Close all programs and stop unnecessary services on the affected server.
 2. At an elevated prompt, type “nfsadmin server stop”.
 3. Type “nfsadmin server start”.
To verify Server for NFS is sharing files, run Get-NfsShare on the affected server and verify that the list of shared resources is correct.

Event 1069 (Error): “Server for NFS could not establish a connection with the configured NIS server.”
Resolution: Event 1069 indicates that Server for NFS is unable to access the Network Information Service (NIS) store where the netgroup configuration is stored. The most likely causes are:
 • The NFS server is not configured appropriately to access NIS-based netgroups.
 • There is a network connectivity issue between Server for NFS and the NIS server.
If Server for NFS is unable to access the netgroup store, determine whether the location of the NIS netgroup source is accurate using the following procedure:
 1. At a PowerShell prompt on the affected server, type Get-NfsNetgroupStore.
 2. Verify that NISDomain and NISServer are configured correctly.
 3. Verify that network connectivity exists between Server for NFS and the NIS server where the netgroups are configured: use the rpcinfo.exe tool to check that the NIS server is accessible over the network and that the NIS service is registered on it, by typing rpcinfo <computername>, where <computername> is the name of the NIS server. The NIS service should appear in the output of this command as RPC program number 100004, protocol version 2.
Then verify that Server for NFS is correctly configured to access the NIS server:
 1. In a PowerShell window, run the Get-NfsServerConfiguration cmdlet.
 2. Verify that the protocol for NIS is UDP, TCP, or both, and is compatible with the protocol allowed at the NIS source computer as determined from the output of rpcinfo.exe <computername>.
To verify that the issue is resolved, use the Get-NfsNetgroup cmdlet; you should be able to retrieve the netgroups from the netgroup store.

Event 1071 (Warning): “Server for NFS was unable to obtain security information for the configured UnmappedUnixUserUsername user account %1. Check that the user account %1 is valid and meets all configured security policies. There may be additional information in the Windows Security event log. Server for NFS will attempt to revert to the default anonymous account. MSV Status: %2, SubStatus: %3 S4U Status: %4, SubStatus: %5”
Resolution: Event 1071 indicates that Server for NFS was unable to obtain a logon token for the account used to process anonymous logons or UNIX UIDs that have no explicit mapping. The event message details the account that led to the problem report. Ensure that the account is valid and can be used to perform a successful logon.

Event 1072 (Warning): “Server for NFS was unable to obtain security information for the GSS user account %1. Check that the user account %1 is valid and meets all configured security policies. There may be additional information in the Windows Security event log. MSV Status: %2, SubStatus: %3 S4U Status: %4, SubStatus: %5”
Resolution: Event 1072 indicates Server for NFS was unable to obtain a logon token for the account used to access the NFS server when using an RPCSEC_GSS based identity. The event message details the account that led to the problem report. Ensure that the account is valid and can be used to perform a successful logon.

Event 1073 (Warning): “Server for NFS was unable to obtain or refresh security information for the user account %1. Check that the user account %1 is valid and meets all configured security policies. There may be additional information in the Windows Security event log. MSV Status: %2, SubStatus: %3 S4U Status: %4, SubStatus: %5”
Resolution: Event 1073 indicates Server for NFS was unable to refresh an access token. The event message details the account that led to the problem report. Ensure that the account is valid and can be used to perform a successful logon.

 

Event 4021 (Error): “The Server for NFS was unable to begin monitoring of NFS related cluster events (%1). The Server for NFS will continue in a non-clustered mode.”

Event 4022 (Error): “The Server for NFS thread monitoring NFS related cluster events ended unexpectedly (%1). The Server for NFS will continue in a non-clustered mode.”
Resolution: These events indicate that either the Cluster Service is not running or the computer is low on resources. To determine whether the Cluster Service is running: at a command prompt on the affected server, type services.msc and check whether the Cluster Service is running.

Event 4023 (Warning): “Server for NFS encountered an error condition when checking for the presence of Failover Clustering (%1) and will continue to operate but in a non-clustered configuration only. To re-detect Failover Clustering and allow Server for NFS to operate in a clustered configuration, Server for NFS should be restarted using either the Services for Network File System (NFS) administrative tool or nfsadmin server stop and nfsadmin server start.”

 

Performance Counters

Server for NFS-NFSv4 Statistics

This performance counter set includes counters related to compound requests processed by Server for NFS, plus a counter indicating the count of virtual servers hosted by Server for NFS.

  • Total Compound Requests: Total number of compound requests processed by Server for NFS since startup.
  • Successful Compound Responses: Total number of compound requests that succeeded since Server for NFS started.
  • Failed Compound Responses: Total number of compound requests that failed since Server for NFS started.
  • Total Virtual Servers: Current count of virtual servers hosted by Server for NFS. This counter is incremented when a virtual server starts successfully and decremented when a virtual server stops. In the non-clustered case this counter is one; in a cluster, there is one virtual server instance per Server for NFS resource.

 

Server for NFS-Netgroup

  • Failures Communicating With NIS: Number of times Server for NFS failed to communicate with the NIS server.

 

Server for NFS-User Mapping

LDAP refers to Active Directory, Active Directory Lightweight Directory Services, or any other RFC 2307-based LDAP store. UNM server refers to a User Name Mapping server.

  • Total LDAP Requests: Number of LDAP query requests made by Server for NFS since startup.
  • Total LDAP Successes: Count of LDAP lookup requests that resulted in a successful UID/GID-to-account-name or account-name-to-UID/GID lookup.
  • Total LDAP Failures: Count of LDAP lookup requests that failed to retrieve identity mapping information from the LDAP store.
  • Total LDAP Requests Per Second: Number of LDAP lookup requests performed per second by Server for NFS.
  • Total UNMP Requests: Number of user name mapping lookup requests performed by Server for NFS since startup.
  • Total UNMP Failures: Count of user name mapping lookup requests issued by Server for NFS that failed; the reason could be a missing mapping or a communication failure with the UNM server.
  • Total UNMP Successes: Count of mapping lookup requests made against the UNM server that returned successful mapping information.
  • Total UNMP Requests Per Second: Count of UNMP mapping lookup requests issued by Server for NFS per second.
  • Average LDAP Lookup Latency: Average time taken by Server for NFS to resolve a UID/GID to an account name (and vice versa) from the LDAP mapping store; that is, the total time spent doing lookups in the LDAP mapping store divided by the total number of mapping lookup requests made to it.
  • Maximum LDAP Lookup Latency: Maximum time taken by Server for NFS to resolve an identity mapping in the LDAP mapping store.
  • Average UNMP Lookup Latency: Average time taken by Server for NFS to resolve a UID/GID to an account name (and vice versa) from the UNMP mapping store; that is, the total time spent doing lookups in the UNMP mapping store divided by the total number of mapping lookup requests made to it.
  • Maximum UNMP Lookup Latency: Maximum time spent by Server for NFS to resolve an identity mapping from the UNMP mapping store.

 

Server for NFS-NFSv4 Read Write Statistics

  • Total Cached MDL Reads: Number of times a read operation was performed using a memory descriptor list (MDL) from the system cache manager.
  • Total Fast IO Reads: Number of times a read operation was performed using buffered IO from the system cache manager.
  • Total Unstable Writes: Count of NFS unstable writes performed by Server for NFS.
  • Average Fast IO Read Latency: Average time taken by Server for NFS to perform a read operation using buffered IO from the system cache manager; that is, the total time spent performing all buffered IO reads divided by the number of buffered IO reads performed so far.
  • Average Non Fast IO Read Latency: Average time taken by Server for NFS to perform read operations using IRP-based IO.

 

Server for NFS-NFSv4 Request/Response Sizes

  • Maximum Size of NTFS Reads: Maximum size in bytes of the read requests performed by Server for NFS.
  • Minimum Size of NTFS Reads: Minimum size in bytes of the read requests performed by Server for NFS.
  • Maximum Size of NTFS Writes: Maximum size in bytes of the write requests performed by Server for NFS.
  • Minimum Size of NTFS Writes: Minimum size in bytes of the write requests performed by Server for NFS.
  • Maximum Compound Request Size: Maximum size in bytes of an NFS compound request.
  • Average Compound Request Size: Average size in bytes of an NFS compound request.
  • Maximum Compound Reply Size: Maximum size in bytes of an NFS compound reply.
  • Average Compound Reply Size: Average size in bytes of an NFS compound reply.
  • Maximum Compound Operations in Request: Maximum number of operations in a single NFS compound request.
  • Average Compound Operations in Request: Average number of operations in an NFS compound request.

 

Server for NFS-NFSv4 Throughput

  • NFS Compounds Processed/Sec: Number of NFS compound requests processed per second.

 

Server for NFS-NFSv4 Operation Statistics

This performance counter set is reported per compound operation: there is one instance of the following counters for each compound operation in the Server for NFS NFSv4 implementation.

  • Count Of Operations Processed: Count of times this NFSv4 compound operation has been processed by Server for NFS so far.
  • % Operations At Dispatch: This counter is not used in the current release.
  • Average Number of Times Operation Requeued: Average number of times this compound operation was re-queued for processing by the worker thread.
  • Reply Packet Not Cached Count: Number of times the reply packet was not cached when requested by the client.
  • Average Latency: Average time taken by the server to execute this compound operation, including the time taken to decode the request and execute the operation.

 

Server for NFS-Session and Connection Statistics

  • Active Sessions Count: Number of active sessions with Server for NFS.
  • Active Connections Count: Number of active connections with Server for NFS.
  • Total Bad Session Requests: Number of session requests (OP_CREATE_SESSION) received by Server for NFS so far with invalid arguments to the operation.
  • KRB5 RPCSEC_GSS Requests Count: Number of requests received with krb5 RPCSEC_GSS authentication.
  • KRB5I RPCSEC_GSS Requests Count: Number of requests received with krb5i RPCSEC_GSS authentication.
  • KRB5P RPCSEC_GSS Requests Count: Number of requests received with krb5p RPCSEC_GSS authentication.
  • AUTH_NONE Requests Count: Number of requests received with AUTH_NONE authentication.
  • AUTH_UNIX Requests Count: Number of requests received with AUTH_UNIX authentication.
  • Client With Sessions: Current count of clients that have a session established with Server for NFS.
  • Total Client With Sessions: Number of clients that have created a session with Server for NFS since startup.
  • Number of times admin forcefully closed a session: Number of sessions force-closed by an administrator (Disconnect-NfsSession cmdlet).
  • Number Of Times Admin Revoked State: Number of open/lock states force-closed by an administrator (Revoke-NfsOpenFile / Revoke-NfsClientLock cmdlets).
  • Lease Expiry Revoke State Count: Number of open/lock states revoked by Server for NFS due to lease expiry.
  • Client Sessions Using Back Channel: Number of client sessions using the back channel.
  • Clients Requesting SP4_MACH State Protection: Number of clients requesting SP4_MACH state protection.
  • Clients Requesting SP4_NONE State Protection: Number of clients requesting SP4_NONE state protection.
  • Clients Requesting SP4_SSV State Protection: Number of clients requesting SP4_SSV state protection.
  • Clients Requesting Bind Principal To State: Number of clients requesting bind-principal-to-state.
  • Clients Requesting Persistent Session: Number of clients requesting a persistent session.
  • Number of clients requesting READW_LT: Count of requests for READW_LT.
  • Number of clients requesting WRITEW_LT: Count of requests for WRITEW_LT.
  • Special Anonymous State ID Use Count: Count of requests using the special anonymous state ID.
  • Special Read Bypass State ID Use Count: Count of requests using the special read bypass state ID.
  • Special Current State ID Use Count: Count of requests using the special current state ID.
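You can sample these counters from PowerShell as well. A minimal sketch; the set and counter names are assumed to match the tables above, so list them first to confirm the exact spelling on your server.

# List the NFS-related counter sets, then sample one counter a few times.
Get-Counter -ListSet "*NFS*" | Select-Object CounterSetName
Get-Counter -Counter "\Server for NFS-NFSv4 Throughput\NFS Compounds Processed/Sec" -SampleInterval 5 -MaxSamples 3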

 

Using the Storage Pools page in Server Manager to create storage spaces


With Windows Server 2012, Server Manager includes a simple, easy-to-use user interface that enables management of Windows Storage Spaces and other storage subsystems, such as (but not limited to) EMC Clariion VNX and Dell EqualLogic storage arrays, through their supporting SMI-S and SMP based storage providers. By using Server Manager's File and Storage Services role user interfaces, you can create and manage storage objects such as storage pools and virtual disks.

In the following post, we will cover common Storage Spaces administrative tasks using the “File and Storage Services” pages in Server Manager. For more information on the Windows Storage Spaces technology, please read:

Storage Pools page in Server Manager:

  • The Storage Pools page, as seen in the image (1.2) below, displays the “Available Disks” pool and the concrete pools.
    • The “Available Disks” pool is a subset of the physical disks from the primordial pool that you can use to create a new concrete pool or add to an existing concrete pool. In simple terms, the physical disks showing in the Storage Pools page under “Available Disks” are the physical disks whose “CanPool” property is true.
    • Different storage subsystems mark the physical disk “CanPool” property as true or false based on the storage vendor’s criteria. For the Windows Storage Spaces subsystem, the “CanPool” property will only be set to true if the disk has at least 4GB of contiguous unallocated space**.
    • To get more details and property values for each physical disk, open an elevated PowerShell console and run “Get-PhysicalDisk | fl *”.
    • To specifically see physical disks that are not showing under the “Available Disks” pool, open an elevated PowerShell console and run “Get-PhysicalDisk -CanPool $false”.

(**Some other disk configurations, such as dynamic disks or disks associated with Windows Server cluster storage, will also mark the physical disk as not appropriate to be added to a storage pool.)

  • Once you select a specific storage pool in the primary tile, the associated virtual disks and physical disks of the selected storage pool show up in the Virtual Disks and Physical Disks related tiles.
  • You must refresh the Storage Pools page to see updates to the subsystem that occurred outside of the Server Manager UI. (Refer to section 1.3 below for the Refresh button.)
  • Management of physical disks through Server Manager relies on the uniqueness of the UniqueID property of each physical disk. While most disk and storage controller manufacturers comply with this requirement, some storage controllers set the same ID on multiple disks. When more than one physical disk has the same value for the UniqueID property, only one physical disk out of the set will be shown. To identify whether the storage controller is setting unique physical disk IDs, open an elevated PowerShell console, run “Get-PhysicalDisk | ft FriendlyName, UniqueId”, and verify whether the physical disk IDs are unique or duplicated.
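The duplicate check can be automated. A minimal sketch that flags any UniqueID shared by more than one physical disk:

# Group disks by UniqueId; any group with more than one member indicates a controller problem.
Get-PhysicalDisk | Group-Object -Property UniqueId |
    Where-Object { $_.Count -gt 1 } |
    ForEach-Object { $_.Group | Format-Table FriendlyName, UniqueId }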

1.      Navigating to the Storage Pools page in Server Manager

1.1.       Launch Server Manager and navigate to the “File and Storage Services” page.

 1.2.       Navigate to the Storage Pools page.

 1.3.       Refresh the UI by clicking on the Refresh button.

 

  Operations

 The following sections describe the common storage management actions of creating a storage pool and a virtual disk on it using Server Manager.

 1.      Create storage pool

 1.1    Logon as a user with admin privileges to your server, launch Server Manager, and then navigate to the “Storage Pools” page within the File and Storage Services Role.

 1.1.1          Right-click the “Available Disks” pool for the Storage Spaces subsystem and launch the New Storage Pool Wizard.

 

 OR

 Launch the New Storage Pool Wizard from the TASKS list.

 

 1.1.2          In the New Storage Pool Wizard, enter desired pool name and optional description. Make sure that you have selected the Primordial pool for the Storage Spaces subsystem.

 

 1.1.3          Select the number of disks needed for pool creation. If you want to designate a physical disk as a hot spare, then select the “Hot Spare” allocation type.

 

 1.1.4          Confirm the selected settings and initiate pool creation by selecting “Create” on the “Confirm selections” page.

 

  2.      Create virtual disk (storage space)

 2.1.    Right-click the concrete pool that you just created (the pool whose Type value is Storage Pool), and then launch the New Virtual Disk Wizard.

 

 2.1.1.  In the New Virtual Disk Wizard, make sure that you have selected the appropriate pool. Enter the desired virtual disk name and optional description.

 

 2.1.2.  Select the desired storage layout and provisioning scheme as per your storage requirements.

 2.1.3.  On the “Specify the size of the virtual disk” page, enter the desired size for the new virtual disk or pick the “Maximum size” option. 

  • If you pick the “Maximum size” option, the system will try to create the largest size possible for the virtual disk.
  • If you select the check box for “Create the largest virtual disk possible, up to the specified size” while specifying the size then the system will try to create the largest size possible for the virtual disk up to the requested size.
  • It is also important to note that the value showing up as the storage pool free space (in our example 43.8GB) is the overall free allocation the pool has. For fixed provisioning with a non-simple storage layout such as Mirror or Parity, when defining the size of the virtual disk you have to take into account the overhead of storage needed to create the extra copies of the virtual disk’s extents for resiliency. As a basic example, with 43.8GB of free space in the pool, creating a 30GB mirrored virtual disk is not possible, since it would take at least 60GB of free space in the pool to hold the two copies of the mirrored data.

 

 2.1.4.  Confirm the settings and initiate virtual disk creation by selecting “Create” on the “Confirm selections” page.

 

 Some Additional Storage Management Related References and Information:

 1)      Storage Space creation using PowerShell cmdlet:

 Below are a couple of PowerShell cmdlet examples that demonstrate the same provisioning action as described above using Server Manager. 

  • Create a storage pool using all the available physical disks. More information on pool creation using PowerShell cmdlet is here.

$PhysicalDisks = Get-StorageSubSystem -FriendlyName "Storage Spaces*" | Get-PhysicalDisk -CanPool $True; 

New-StoragePool -FriendlyName CompanyData -StorageSubsystemFriendlyName "Storage Spaces*" -PhysicalDisks $PhysicalDisks;

  • Create a thinly provisioned, mirrored virtual disk (storage space) on the pool created above. More information on virtual disk creation using PowerShell cmdlet is here.

New-VirtualDisk -StoragePoolFriendlyName CompanyData -FriendlyName DataWarehouse -ResiliencySettingName Mirror -Size 100GB -ProvisioningType Thin;
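As a possible follow-on step (not part of the walkthrough above, and assuming the standard Storage module cmdlets), you could initialize, partition, and format the new virtual disk so it is ready to hold data:

# Sketch: bring the new virtual disk online as a GPT disk, then create and format a volume
Get-VirtualDisk -FriendlyName DataWarehouse | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel DataWarehouse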

 We hope that you found the information above on how to use Server Manager to provision storage useful. For even more resources and further reading, please have a look at the list of links below.

 1)      Storage Spaces Frequently Asked Questions (FAQ)

 2)      Storage Cmdlets in Windows PowerShell

 3)      How to Configure a Clustered Storage Space in Windows Server 2012

 4)      Deploy Clustered Storage Spaces

The Case of the Mysterious Preseeding Conflicts

$
0
0

Hi folks, Ned Pyle here again. Back at AskDS, I used to write frequently about DFSR behavior and troubleshooting. As DFS Replication has matured and documentation grew, these articles dwindled. Recently though, one of the DFSR developers and I managed to find something undocumented:

A DFSR server upgrade where, despite perfect preseeding, files were conflicting during initial sync.

Sound interesting? Love DFSR debug logs? Have insomnia? Read on!

Background

It began with a customer who was in the process of swapping out their existing Windows Server 2008 R2 servers with Windows Server 2012. They needed access to the new data deduplication functionality in order to save disk space; these servers were replicating files written in batches by an application; the files would never shrink or delete, so future disk space was at a premium.

The customer was following the DFSR replacement steps documented in this article. To their surprise, they found that after they reinstalled the operating system (i.e. Part 5, “reinstall or upgrade”), the new servers were writing DFSR file conflict event 4412 for many of the files during initial sync.

Event ID:      4412

Task Category: None

Level:         Information

Keywords:     Classic

User:          N/A

Computer:      srv2.contoso.com

Description:

The DFS Replication service detected that a file was changed on multiple servers. A conflict resolution algorithm was used to determine the winning file. The losing file was moved to the Conflict and Deleted folder.

 

Additional Information:

Original File Path: E:\rf1\1B\2B\0D\somefile.ned

New Name in Conflict Folder: somefile-{59F6007D-4D62-4ACF-9C42-3E293F94E74E}-v6391976

Replicated Folder Root: E:\rf1

File ID: {59F6007D-4D62-4ACF-9C42-3E293F94E74E}-v6391976

Replicated Folder Name: RF1

Replicated Folder ID: CE7DFF07-29C9-4FD6-BE33-91985C524AC5

Replication Group Name: RG1

Replication Group ID: E5643B3A-5E2D-440D-8C18-348E7FC9E08E

Member ID: EF793A1F-FFCB-459E-9A97-9AA5F265B8FC

Partner Member ID: 578628CB-11B6-4CC1-932A-788B37CFF026

This was theoretically impossible, because their special application:

  1. Only wrote to a single server, not all replication nodes
  2. Never modified or overwrote existing files

Since this was a new OS and the new dedup feature was in the mix, the initial concern was that scheduled dehydrations were somehow altering files that DFSR had not yet finished examining for initial replication. Perhaps the files appeared different between servers, and DFSR was deciding to force existing files to lose conflicts. Even more interestingly though, when we examined the files using DFSRDIAG FILEHASH, the file hashes were identical:

  • File Path: E:\rf1\1B\2B\0D\somefile.ned
  • Windows Server 2008 R2 file hash: 6691A27E-030CEFC2-5234258D-3D812539
  • Windows Server 2012 file hash: 6691A27E-030CEFC2-5234258D-3D812539
  • After dedup optimization file hash: 6691A27E-030CEFC2-5234258D-3D812539
  • After the conflict file hash: 6691A27E-030CEFC2-5234258D-3D812539

The only difference was the file attribute from the dedup reparse points, as we would expect, and we knew Windows Server 2012 DFSR fully supports dedup and does not consider them differing files. The local conflicts were happening, in effect, cosmetically. They were pointless and slowed initial sync slightly, but at least no data was being lost.
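If you want to reproduce this kind of comparison yourself, here is a sketch using the paths from the example above (the admin-share path is illustrative; confirm the exact switch syntax with dfsrdiag filehash /?):

rem hash of the local copy
dfsrdiag filehash /path:"E:\rf1\1B\2B\0D\somefile.ned"
rem hash of the same file on a partner, reached over its admin share
dfsrdiag filehash /path:"\\srv2.contoso.com\e$\rf1\1B\2B\0D\somefile.ned"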

So why on Earth were we seeing this behavior?

Digging Deeper

We enabled DFSR debug logging’s most verbose mode and the customer performed a server replacement – we then waited to see our first conflict. What follows is a (greatly modified for readability) log analysis:

The sample downloaded file: somefile.ned:

DFSR is replicating in a file with the exact same name and path as an existing file on the downstream DFSR server:

20130115 19:17:07.342 5796 MEET  1332 Meet::Install Retries:0 updateName:somefile.ned uid:{30CCEB24-696C-4315-A8E0-8C70EE025A44}-v8099934 gvsn:{30CCEB24-696C-4315-A8E0-8C70EE025A44}-v8099934 connId:{752068BE-5AA9-4CD0-9EA4-C7220BDE47F4} csName:Rf1 updateType:remote

DFSR decides to download it using RDC cross-file similarity:

20130115 19:17:08.405 5796 RDCX   757 Rdc::SeedFile::Initialize RDC signatureLevels:1, uid:{30CCEB24-696C-4315-A8E0-8C70EE025A44}-v8099934 gvsn:{30CCEB24-696C-4315-A8E0-8C70EE025A44}-v8099934 fileName:somefile.ned fileSize(approx):557056 csId:{9CC90AD2-A99E-4084-8D32-16B1242BF45E} enableSim=1

It found similar files because the previous similarity info from the old Windows Server 2008 R2 replication still exists on the volume and DFSR was re-using it (more on this later):

20130115 19:17:08.498 5796 RDCX  1308 Rdc::SeedFile::UseSimilar similar related (SimMatches=8) uid:{30CCEB24-696C-4315-A8E0-8C70EE025A44}-v8099934 gvsn:{30CCEB24-696C-4315-A8E0-8C70EE025A44}-v8099934 fileName:somefile.ned csId:{9CC90AD2-A99E-4084-8D32-16B1242BF45E} (related: uid:{5D37EFB0-1472-4AA2-B697-1942BB7DE29C}-v9524579 gvsn:{5D37EFB0-1472-4AA2-B697-1942BB7DE29C}-v9524579 fileName:somefile.ned csId:{9CC90AD2-A99E-4084-8D32-16B1242BF45E})

DFSR decides that it’s going to use the file and checks to see if it is already staged (it’s not):

20130115 19:17:08.545 5796 STAG  4222 Staging::GetStageReaderOrWriter

+         fid           0x1000000800CFC

+         usn           0x27d2613f0

+         uidVisible    0

..

..

+         gvsn          {5D37EFB0-1472-4AA2-B697-1942BB7DE29C}-v9524579

+         uid            {5D37EFB0-1472-4AA2-B697-1942BB7DE29C}-v9524579

+         parent        {5D37EFB0-1472-4AA2-B697-1942BB7DE29C}-v9520714

..

+         hash          00000000-00000000-00000000-00000000

+         similarity    00000000-00000000-00000000-00000000

+         name          somefile.ned

+         Failed to get stage reader as the file is not staged

DFSR then stages the file and updates the hash and similarity information:

20130115 19:17:08.592 5796 CSMG  3585 ContentSetManager::UpdateHash LDB Updating ID Record:

+         fid           0x1000000800CFC

+         usn           0x27d2613f0

+         uidVisible    1

+         filtered      0

..

+         gvsn          {5D37EFB0-1472-4AA2-B697-1942BB7DE29C}-v9524579

+         uid           {5D37EFB0-1472-4AA2-B697-1942BB7DE29C}-v9524579

+         parent        {5D37EFB0-1472-4AA2-B697-1942BB7DE29C}-v9520714

..

+         hash          1CC352AE-916F21F8-1F4E69E4-51A835CA

+         similarity    06032621-083C3D3A-212D182C-0C0A233C

+         name          somefile.ned      

By doing this, DFSR also updates uidVisible, which is an indication that the file can replicate out (i.e. is visible to other replicas). This makes sense, because the file is in the similarity table and therefore must have been staged in the past in order to replicate out.

Now comes the turn to replicate in the “new” file that we are interested in, which is the same file with the same name, but of course a different UID (since when a server performs initial sync, it creates local UIDs for all the existing files). Its ID record has the uidVisible set to 1 and that leads to UidInheritEnabled returning FALSE:

20130115 19:17:16.748 5796 MEET  3369 Meet::UidInheritEnabled UidInheritEnabled:0 updateName:somefile.ned uid:{30CCEB24-696C-4315-A8E0-8C70EE025A44}-v8099940 gvsn:{30CCEB24-696C-4315-A8E0-8C70EE025A44}-v8099942 connId:{752068BE-5AA9-4CD0-9EA4-C7220BDE47F4} csName:Rf1

This means that we can’t inherit the UID – and therefore cannot simply update the database and move on – because the file has “been replicated out” from DFSR’s perspective and must therefore be a unique file. Even though it really hasn’t – DFSR just assumes so, because how else would the similarity table already know about it? When DFSR goes through the download process, it finds that we have the same file with different UIDs, on a file that has UID visible already:

20130115 19:17:16.748 5796 MEET  6330 Meet::LocalDominates update:

+         present          1

..

+         gvsn             {30CCEB24-696C-4315-A8E0-8C70EE025A44}-v8099942

+         uid              {30CCEB24-696C-4315-A8E0-8C70EE025A44}-v8099940

+         parent           {65BDCD7F-9F8A-4FFD-B9C0-744D0405AFE5}-v7450758

..

+         hash             1CC352AE-916F21F8-1F4E69E4-51A835CA

+         similarity       06032621-083C3D3A-212D182C-0C0A233C

+         name             somefile.ned

+         related.record:

+         fid              0x1000000800CFC

+         usn              0x27d2613f0

+         uidVisible       1

+         filtered         0

..

+         gvsn             {5D37EFB0-1472-4AA2-B697-1942BB7DE29C}-v9524579

+         uid              {5D37EFB0-1472-4AA2-B697-1942BB7DE29C}-v9524579

+         parent           {65BDCD7F-9F8A-4FFD-B9C0-744D0405AFE5}-v7450758

..

+         csId           {9CC90AD2-A99E-4084-8D32-16B1242BF45E}

+         hash           1CC352AE-916F21F8-1F4E69E4-51A835CA

+         similarity     06032621-083C3D3A-212D182C-0C0A233C

+         name           somefile.ned

Because of the different UIDs and the fact that the local one has UID visible already, DFSR generates the conflict:

20130115 19:17:16.748 5796 MEET  2989 Meet::InstallRename Moving out name conflicting file updateName:somefile.ned uid:{30CCEB24-696C-4315-A8E0-8C70EE025A44}-v8099940 gvsn:{30CCEB24-696C-4315-A8E0-8C70EE025A44}-v8099942 connId:{752068BE-5AA9-4CD0-9EA4-C7220BDE47F4} csName:Rf1

But since the files are truly the same, the conflict doesn’t really matter. DFSR is just making a pointless conflict that writes an event, but which an end-user would never worry about because nothing is different in the winning file.

Why did we already have similarity?

This boils down to a by-design DFSR behavior: if it finds any old similarity files, it uses them. Those special sparse files live under the <volume>\system volume information\dfsr and are called:

  • SimilarityTable_1
  • SimilarityTable_2
  • FileIDTable_1
  • FileIDTable_2

The FileIdTable files act in conjunction with the SimilarityTable files and contain the file info that matches the similarity table’s signature data; that way, cross-file RDC can traverse the similarity table for matching signatures and then look up the matching file ID records.

This customer was doing the right thing and following our steps to remove the previous data, just as the blog posts state. However, since these were hidden files and the root DFSR folder was not deleted, they were skipped, leaving the old similarity table behind. Just a simple oversight (I have since reviewed the DFSR hardware migration article and downloads to make sure this is 100% clear in the steps).
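If you are retiring a volume for reuse, one possible cleanup sketch (assumed paths; run elevated, with DFSR stopped or removed, and only as part of the official replacement steps) looks like this:

# Stop DFSR so the similarity files are not in use
Stop-Service DFSR
# Take ownership of the hidden folder and grant Administrators access
takeown /f "E:\System Volume Information\DFSR" /r /d y
icacls "E:\System Volume Information\DFSR" /grant Administrators:F /t
# Remove the stale similarity and file ID tables along with the rest of the old DFSR folder
Remove-Item "E:\System Volume Information\DFSR" -Recurse -Force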

The Sum Up

Like many issues with complex distributed computing systems like DFSR, the law of unintended consequences rules. When Windows Server 2003 R2 DFSR was first designed more than ten years ago, no one was thinking hard about DFSR pre-seeding or upgrading, of course.

Always make sure that you thoroughly delete previous DFSR configuration files when following the DFSR hardware and OS replacement steps, and everything will be swell.

Until next time,

- Ned Pyle

DFSR Reparse Point Support (or: Avoiding Schrödinger's File)

$
0
0

Hi folks, Ned Pyle here again. We are occasionally asked which reparse points DFS Replication can handle, and if we can add more. Today I explain DFSR behaviors and why simply adding reparse point support isn’t cut and dried.

Background

A reparse point is user-defined data understood by an application. Reparse points are stored with files and folders as tags; when the file system opens a tagged file, the OS attempts to find the associated file system filter. If found, the filter processes the file as directed by the reparse data.

You may already be familiar with one reparse point type, called a junction. Domain controllers have used a few junction points since Windows 2000, in the SYSVOL folder. Any guesses on why? Let me know in the Comments section.

Another common junction is the DfsrPrivate folder. Since Windows Server 2008, the DfsrPrivate folder has used a reparse point back into the \System Volume Information\DFSR\<some RF GUID> folder.

You can see these using DIR with the /A:L option (the attribute L shows reparse points), or with FSUTIL if you’re interested in tag details for some reason.
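For example, a quick sketch from an elevated command prompt on a domain controller (the domain name is illustrative):

dir C:\Windows\SYSVOL\sysvol /a:l
fsutil reparsepoint query "C:\Windows\SYSVOL\sysvol\contoso.com"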

Enter DFSR

DFSR deliberately blocks most reparse points from replicating, for the excellent reason that tags can point to data that exists outside the replicated folder, or to folder paths that don’t align between DFSR servers. For example, if I am replicating c:\rf2, you can see how these reparse point targets will be a problem:

[Image: Mklink is the tool of choice for playing with reparse points]
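If you want to experiment in a lab, here is a sketch of the three link types (the target paths are made up):

rem symbolic link - the tag itself replicates, but the target may not exist downstream
mklink c:\rf2\link.txt c:\users\ned\target.txt
rem junction - blocked by DFSR, which logs event 4406
mklink /j c:\rf2\junction c:\temp
rem hard link - ignored by DFSR entirely, and confined to a single volume
mklink /h c:\rf2\hardlink.txt c:\rf2\original.txt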

We talk about support in the DFSR FAQ: http://technet.microsoft.com/en-us/library/cc773238(WS.10).aspx

Does DFS Replication replicate NTFS file permissions, alternate data streams, hard links, and reparse points?

  • Microsoft does not support creating NTFS hard links to or from files in a replicated folder – doing so can cause replication issues with the affected files. Hard link files are ignored by DFS Replication and are not replicated. Junction points also are not replicated, and DFS Replication logs event 4406 for each junction point it encounters.
  • The only reparse points replicated by DFS Replication are those that use the IO_REPARSE_TAG_SYMLINK tag; however, DFS Replication does not guarantee that the target of a symlink is also replicated. For more information, see the Ask the Directory Services Team blog.
  • Files with the IO_REPARSE_TAG_DEDUP, IO_REPARSE_TAG_SIS, or IO_REPARSE_TAG_HSM reparse tags are replicated as normal files. The reparse tag and reparse data buffers are not replicated to other servers because the reparse point only works on the local system. As such, DFS Replication can replicate folders on volumes that use Data Deduplication in Windows Server 2012 or Single Instance Storage (SIS); however, data deduplication information is maintained separately by each server on which the role service is enabled.

Different reparse points give different results. For instance, you get a friendly event log error for junction points:

[Image: Well, as friendly as an error can be, I reckon]

A hard-linked file uses NTFS magic to tie various instances of a file together (I’ve talked about this before in the context of USMT). We do not allow DFSR to deal with all those instances, as the file can be both in and out of the replica set, simultaneously. Moreover, hard-linked files cannot move as hardlinks between volumes – even if you were just copying the files between the C and D drive yourself.

You probably don’t care about this, though; hardlinks are extremely uncommon and your users would have to be very familiar with MKLINK to create one. If by some chance someone did actually create one, you get a DFSR debug log entry instead of an event. For those that like reading such things:

20130122 17:27:24.956 1460 OUTC   591 OutConnection::OpenFile Received request for update:

+      present                         1

+      nameConflict                    0

+      attributes                      0x20

+      ghostedHeader                   0

+      data                            0

+      gvsn                            {85BFBD50-BC6D-4290-8341-14F8D64304CB}-v52 <-- here I modified a hard-linked file on the upstream DFSR server

+      uid                             {85BFBD50-BC6D-4290-8341-14F8D64304CB}-v51

+      parent                          {3B5B7E77-3865-4C42-8BBE-DD8A15F8BC1E}-v1

+      fence                           Default (3)

+      clockDecrementedInDirtyShutdown 0

+      clock                           20130122 22:27:21.805 GMT (0x1cdf8efa5515cb2)

+      createTime                      20130122 22:25:49.736 GMT

+      csId                            {3B5B7E77-3865-4C42-8BBE-DD8A15F8BC1E}

+      hash                            00000000-00000000-00000000-00000000

+      similarity                      00000000-00000000-00000000-00000000

+      name                            hardlink4.txt

+      rdcDesired:1 connId:{FA95B57E-8076-47F6-B08A-768E5747B39E} rgName:rg2

 

20130122 17:27:24.956 1460 OUTC  4403 OutConnectionContentSetContext::GetUpdatedRecord Database is too out of sync with updateUid:{85BFBD50-BC6D-4290-8341-14F8D64304CB}-v51 connId:{FA95B57E-8076-47F6-B08A-768E5747B39E} rgName:rg2

 

20130122 17:27:24.956 1460 SRTR  3011 [WARN] InitializeFileTransferAsyncState::ProcessIoCompletion Failed to initialize a file transfer. connId:{FA95B57E-8076-47F6-B08A-768E5747B39E} rdc:1 uid:{85BFBD50-BC6D-4290-8341-14F8D64304CB}-v51 gsvn:{85BFBD50-BC6D-4290-8341-14F8D64304CB}-v52 completion:0 ptr:0000008864676210 Error: <-- DFSR warns that it cannot begin the file transfer on the changed file; note the matching UID that tells us this is hardlink4.txt

+      [Error:9024(0x2340) UpstreamTransport::OpenFile upstreamtransport.cpp:1238 1460 C The file meta data is not synchronized with the file system]

+      [Error:9024(0x2340) OutConnection::OpenFile outconnection.cpp:689 1460 C The file meta data is not synchronized with the file system]

+      [Error:9024(0x2340) OutConnectionContentSetContext::OpenFile outconnection.cpp:2562 1460 C The file meta data is not synchronized with the file system]

+      [Error:9024(0x2340) OutConnectionContentSetContext::GetUpdatedRecord outconnection.cpp:4436 1460 C The file meta data is not synchronized with the file system]

+      [Error:9024(0x2340) OutConnectionContentSetContext::GetUpdatedRecord outconnection.cpp:4407 1460 C The file meta data is not synchronized with the file system] <-- DFSR states that the file meta data is not in sync with the file system. This is true! A hardlink makes a file exist in multiple places at once, and some of these places are not replicated.

With symbolic links (sometimes called soft links), DFSR does support replication of the reparse point tags. DFSR sends the reparse point - without modification - along with the file. There are some potential issues with this, though:

1. Symbolic links can point to data that lies outside the replicated folder

2. Symbolic links can point to data that lies within the replicated folder, but along a different relative path on each DFSR server

3. Even though a Windows Server 2003 R2 DFSR server can replicate a symbolic link-tagged file in, it has no idea what a symbolic link tag is!

I documented the first case a few years ago. The second case is more subtle – any guesses on why this is a problem? When you create a symbolic link, you usually store the entire path in the tag. This means that if the downstream DFSR server uses a different relative path for its replicated folder, the tag will point to a non-existent path:

[Image: Alive and dead at the same time… like the quantum cat]

The third case is becoming less of a possibility all the time; Windows Server 2003 R2 DFSR is on its way out, and we have steps for migrating off.

For all these reasons – and much like steering a car with your feet - using symbolic links is possible, but not a particularly good idea.

That leaves us with the special reparse points for single-instance storage and dedup. Since these tags are used merely to aid in the dehydration and rehydration of files for de-duplication purposes, DFSR is happy to replicate the files. However, since dedup is something done only on a per-volume, per-computer basis, DFSR sends the rehydrated file without the reparse tag to the other server. This means you will have to run dedup on all the replication partners in order to save that space. Dehydrating and rehydrating files on Windows Server 2012 does not cause replication.

Why making changes here is harder than it looks

Even though reparse point usage is uncommon, there are some cases where third party software vendors will use them. The example I usually hear is for near-line archiving or tiered storage: by traversing a reparse point, an application will use a filter driver to perform its magic. These vendors or their customers periodically ask us to add support for reparse point type X to DFSR.

This puts us in a rather difficult position. Consider the ramifications of this change:

1. It introduces incompatibility into DFSR nodes, where older OSes will not understand or support a new data type, leading to replica sets that will never converge. This divergence will not be easily discoverable until it’s too late. Data divergence scenarios caused by topology design are very hard to communicate – i.e. for every administrator that reads the TechNet, KB, or blog post telling them why they cannot safely use certain topologies, many others will simply deploy and then later open support cases about DFSR being “broken”. Data fidelity is the utmost priority in a distributed file replication system.

This already happens with unsupported reparse points – I found 66 MS Support cases from the past few years with a quick search of our support case database, and that was just looking for obvious signs of the problem.

2. Even if we added the reparse point support and simply required customers to use the latest version of Windows Server and DFSR, customers would have to replace all nodes simultaneously in any existing replicas. These can number in the hundreds. Even if it were only two nodes, they would have to remove and recreate the replication topology, and then re-replicate all the data. Otherwise, end users accessing data will find some nodes with no data, as those files are filtered out of replication on previous operating systems. This kind of “random” problem is no fun for administrators to troubleshoot, and if using DFS Namespaces to distribute load or raise availability, the problem grows.

3. Since we are talking about third party reparse point tags, DFSR would need a mechanism for allowing customers to add the tag types – we can’t hard-code non-Microsoft tags into Windows, obviously. There is nowhere in the existing DFSR management tools to specify this kind of setting, and no attribute in Active Directory. This means customers would have to hand-maintain the custom reparse point rules on a per-server basis, probably using the registry, and remember to set them as new nodes were added or replaced over time. If the new junior admin didn’t know about this when sent off to add replicas, see #1 and #2.

Distributed file replication is one of the more complex computing scenarios, and it is a minefield of unintended consequences. Data that points to other data is an area where DFSR has to take great care, lest you create replication black holes. This goes for us as the developers of DFSR as well as you, the implementers of complex topologies. I hope this article sheds some light on the picture.

Moreover, the next time you ding your software vendor for not supporting DFSR, check with them about reparse points – that very well may be the reason. Heck, they may have sent you this blog post!

Until next time,

- Ned “Reductio ad absurdum” Pyle

Safely Virtualizing DFSR

$
0
0

Hi folks, Ned here again. With the massive growth of virtualization, odds are you want to safely and reliably run Distributed File System Replication on Hyper-V, Xen, KVM, or VMware – heck, you may already be doing so.

With that in mind, I am here today to save your job! OK, that might be overstating things; let me try again: I am here today to save your file servers!

Multi-Master Background

You can virtualize many server workloads transparently without requiring any special considerations; the hypervisor just becomes a standard “hardware platform” within the company. When you deploy multi-master replication technologies, though - such as Active Directory Domain Services, the File Replication System, or DFSR – you must exercise greater care.

DFSR contains a database on every volume participating in file replication. This extensible storage engine (aka Jet) database records every replicated update with a set of metadata we call the “summary of knowledge”, or the “version vector”. This means that each server keeps track of:

  • UID – Globally Unique ID
  • GVSN – Globally Unique Version
  • Name – File or Folder Name
  • Parent – Globally Unique ID for parent

The UID and GVSN amount to:

  • A GUID that matches the DFSR database ID of a given server
  • The version high watermark on that database’s volume when we notice the file creation, update, or deletion.

You can see these records for a given file with the DFSRDIAG IDRECORD command, and if you want to know which GUIDs originate from which servers, use the DFSRDIAG GUID2NAME command.

[Image: Clearly a file of critical importance]
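As a sketch (the path, GUID, and replication group name are placeholders; confirm the exact switches with dfsrdiag <command> /?):

dfsrdiag idrecord /filepath:c:\rf1\canary.xlsx /rgname:rg1 /rfname:rf1
dfsrdiag guid2name /guid:<database-guid> /rgname:rg1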

This version vector number (aka “Logical Clock”) never decrements and each DFSR server keeps track of other servers’ latest known version number as part of replication. While the UID never changes for a file, the GVSN updates with every file or folder alteration.

Consider the table below that describes the database of “ServerA”: it lists a replicated folder named “rf1” with two files named “canary” and “cat” created on ServerA:

Name          UID             GVSN          Parent
RF1           ContentSet-v1   ServerA-v10   -
Canary.xlsx   ServerA-v11     ServerA-v11   ContentSet-v1
Cat.docx      ServerA-v12     ServerA-v12   ContentSet-v1

Note: When DFSR creates the first replicated folder, the UID of the replicated folder is marked with the GUID of the content set from Active Directory and “v1”, and the GVSN with the GUID of the first server in the replicated folder and “v10”.

At this point, the downstream ServerB database only knows about the RF:

Name   UID             GVSN          Parent
RF1    ContentSet-v1   ServerA-v10   -

The servers then exchange version vectors. Since ServerB knows that ServerA has a high watermark of 12 and that ServerB is missing 11 through 12 from ServerA by looking in its own database, it requests those updates, replicates them, and updates its own database to match.

When ServerA replicates outbound to its partner ServerB – a process called “joining” – it sends along the version vector chain “ServerA –> 11..12”, then replicates the files in that chain.


When complete, both server databases look identical and ServerB knows of the latest changes from ServerA:

Name          UID             GVSN          Parent
RF1           ContentSet-v1   ServerA-v10   -
Canary.xlsx   ServerA-v11     ServerA-v11   ContentSet-v1
Cat.docx      ServerA-v12     ServerA-v12   ContentSet-v1

If ServerB originates a modification to cat.docx and a subsequent new file dog.txt later, it uses the high watermark of ServerB at that point and the process flows in the other direction, with ServerA then knowing the latest VV from ServerB. Moreover, if I add a “Fried” subfolder with a file called “chicken.rtf” to ServerA, this replicates to ServerB with the deeper parent relationship.

Now the databases on both servers look like this:

Name          UID             GVSN          Parent
RF1           ContentSet-v1   ServerA-v10   -
Canary.xlsx   ServerA-v11     ServerA-v11   ContentSet-v1
Cat.docx      ServerA-v12     ServerB-v65   ContentSet-v1
Dog.txt       ServerB-v66     ServerB-v66   ContentSet-v1
Fried         ServerA-v13     ServerA-v13   ContentSet-v1
Chicken.rtf   ServerA-v14     ServerA-v14   ServerA-v13

And the file system looks like this:

[Image: Mmmm… fried chicken…]

The Big Cannoli – do not restore snapshots or whole machine backups

By now, you’re asking yourself “interesting, but… what does this have to do with virtualization?” Multi-master replication relies on this incrementing logical clock and – as any sci-fi buff knows – time travel carries a whole host of problems when going backwards and forwards.

So if you read nothing else in this blog post, read this:

  • Virtual Machine Saved States/Snapshots. When virtualizing DFSR, start the virtual machine, run DFSR and if you need to stop the virtual machine, fully shut down the guest OS. Do not use saved states or snapshots.
  • Backing Up Virtualized DFSR. When backing up virtualized DFSR, perform a guest side backup using a backup product that is VSS aware. Do not perform or restore from host side backups of virtualized DFSR servers.

Custom DFSR

If you don’t follow this guidance and attempt to use saved states/snapshots, or attempt to restore a virtual machine using a custom host side restore solution, that VM will stop replicating forever. The server logs DFSR events 2212, 2104, 2004, and 2106. DFSR overwrites locally originating changes on a restored server as conflicts from other servers. The restored server and its partners will not be able to figure out what happened, as they do not ever expect the clock to go backwards. A regular backup restore of the database inside the VM using the internal VSS writer is the only way DFSR handles this correctly – and even that is rather overkill; it is easier to simply back up the files and restore them without worrying about the database at all. If you have to follow KB2517913, you are in a world of hurt and must work with MS Support to delete your DFSR database structure on any restored servers, perhaps requiring some setting of primary replication to ensure files flow in or out in the manner you need.

Don’t create snapshots of custom DFSR servers. If you need backups, back up the replicated files and the system state by running your backup agents within the VM, not outside of it – just like a physical machine. If anyone tells you different, point to this article and wag your finger. If you absolutely must back up entire machines, restore the virtual disks, mount them offline, and extract the files from them for the restore – do not instantiate a running instance of the virtual machine (i.e. start it and run it). If you really must restore the entire machine to get it back in working order, understand that DFSR is now broken until its database is correctly recreated, and the server will replicate inbound non-authoritatively when you delete that database.

DCs and SYSVOL

The only time you can “safely” restore DFSR (and FRS) snapshots or whole machine backups is when using the Virtualized Domain Controller feature introduced in Windows Server 2012, and only for the built-in SYSVOL replica created automatically by domain controllers. Domain controllers understand a feature called VM-Generation ID and when they detect a snapshot restore or a whole machine restore (e.g. from DPM 2012), they delete the DFSR database that references SYSVOL, which forces replication to completely re-run inbound non-authoritatively. This also requires a hypervisor such as Windows Server 2012 Hyper-V or Microsoft Hyper-V Server 2012 that supports VM Gen-ID and a snapshot/backup solution that does as well, naturally.

Why is “safely” in quotes above? Because there are still risks when restoring DC snapshots or full machine backups:

1. If you restore all DCs in a domain, SYSVOL still stops replicating. There is no authoritative DC and all will be attempting non-auth sync simultaneously.

2. Any custom DFSR replicas on volumes other than the one containing SYSVOL stop replicating and require repair. Moreover, even though custom RFs on the same volume as SYSVOL “accidentally” recover non-authoritatively (they share the same database), they are likely to see perceived data loss from any changes not replicated out before the snapshot restore eliminated them.

3. Even though SYSVOL replicating inbound non-authoritatively is generally ok - it does not usually have many changes and the changes mostly originate from one server, the PDC Emulator - group policies will certainly be disrupted on that node until sync completes. In addition, if it was the PDCE, recent GP changes that never replicated out will be lost forever just like in #2.

4. This requires homogenous virtualization of Windows Server 2012 or later. Windows Server 2008 R2 and older domain controllers do not support the VM Generation ID, so they just end up in USN rollback for AD with ruined SYSVOL databases.

Do I sound like a broken record? This is old news for AD administrators – they have been reading this for years in KB articles, TechNet articles, and blog posts, going all the way back to Virtual Server 2005 – but we want to be clear on this critical guidance.

The Rest

What are the other recommendations around safely virtualizing DFSR servers?

Note: I can only speak to Microsoft Hyper-V in this article; please contact your third-party hypervisor vendor for details, capabilities, and limitations.

1. Microsoft recommends Hyper-V as the preferred hypervisor for virtualized DFSR. As stated previously, Windows Server 2012 Hyper-V/Microsoft Hyper-V Server 2012 both include a critical feature, VM Generation ID, designed to help alleviate issues with workloads such as Active Directory, File Replication Services and DFSR that don’t like to be time shifted. Furthermore, Microsoft develops, tests, optimizes and validates its products with Hyper-V as part of its Common Engineering Criteria.

Finally, Microsoft does not recommend using hypervisors that aren’t supported via the Microsoft Server Virtualization Validation Program (SVVP).

2. Avoid Snapshots and host side virtual machine backups – enough said on the matter. Restoring an entire VM from a saved state or a snapshot breaks custom DFSR. Use VSS backups inside the VM.

3. Reliable and high-performing hypervisor host disk subsystem that writes through to disk – The host physical disk system must satisfy at least one of the following criteria to ensure the virtualized workload’s data integrity through power faults:

    • The system uses server-class disks (SCSI, Fiber Channel, etc.)
    • The system uses disks connected to a battery-backed caching host bus adapter (HBA)
    • The system uses a storage controller (for example, a RAID system) as the storage device
    • The system is protected by an uninterruptible power supply (UPS)
    • The system’s disk write-caching feature is disabled

Without these safeguards in place, database corruption during unexpected power loss becomes much more likely. When DFSR detects database corruption in Windows Server 2012 and older operating systems, it deletes the database and forces inbound non-authoritative sync.

When virtualizing DFSR with Hyper-V, use virtual disks attached to virtual SCSI controllers for the DFSR data (via a VM’s Settings page). Doing so means significantly improved I/O speeds, as virtual SCSI outperforms the virtual IDE stack (even without write caching). Keep in mind that Windows Server 2012 Hyper-V and earlier versions can only boot from the virtual disks attached to the virtual IDE controller. Thus, your virtual machines should be configured to:

       A. Boot the OS from a virtual disk attached to the virtual IDE controller.

       B. Attach one or more “data” disks to the virtual SCSI controller and only replicate data on those disks.

[Image: Why yes, that is my host name. I don’t have a big hardware budget, I am a PM…]

When using Hyper-V, place your DFSR data on virtual disks attached via virtual SCSI!

4. Do not clone, export, or copy VMs – As you have seen in the first section, DFSR relies heavily on certain unique aspects of a computer, such as the volume and the database signatures. If you create multiple instances of DFSR servers, then all of them will believe themselves to be the same server from a DFSR perspective – even if you were to rename the servers and rejoin them to the domain. Clones create black holes in the replication topology if you reuse these machines with the same local DFSR metadata (especially if you accidentally left them with the same name and IP – this is a possibility even with VDC). Always create new servers from Sysprepped images or fresh media and never create copies of existing servers.

5. P2V with care – While physical to virtual conversion of DFSR servers is supported using SCVMM or Disk2Vhd, you must not allow both computers to run simultaneously. With SCVMM, this means choosing “offline mode conversion”, and with Disk2VHD it simply means being careful. In both cases, once you have a working VM, you must never allow the physical machine back on your network again (see #4).

6. Do not pause or save state VMs – while not intrinsically bad, a paused or saved VM is like one that is turned off: the longer you wait, the more divergent its replication gets. Wait too long and it will never replicate again, especially since Windows Server 2012 enables content freshness by default. Getting into the habit of pausing and saving VMs is like the habit of turning off physical machines – eventually, you are going to forget you did it on one of them.
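Content freshness is governed by the MaxOfflineTimeInDays setting (60 days by default on Windows Server 2012); as a sketch, you can inspect or – cautiously – change it with WMIC:

wmic.exe /namespace:\\root\microsoftdfs path DfsrMachineConfig get MaxOfflineTimeInDays
wmic.exe /namespace:\\root\microsoftdfs path DfsrMachineConfig set MaxOfflineTimeInDays=60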

7. Use multiple virtualization hosts – This is the first “all eggs in one basket” scenario. If you only use one host, there may come a day when that host croaks. Use Failover Clustering, Hyper-V Live Migration, or Hyper-V Replica to ensure that even if a hypervisor host is down, its workloads are not. If not deploying true high availability, make sure that you at least run VMs on more than one host machine. This is a (very) basic assurance that a single hardware failure will not stop all your file replication.

8. Secure your guests and hosts – Unlike the other best practices, this is not about reliability but true data safety:

    • Ensure that only your most trusted administrators have local administrator privileges on the hypervisor (i.e. with unfettered access to the virtual disks). It’s impossible to stop an administrator from simply copying a virtual disk onto an external disk for later attack, so the MS Support recommendation is legally bonding your staff against malfeasance and keeping good audit and physical security practices.
    • Your Hyper-V admins use the Hyper-V Administrators group and only have file system access to their own VMs.
    • Physical disks containing your virtual disk should use encryption, such as BitLocker (with a TPM and multiple protectors). Even better, ensure the hosts also use BitLocker Network Unlock so that they cannot easily walk out of the building. (It’s not supported to run BitLocker within a virtual machine, as you would then need to use Password Protectors, and rebooting thousands of servers on patch Tuesday would turn ugly.)

Last Words

Remember: the operating system, the domain controller, the virtual machines, DFSR, etc. – none of that matters to the business folks. All that matters is the data. That is where most companies make their money. The rest is just necessary cost, like desks and pens and a microwave in the break room. Everything must bend to the need for protecting that data from harm.

This may be a TechNet article someday, but in the meantime, use this blog post as your go-to guidance for virtualizing DFSR. If you find other issues, please raise them in the comments – I’m happy to add updates to the main article in the future.

Until next time,

- Ned “Don’t save state!” Pyle

HP StoreEasy and Windows Server 2012 SMB Performance

$
0
0

Hi folks, Ned here again with a quickie post. HP recently released their latest file server SMB testing results, where they used FSCT and IOMeter to collect performance and scalability data with Windows Server 2012 running on HP StoreEasy 5000 Storage NAS servers.

The whitepaper holds plenty of details, and here’s a good one to get you started:


[Image: FSCT performance results chart]

Even the midline-drive equipped StoreEasy model handled 2,500 peak workload home folder users – and if you went for the top of the line enterprise-class disks, it stood up to 26,000 simultaneous users, staying at ~30% CPU utilization, even while anti-virus scanning.


In HP’s previous tests using Windows Server 2008 R2 on the X5000 G2 NAS, they reached 22,500 peak users with anti-virus running, and leveling at 60% CPU. The new data shows a 13.5% improvement, even before you consider that it comes with half the CPU usage. Even with no anti-malware running, the previous OS and hardware did not reach the same number of users nor the same low CPU time.

These tests focused on using Windows 7 clients, meaning SMB 2.1 and not the full gamut of SMB 3.0 features introduced in Windows 8. That’s perfectly OK; we realize that it’s early days and Windows 7 is still the 800-pound gorilla. We’re looking forward to HP’s next tests with an end-to-end SMB 3.0 stack.

Thanks very much to our partner Hewlett-Packard Company for working with us to perform these tests and share them with the public.

- Ned “hpster” Pyle


OEM Appliance OOBE update: Set up a 4-node cluster in 30 minutes!

$
0
0

Hi folks, Scott here to talk about a new OEM Appliance OOBE (Out-of-box-experience) update that is now available for Windows Server 2012 and Windows Storage Server 2012. Windows server manufacturers (OEMs) can leverage this deployment tool to help their end-users rapidly deploy new clusters, in as few as 30 minutes from power-on to continuously-available services.

In Windows Server 2012, we included the OEM appliance OOBE to enable our manufacturers to design these custom setup experiences for standalone servers and two node clusters. This included booting the machines, selecting language, keyboard and regional settings, server naming, passwords, joining a domain, preparing shared storage and creating the cluster. The clustered appliances can be deployed from a single pane of glass application running on any one of the nodes.

The Windows team just released KB2769588, which enables support for 4-node cluster deployments that can optionally include the new “create a new domain controller” and “create a virtual switch for Hyper-V” wizards. Manufacturers can get the update from the Windows Server OEM redistribution program, and it is on the hotfix server for testing purposes.

Windows package (KB2769588):
http://support.microsoft.com/kb/2769588

4-Node Cluster + DC VM configuration guide:
http://www.microsoft.com/en-us/download/details.aspx?id=38766

The original OEM Appliance OOBE configuration guide:
http://technet.microsoft.com/en-us/library/jj643306


Key Features:

    • OEMs can design 4-node cluster setup experiences using a new dynamic task control that is index based. The OEM-customizable WPF launch-pad application, Initial Configuration Tasks (ICT), supports the new control for configuring X number of tasks based on the index value supplied.
    • IP address discovery now supports as many nodes as you indicate in the registry key.
    • Windows Welcome automation supports multiple cluster nodes.
    • New domain controller setup process allows end-users to create a fresh domain controller in a Hyper-V virtual machine that can be used to support the cluster. After the shared storage is established, we automatically migrate the DC into CSV storage so that the domain controller VM can fail over to all nodes.
    • A revised configure domain and cluster settings wizard enables users to specify their root domain name, static IP address, cluster management name, machine names and password for all of the nodes of the cluster.

Our tools take care of all the mundane tasks while the customer enjoys a cup of coffee. They won’t sit around long enough for that coffee to cool, because within 30 minutes their entire cluster can be configured and running highly-available or continuously-available services.

Key Scenarios:

Cluster in a box (CiB): OEMs use CiB designs to create a “branch in a box,” “business in a box,” or “datacenter in a box.” CiB makes it easy for IT folks to deploy, since there is very little cabling to do and the storage and networking are already connected in the mid-plane. CiB is not required to take advantage of the OEM Appliance OOBE; OEMs can also leverage it for discrete server and storage deployments.

Hyper-V Server appliance: Rapidly deploy 2-, 3- or 4-node clusters for use as a Hyper-V host supporting many VMs.

Storage Server Cluster: Rapidly deploy 2-, 3- or 4-node clusters for use as a storage server that can offer continuously-available NFS, SMB and iSCSI services. Especially useful for saving Hyper-V VMs over SMB 3.0.


Sample Screenshots:

Users can configure all the NICs in all the server nodes from the ICT application: [screenshot]

The Configure Settings section now includes a new Create virtual switches for Hyper-V wizard: [screenshot]

The new domain and cluster settings wizard allows users to create a new domain controller virtual machine or use an existing domain: [screenshot]

The Provision Cluster Storage section includes wizards for provisioning shared storage arrays and SANs using iSCSI, Fibre Channel, SAS, ATA, SSD or directly attached JBODs leveraging Windows Storage Spaces: [screenshot]

After the storage is configured, it will begin to appear in the storage section: [screenshot]

 

The domain controller setup wizard asks just a few questions about the domain controller: [screenshot]

And then it asks for the static IP address the user will assign to the domain controller: [screenshot]

The OEM’s pre-set administrator password is replaced by the end-user: [screenshot]

We display a summary of the selections to the user: [screenshot]

Then we run a massive set of automated tasks, and about 10-15 minutes later you have a new domain controller and all the nodes of the cluster are joined to the domain: [screenshot]

Reboot all nodes: [screenshot]

 

After the customer clicks “Create,” the wizard verifies a few things: [screenshot]

Then the user can validate and create the cluster: [screenshot]

After running cluster validation, users get a summary of the testing results: [screenshot]

If users click the “view validation testing report” link, they get a nice report: [screenshot]

The user verifies the management name for the cluster: [screenshot]

OEMs can optionally include or remove this page to automatically set up the first highly available (HA) file-server role instance in the cluster. [screenshot]

The user decides which volume to put CSV on and host the domain controller VM on: [screenshot]

We display a summary of the selections to the user: [screenshot]

Finally the Cluster validation and setup wizard finishes the job: [screenshot]

 

 

OEM setup guide:
Download the guide here: http://www.microsoft.com/en-us/download/details.aspx?id=38766

Table of contents:
Release Notes
New operations
Known issues
In this release
Four-Node Cluster Setup
Requirements for all configurations
Additional requirements for using a domain controller virtual machine
Setup for Using a Domain Controller Virtual Machine
Customizing the End-User Installation Experience
Appendix A: Registry Entries
Appendix B: Installation File Package
Appendix C: Sample Setup Script
Appendix D: Sample Setup Script for a Domain Controller VM
Appendix E: Sample Unattend.xml for the Domain Controller Virtual Machine
Appendix F: Sample Unattend.xml for Use in the Cluster Nodes


Soon I will be sharing some sample script updates that manufacturers use to customize the OOBE for particular scenarios. I can’t wait to hear your feedback, and would love to hear about a cluster you set up using the OEM Appliance OOBE in about 30 minutes!

Cheers,
Scott M. Johnson
Program Manager
Hybrid Storage Services Team
Windows Server

Using Indications with the Windows Standards-Based Storage Management Service (SMI-S)

$
0
0

Indications are a mechanism used in the Common Information Model (CIM) to provide events from a CIM server to a client application. The storage service can use these events to make sure its cached information is up-to-date with the provider and the arrays it manages.

When the storage service is the single management point, the chances are pretty good that the cache will be reasonably current. But if multiple management points exist, such as more than one SMI-S client, more than one SMI-S provider managing a single array, or out-of-band mechanisms like vendor tools, the state of a managed array can change and the storage service’s view of the discovered objects will become out-of-sync. Since discovery is a time consuming operation, it’s best to not require refreshing the cache (Update-StorageProviderCache cmdlet) more often than necessary.
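For reference, here is a minimal example of that refresh from an elevated PowerShell console (Full discovery is the slow, deep pass the text refers to):

# Re-discover all objects from every registered provider - the expensive operation
Update-StorageProviderCache -DiscoveryLevel Full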

Some examples of state that might change:

  • Volumes can be created or deleted or their size might change
  • Volumes can be unmasked to servers or masked from them
  • New storage pools can be created or deleted, they can be expanded or they can run out of free space
  • The FriendlyName of an object can be changed
  • Various objects might change status, from healthy to unhealthy

Indications can help keep the cache updated, but some assembly is required. Out of the box, the storage service will subscribe to indications and it will listen for the indications, but more steps are necessary to set up the security so that the system can receive them. I will try to make this as painless as possible by providing a PowerShell script to do most of the heavy lifting.

Keep in mind that if you don’t follow these steps, the storage service will still attempt to subscribe to indications and the provider will not be able to deliver them. At best, this produces lots of messages in the provider’s log files; at worst, the provider may not accept indications from other clients and we have even seen providers fail completely.

Configuration for Indications

The storage service implements an HTTPS listener using TCP port 5990, in accordance with DMTF requirements. There is no support for HTTP delivery of indications. The instructions below apply to Windows Server 2012 systems and require PowerShell.

At this time, you must use a certificate with the Common Name set to “msstrgsvc”. You could use a signed certificate if you have the ability to create one. Otherwise, use a self-signed certificate as demonstrated below.

I avoided wrap-around lines by using the PowerShell continuation character ‘`’, but if you encounter syntax errors, make sure the double quotes are normal double quote characters and not typographic ones. The same goes for dashes.

Step 1:  Copy the following script to a file called configsmis.ps1:

# Install the Storage Service feature - safe if it is already installed

echo "Installing the feature - safe it is already installed"

Add-WindowsFeature WindowsStorageManagementService

 

# Create a firewall rule to allow incoming traffic on port 5990

echo "Adding a firewall rule for indications"

New-NetFirewallRule -DisplayName "CIMXML Indications" -Direction Inbound `

 -LocalPort 5990 -Protocol TCP -Enabled True -Action Allow `

 -Description "CIM-XML Indications come in on HTTPS"

 

#Generate a self-signed certificate – the Common Name must be msstrgsvc

echo "creating a self-signed certificate"

$name = new-object -com "X509Enrollment.CX500DistinguishedName.1"

$name.Encode("CN=msstrgsvc", 0)

 

$key = new-object -com "X509Enrollment.CX509PrivateKey.1"

$key.ProviderName = "Microsoft RSA SChannel Cryptographic Provider"

$key.KeySpec = 1

$key.Length = 2048

$key.SecurityDescriptor = "D:PAI(A;;0xd01f01ff;;;SY)(A;;0xd01f01ff;;;BA)(A;;0x80120089;;;NS)"

$key.MachineContext = 1

$key.Create()

 

# create a certificate that is suitable for all purposes

$serverauthoid = new-object -com "X509Enrollment.CObjectId.1"

$serverauthoid.InitializeFromValue("1.3.6.1.4.1.311.10.12.1")

$ekuoids = new-object -com "X509Enrollment.CObjectIds.1"

$ekuoids.add($serverauthoid)

$ekuext = new-object -com "X509Enrollment.CX509ExtensionEnhancedKeyUsage.1"

$ekuext.InitializeEncode($ekuoids)

 

# this script will create certificate that is good for 1000 days – you can make it

# longer if you choose. After that time, the storage service will stop accepting

# indications and may stop handling mutual auth

$cert = new-object -com "X509Enrollment.CX509CertificateRequestCertificate.1"

$cert.InitializeFromPrivateKey(2, $key, "")

$cert.Subject = $name

$cert.Issuer = $cert.Subject

$cert.NotBefore = get-date

$cert.NotAfter = $cert.NotBefore.AddDays(1000)

$cert.X509Extensions.Add($ekuext)

$cert.Encode()

 

$enrollment = new-object -com "X509Enrollment.CX509Enrollment.1"

$enrollment.InitializeFromRequest($cert)

$certdata = $enrollment.CreateRequest(0)

$enrollment.InstallResponse(6, $certdata, 0, "")

 

# Now find the cert we just created (it will be in ‘MY’) and hold on to the thumbprint

cd cert:

$thumbprint=(dir -recurse | where {$_.Subject -match "CN=msstrgsvc*"} | `

  Select-Object -Last 1).thumbprint

$thumbprint

 

# clear out old certificates for this port and add our new one for all IPv4 and IPv6

# addresses. Deletes might fail if there were no old certificates.

# Note that I tried to use Server 2012 PowerShell cmdlets but I couldn’t exactly match

# all the requirements with the existing ones, so some of the commands rely on netsh

 

Set-Alias netsh c:\Windows\System32\netsh.exe

echo "You can ignore delete failures if this is a first time conifugration"

 

netsh http delete sslcert ipport=0.0.0.0:5990

netsh http delete sslcert ipport=[::]:5990

 

netsh http add sslcert ipport=0.0.0.0:5990 certhash=$($thumbprint) `

  appid="{468e21d1-a4cb-4134-8d9f-800c5ff2086f}"

netsh http add sslcert ipport=[::]:5990 certhash=$($thumbprint) `

  appid="{468e21d1-a4cb-4134-8d9f-800c5ff2086f}"

 

# Apply an ACL to the port so that NETWORK SERVICE can bind to it.

# Delete might fail if the ACL was never applied before.

netsh http delete urlacl url=https://*:5990/

netsh http add urlacl url=https://*:5990/ user="NT AUTHORITY\NETWORK SERVICE"

 

# restart the storage service so it can bind to the port properly

# you will need to perform Update-StorageProviderCache if level was > 0

echo "Restarting the service. Remember to run Update-StorageProviderCache since this resets the cache."

Restart-Service MSStrgSvc

 

Step 2: From an administrative PowerShell prompt, execute these commands:

PS C:\Users\Administrator> $policy = Get-ExecutionPolicy

PS C:\Users\Administrator> Set-ExecutionPolicy Unrestricted

PS C:\Users\Administrator> .\configsmis.ps1 (assuming this is the correct directory)

PS C:\Users\Administrator> Set-ExecutionPolicy $policy
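Optionally, you can confirm that the certificate was bound to port 5990 by listing the HTTP.SYS SSL bindings; this is just a sanity check, and the thumbprint shown should match the one printed by the script:

netsh http show sslcert ipport=0.0.0.0:5990

netsh http show sslcert ipport=[::]:5990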

Test indications to port 5990

Before sending indications to port 5990, you may want to verify that the HTTPS listener is working properly. Open https://localhost:5990 in Internet Explorer on the storage server machine; it will show "MSP Storage Project" (choose to ignore the server certificate error).

Note: Click Cancel if prompted for a certificate.

Test that the listener can be reached remotely by pointing your web browser at the storage service machine and port 5990 (e.g., https://<storage-servername>:5990). You should see a certificate error, which you can ignore for this test; this is sufficient to show that you have reached the indication listener.

One-way authentication

The storage service only supports HTTPS connections for indications. When a CIM server attempts to connect to the storage service to deliver an indication, it will challenge the storage service and request a certificate. If the provider validates the server certificate, the storage service's certificate will need to be placed in the trusted store for the CIM server; this varies by vendor implementation and is not described here.

Very often the CIM server does little or no validation of the certificate, and no other action is required.

Mutual Authentication

 Not currently supported. 

Exporting the Storage Service certificate

The above script will place the msstrgsvc certificate in the trusted store. Using the certificates snap-in, it is possible to export this certificate for use by the provider if it performs certificate validation. Make sure to export it in a format the provider can understand, typically .DER or .P7B, and do not include the private key if you are asked.

Registry values that affect Indications

All registry values are located under

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Storage Management

The defaults enable indications with certificate checking, so no changes should be needed.

 

Value: DisableHttpsCommonNameCheck
Description: Turns off Common Name (CN) checking
Default: 0 (the CN does not need to match the provider machine name)

Value: EnableHTTPListenerClientCertificateCheck
Description: Turns on client certificate checking
Default: 1 (enabled; the indication provider must present a valid certificate)

Value: EnableIndications
Description: Turns on indication subscriptions
Default: 1 (enabled)

Value: IndicationDeliveryFailurePolicy
Description: See CIM_IndicationSubscription.OnFatalErrorPolicy
Default: 4 (Remove; the subscription will attempt to set this to Remove, and if that fails, it will retry without setting this property)
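As an illustration, here is how one of these values could be changed from PowerShell, following the same Set-ItemProperty pattern used for the CIMXML logging values later in this post (shown with DisableHttpsCommonNameCheck; restart the service afterwards for the change to take effect):

Set-ItemProperty -Path "HKLM:SOFTWARE\Microsoft\Windows\CurrentVersion\Storage Management" -Name "DisableHttpsCommonNameCheck" -Value 1 -Force

Restart-Service MSStrgSvc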

 

 

 

Storage Service (SMI-S) Tracing and Logging


Microsoft added the ability to manage storage using Storage Management Initiative (SMI-S) providers to Windows Server 2012. Sometimes things don't go quite the way you plan, and some debugging is needed to figure out what is going on.

To give storage vendors and customers a better way to debug problems encountered with SMI-S providers, the Windows Standards-Based Storage Management Service (I'll refer to it as the Storage Service) offers a tracing facility as well as various other logging options.

CIMXML Logging

If you followed my earlier blogs, you will have learned that SMI-S is an implementation of the Common Information Model (CIM) encoded in XML format (CIM-XML) and transported using the Hypertext Transfer Protocol (HTTP).

It is possible to save the requests and responses in a log file so storage vendors can see what requests are issued and what responses were captured by the Storage Service. Most SMI-S providers have similar capabilities and the logs on either side can be compared if problems are encountered.

Normally, this form of logging is not enabled but it can be turned on fairly easily when necessary. It will have some (small) impact on performance, and the log file can get quite large. Note that no security information is recorded in this log so user names and passwords are never captured.

There are a few registry values that control CIMXML logging, all under the key:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Storage Management\CIMXMLLog

Value: LogLevel
Description: Set this to 4 to enable logging; 0 to disable
Default: 0 (disabled)

Value: LogFileName
Description: Name of the log file, including the complete path. The NETWORK SERVICE account must have write permission to the directory and file.
Default: %systemroot%\temp\cimxmllog.txt

Value: MaxLogFileSize
Description: Maximum size of the log file in bytes. When the log exceeds this size, it will be renamed with a .BAK extension and a new file will be opened. At most two files are kept.
Default: 0x4000000

 

TIP: Although the default file name ends in .txt, I usually change the name to have a .xml extension. When the file is opened in a text editor like Notepad++, the editor applies color coding and allows collapsing and expanding sections of the log, as if it were XML (it's pretty close; only the timestamps aren't).

The easiest way to enable CIMXML logging is using PowerShell. Open an administrative PowerShell window and use these cmdlets (no line break in the Set-ItemProperty cmdlet):

Set-ItemProperty -Path "HKLM:SOFTWARE\Microsoft\Windows\CurrentVersion\Storage Management\CIMXMLLog" -Name "LogLevel" -Value "4" -Force

Set-ItemProperty -Path "HKLM:SOFTWARE\Microsoft\Windows\CurrentVersion\Storage Management\CIMXMLLog" -Name "LogFileName" -Value "C:\Temp\cimxml.xml" -Force

You may need to restart the service for the change to take effect.

Restart-Service MsStrgSvc

After restarting the storage service, you will have to do a rediscovery operation (Update-StorageProviderCache) to the appropriate level. To disable:

Set-ItemProperty -Path "HKLM:SOFTWARE\Microsoft\Windows\CurrentVersion\Storage Management\CIMXMLLog" -Name "LogLevel" -Value "0" -Force

Restart-Service MsStrgSvc

Note: Logging is only available for SMI-S providers communicating through CIMXML, which is the most common transport. The Storage Service can also communicate with providers implemented under the Windows Management Instrumentation (WMI) framework. This logging function is not available for SMI-S WMI providers.

Windows Event Log

All provisioning activities are recorded in the StorageManagementService event log channel (which is enabled by default); errors are also recorded here. A typical activity event was recorded when I registered a provider using HTTPS: the user name is logged, so auditing of actions carried out through the Storage Service is possible.
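As a hedged sketch of inspecting this channel from PowerShell, Get-WinEvent can first discover the exact log name (which I am not reproducing from the original post) and then pull recent events:

# Discover the exact channel name (it contains "StorageManagement")
Get-WinEvent -ListLog *StorageManagement*

# Read the 10 most recent events; replace <log-name> with the name returned above
Get-WinEvent -LogName "<log-name>" -MaxEvents 10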

Tracing

The Storage Service uses Event Tracing for Windows (ETW), which allows narrowing down issues without using a debugger or source code. In many code paths, the tracing capability can record unexpected error and warning conditions or informational messages in an Event Tracing Log (.etl) file. Tracing to a file can be enabled using the logman command line.

Real-time tracing is possible using the TraceView utility (part of the Windows Driver Kit) and the appropriate symbol files. I mention this here so you can see that we have built a lot of diagnostic capability into the service. Typically this information is only valuable to developers; moreover, the public symbols do not contain the tracing information, so this is mostly useful if you need to open a support case.

To enable tracing, use the following command (from a command prompt, not PowerShell):

logman create trace "ETWTrace" -ow -o c:\ETWTrace.etl -p {18C2F19C-F79D-408F-837B-F0B23F20A0F7} 0x3f 0x5 -nb 16 16 -bs 1024 -mode Circular -f bincirc -max 4096 -ets

To stop tracing:

logman stop "ETWTrace" -ets

 

 

Server for NFS in Windows Server 2012


In this introductory NFS blog post, let me provide an overview of the Server for NFS feature implementation in Windows Server 2012. Native NFS support in Windows Server started with Windows Server 2003 R2 and has evolved over time with continuous enhancements in functionality, performance, and manageability.

 

Windows Server 2012 takes the support for the Server for NFS feature to a new level. The following are some of the highlights:

 

1.  NFSv4.1 support: Support for the latest NFS version 4.1 is one of the major highlights of Windows Server 2012. All the mandatory aspects of RFC 5661 are implemented. The NFSv4.1 protocol provides enhanced security, performance, and interoperability over NFSv3.

2.  Out-of-the-box performance: By utilizing the new native RPC-XDR transport infrastructure, optimal NFS performance can be expected right out of the box without having to tune any parameters. This feature provides auto-tuned cache and thread pools along with dynamic resource management based on the workload. Failover paths within the NFS server have been tuned for better performance.

3.  Easier deployment and manageability: Improvements are made on many fronts in terms of ease of deployment and manageability. To name a few:

a.       40+ Windows PowerShell cmdlets for easier NFS configuration and management of shares (see the sketch after this list)

b.      Better identity mapping with a local flat-file mapping store and Windows PowerShell cmdlets

c.      Simpler graphical user interface

d.      New WMIv2 provider

e.      RPC port multiplexer (port 2049) for firewall-friendliness

4.   NFSv3 availability improvements: Fast failovers with the new per-physical-disk resource and tuned failover paths within the NFS server. Network Status Monitor (NSM) notifications are sent out after a failover so that clients no longer need to wait for TCP timeouts to reconnect. This essentially means that NFSv3 clients can have fast and transparent failovers with more uptime (reduced downtime).
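As a quick way to explore the cmdlets mentioned in item 3a, you can list them from PowerShell once the role is installed (a minimal sketch, assuming the cmdlets ship in the module named NFS):

# List the NFS cmdlets available in Windows Server 2012
Get-Command -Module NFS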

 

In summary, Windows Server 2012 delivers improvements in terms of ease of deployment, scalability, stability, availability, reliability, security, and interoperability. Shares can be accessed simultaneously over the SMB and NFS protocols. All of this allows you to deploy Windows Server 2012 as a file server or a storage server in demanding cross-platform environments.

 

We will be following up with a number of detailed blog posts on the features listed above, highlighting the Server for NFS feature and interoperability scenarios in Windows Server. So, stay tuned.

 

By the way, you can reach NFS/Windows related blogs using the URL http://aka.ms/nfs

 

Feedback for the Microsoft NFS team: nfsfeed@microsoft.com

Data Classification Toolkit for Windows Server 2012 Now Available


Get the most out of Windows Server 2012 with new features that help you to quickly identify, classify, and protect data in your private cloud!

The Data Classification Toolkit supports new Windows Server 2012 features such as Dynamic Access Control, and maintains backward compatibility with the functionality in the previous version of the toolkit. The toolkit provides support for configuring data compliance on file servers running Windows Server 2012 and Windows Server 2008 R2 SP1 to help automate the file classification process and make file management more efficient in your organization.

Download the Data Classification Toolkit for Windows Server 2012

 

Server for Network File System First Share End-to-End


Introduction

Server for Network File System (NFS) provides a file-sharing solution for enterprises that have a mixed Windows and UNIX environment. Server for NFS enables users to share and migrate files between computers running the Windows Server 2012 operating system using the SMB protocol and UNIX-based computers using the NFS protocol.

Today we will go through the process of provisioning a Server for NFS share on Windows Server 2012. Note that provisioning on a Cluster Shared Volume (CSV) or on ReFS is not supported in this release; this walkthrough is based on NTFS volumes. The scenario we describe involves:

  • Installing the Server for NFS role on the target Windows Server 2012 machine.
  • Provisioning a pre-existing directory c:\share on an NTFS volume with export name "share".

We will cover this process step by step in two different ways: PowerShell cmdlets and the Server Manager UI. The following sections introduce them one by one.

PowerShell cmdlet Setup

Server for NFS is a server role available on Windows Server 2012 operating system.

Step 1: Install the Server for NFS role

From a PowerShell prompt, run the following command to make this server also act as an NFS server:

Add-WindowsFeature FS-NFS-Service

Step 2: Provision a directory for NFS Sharing

The authentication method, user identity mapping method, and permissions of a Server for NFS share need to be configured when provisioning a directory for NFS sharing. The following PowerShell cmdlet provisions a new share with "auth_sys" authentication, unmapped access, and read-only permissions:

New-NfsShare -Name share -Path c:\share -Authentication sys -EnableUnmappedAccess $True -Permission readonly
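To confirm the export was created as intended, you can read it back with the companion cmdlet (the exact properties displayed may vary):

Get-NfsShare -Name share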


The concepts and settings of user mapping and authentication methods are covered in the blog post "NFS Identity Mapping in Windows Server 2012" at http://blogs.technet.com/b/filecab/archive/2012/10/09/nfs-identity-mapping-in-windows-server-2012.aspx.

The concepts and settings of Kerberos authentication are covered in detail in the blog post "How to: NFS Kerberos Configuration with Linux Client" at http://blogs.technet.com/b/filecab/archive/2012/10/09/how-to-nfs-kerberos-configuration-with-linux-client.aspx.

The concepts and settings of permissions are covered in the blog post "How to Perform Client Fencing" at http://blogs.technet.com/b/filecab/archive/2012/10/09/how-to-perform-client-fencing.aspx.

UI based Setup

Step 1: Install the Server for NFS role

In Server Manager, choose Add Roles and Features from the Manage menu (Figure 1).


Figure 1

Figure 2

This action opens the Add Roles and Features Wizard (Figure 2). Click Next to continue.


Figure 3

Select the Role-based or feature-based installation radio button, then click Next to move to the next page.

Figure 4

Next, we select the destination server where we plan to deploy the NFS server (Figure 4). Select the Select a server from the server pool radio button and choose the destination server; in our example we choose the server "nfsserver". Click Next to continue.

Figure 5

In this step, we select the Server for NFS check box in the Roles tree view under File And Storage Services -> File Services (Figure 5).

 

Figure 6

A confirmation pop-up window will appear (Figure 6). Keep its default settings and click the Add Features button.


Figure 7

After that, we come back to the Select server roles step (Figure 5). Click Next to switch to the Select features page (Figure 7). On this page, we skip all feature settings and click Next.


Figure 8

This is the last page of setting up the NFS server role. Click the Install button to perform the installation. This process may take a while; you can close the wizard at any time and the process will continue to run in the background.

Step 2: Provision a directory for Server for NFS Share

 


Figure 9

Go back to the Server Manager dashboard and choose File and Storage Services (Figure 9).


Figure 10

On this page, select the server from Servers, and click Shares (Figure 10).


Figure 11

On this page, click the link "To create a file share, start the New Share Wizard" to start the New Share Wizard (Figure 11).


Figure 12

After the New Share Wizard pops up, select “NFS Share - Quick” and click the Next button.


Figure 13

On this page, we choose the target folder we plan to share (Figure 13). In our example, we select the path c:\share. Click Next to go to the next page.


Figure 14

Given a name for the NFS share, the wizard will generate the remote path of the share (Figure 14). In our case, the share name is "share", and the remote (exported) path is "nfsserver:/share". Click Next to continue.


Figure 15

Now we enter the authentication page (Figure 15). Choose "No server authentication" for the "auth_sys" authentication method, and allow unmapped user access by selecting "Enable unmapped user access" and "Allow unmapped user access by UID/GID".

The concepts and settings of unmapped access and authentication methods are covered in the blog post "NFS Identity Mapping in Windows Server 2012" at http://blogs.technet.com/b/filecab/archive/2012/10/09/nfs-identity-mapping-in-windows-server-2012.aspx.

The concepts and settings of the Kerberos authentication method are covered in detail in the blog post "How to: NFS Kerberos Configuration with Linux Client" at http://blogs.technet.com/b/filecab/archive/2012/10/09/how-to-nfs-kerberos-configuration-with-linux-client.aspx.

Click Next to move on to the next page.

Figure 16

Add share permissions by first clicking the Add button (Figure 16).

 

Figure 17

We assign read permission to all machines by choosing "Read Only" from the share permissions and clicking the Add button (Figure 17). Then click Next twice to reach the confirmation page (Figure 18). The concepts and settings of permissions are covered in the blog post "How to Perform Client Fencing" at http://blogs.technet.com/b/filecab/archive/2012/10/09/how-to-perform-client-fencing.aspx.

Figure 18

Click the Create button and the wizard completes the share creation. After completion, close the wizard.

Feedback

Please send feedback you might have to nfsfeed@microsoft.com

How to: NFS Kerberos Configuration with Linux Client


In this tutorial, we will provision the NFS server provided by the "Server for NFS" role in Windows Server 2012 for use with a Linux-based client using Kerberos security with RPCSEC_GSS.

Background

Traditionally NFS clients and servers use AUTH_SYS security. This essentially allows the clients to send authentication information by specifying the UID/GID of the UNIX user to an NFS Server. Each NFS request has the UID/GID of the UNIX user specified in the incoming request. This method of authentication provides minimal security as the client can spoof the request by specifying the UID/GID of a different user. This method of authentication is also vulnerable to tampering of the NFS request by some third party between the client and server on the network.

RPCSEC_GSS provides a generic mechanism to use multiple security mechanisms with ONCRPC on which NFS requests are built (GSS mechanism is described in RFC 2203). It introduces three levels of security service: None (authentication at the RPC level), Integrity (protects the NFS payload from tampering), and Privacy (encrypts the entire NFS payload which protects the whole content from eavesdropping).

The Server for NFS server role (found within the "File And Storage Services" server role under the path "File And Storage Services/File and iSCSI Services/Server for NFS") provides the NFS server functionality that ships with Windows Server 2012. Server for NFS supports RPCSEC_GSS with Kerberos authentication, including all three levels of RPCSEC_GSS security service: krb5 (for RPCSEC_GSS None), krb5i (for RPCSEC_GSS Integrity), and krb5p (for RPCSEC_GSS Privacy).

Explaining how to set up Kerberos security between a Linux client and a Windows server running Server for NFS can best be accomplished by way of a simple example. In this tutorial we'll consider the following infrastructure scenario:

  • Windows domain called CONTOSO.COM running Active Directory on a domain controller (DC) named contoso-dc.contoso.com.
  • Windows server running Server for NFS with host name: windowsnfsserver.contoso.com
  • Linux client machine running Fedora 16 with host name: linuxclient.contoso.com
  • Linux user on the Fedora 16 client machine: linuxuser
  • Windows user mapped to the Linux user on the Fedora 16 client machine: CONTOSO\linuxclientuser-nfs
  • Kerberos encryption: AES256-CTS-HMAC-SHA1-96

For the purpose of configuration, we assume that the Linux client is running Fedora 16 with kernel version 3.3.1. Windows server is running Windows Server 2012 with server for NFS role installed. DC is running Windows Server 2012 with DNS Manager, Active Directory Administrative Center and “setspn” command line tool installed.

Configuration Steps

In this section, we will go through three steps to enable NFS with Kerberos authentication:

  1. Basics
  2. Set up Linux machine with Kerberos authentication.
  3. Provision NFS share on Windows Server 2012 with Kerberos authentication.

In step 1, we are going to check DNS and make sure that both NFS and RPCGSS are installed on the Linux machine. In step 2, we are going to set up the Linux machine to join the Windows domain. After that, we will configure the service principal names (SPNs) for Kerberos and distribute the SPN-generated key to the Linux machine for authentication.

Step 1: Basics

First, make sure that DNS name resolution is working properly between the DC, the Windows NFS server, and the Linux client. One caveat for the Linux client is that the hostname should be set to its fully qualified domain name (FQDN) in the Windows domain. Run "hostname" on the Linux machine and check whether the host name is correct. (In command boxes, bold text is the command we type in and its result shows in normal style without bold.):

[root@linuclient]# hostname

linuxclient.contoso.com

Details of setting hostname for Fedora 16 machine can be found in Fedora 16 Doc with URL: http://docs.fedoraproject.org/en-US/Fedora/16/html/System_Administrators_Guide/ch-The_sysconfig_Directory.html#s2-sysconfig-network.

Also make sure that the NFS and RPCGSS modules are successfully installed and started on this Linux machine. The following example shows how to use the "yum" package tool to install NFS on the Fedora 16 client machine:

[root@linuxclient]# yum install nfs-utils

and load the Kerberos 5 module by running:

[root@linuxclient]# modprobe rpcsec_gss_krb5

and start the rpcgss service by running:

[root@linuxclient]# rpc.gssd start
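Before proceeding, you can double-check that the Kerberos module actually loaded by listing the loaded kernel modules (a quick sanity check):

[root@linuxclient]# lsmod | grep rpcsec_gss_krb5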

 

 

Step 2: Set up Linux machine with Kerberos authentication

Step 2.1: Add Linux machine to DNS in DC

In this step, we need to log into the DC and add an entry to the DNS Manager as follows:


Figure 1

The IP address of the Linux client can be found by running the "ifconfig" command in a Linux terminal. In our case, we stick to the IPv4 address; the IP address of our Linux client machine is "10.123.180.146".

Reverse DNS mapping can be verified with the command "dig -x 10.123.180.146" from the Linux side, where "10.123.180.146" should be replaced with the actual IP address of your Linux machine. DNS settings may need time to propagate among DNS servers; please wait until the dig command returns the right answer.

Step 2.2: Join Linux machine to the domain

Now we're going to configure the Linux client to get Kerberos tickets from the Windows domain it is going to join (in our case "CONTOSO.COM"). This is done by editing the "/etc/krb5.conf" file. There should be an existing file with some placeholders which can be edited. We're going to add two lines under "[libdefaults]" for "default_realm" and "default_tkt_enctypes". We're also going to add a realm in "[realms]", filling in the "kdc" and "admin_server" fields. Moreover, we are going to add two lines in the "[domain_realm]" section.

The end result should look something like this:

 

[libdefaults]

default_realm = CONTOSO.COM

dns_lookup_realm = false

dns_lookup_kdc = false

ticket_lifetime = 24h

renew_lifetime = 7d

forwardable = true

default_tkt_enctypes = aes256-cts-hmac-sha1-96

 

[realms]

CONTOSO.COM = {

 kdc = contoso-dc.contoso.com

 admin_server = contoso-dc.contoso.com

}

 

[domain_realm]

.contoso.com = CONTOSO.COM

contoso.com = CONTOSO.COM

 

 

Step 2.3: Configure Kerberos service principal name

I'll explain a bit about how authentication works from the NFS standpoint. When a Linux client wants to authenticate with a Windows NFS server via Kerberos, it needs some other "user" (called a "service principal name", or SPN, in Kerberos) to authenticate with. In other words, when an NFS share is mounted, the Linux client tries to authenticate itself with a particular SPN structured as "nfs/FQDN@domain_realm", where "FQDN" is the fully qualified domain name of the NFS server and "domain_realm" is the domain that both the Linux client and the Windows NFS server have already joined.

In our case, the Linux client is going to look for "nfs/windowsnfsserver.contoso.com@CONTOSO.COM". We're going to create this SPN and link it to the existing "machine" account of our NFS server as an alias for that machine account. We run the "setspn" command from a command prompt on the DC to create the SPNs:

setspn -A nfs/windowsnfsserver windowsnfsserver

setspn -A nfs/windowsnfsserver.contoso.com windowsnfsserver
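You can verify that the aliases were registered by listing the SPNs on the machine account (setspn -L is a standard option of the tool):

setspn -L windowsnfsserver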

 

You can refer to the following article to learn more about SPNs and the "setspn" command:

http://msdn.microsoft.com/en-us/library/aa480609.aspx

The user on the Linux client will use the same style (i.e., nfs/FQDN@domain_realm, where "FQDN" is the FQDN of the Linux client itself) as its own principal to authenticate with the DC. In our case, the principal for the Linux client user is "nfs/linuxclient.contoso.com@CONTOSO.COM". We're going to create a user in AD representing this principal, but "/" is not a valid character for AD account names, so we cannot directly create an account that looks like "nfs/FQDN". Instead, we pick a different name for the account and link it to that principal. On the DC, we create a new user account in Active Directory Administrative Center (Figure 2) and set up a link between this account and the Kerberos SPN through the "setspn" tool, as we did for the NFS server SPN.


Figure 2

In our case, both the first name and full name are set to "linuxclientuser-nfs". The user UPN logon is "nfs/linuxclient.contoso.com@CONTOSO.COM". The user SamAccountName is set to contoso\linuxclientuser-nfs. Be sure to choose the correct encryption options, namely "Kerberos AES 256 bit encryption" and "Do not require Kerberos pre-authentication", to make sure AES encryption works for GSS Kerberos. (Figure 3)


Figure 3

Now we're going to set the SPNs on this account by running the following commands at the DC's command prompt:

setspn -A nfs/linuxclient linuxclientuser-nfs

setspn -A nfs/linuxclient.contoso.com linuxclientuser-nfs

 

The Fedora 16 Linux client needs to use this SPN without actually typing in a password for that account when performing the mount operation. This is accomplished with a "keytab" file.

We're going to export a keytab file for this account. On the DC, run the following command from a command prompt:

ktpass -princ nfs/linuxclient.contoso.com@CONTOSO.COM -mapuser linuxclientuser-nfs -pass [ITSPASSWORD] -crypto All -out nfs.keytab

"[ITSPASSWORD]" needs to be replaced by a real password chosen by us. Then copy nfs.keytab to the Linux client machine. On the Linux client machine, we're going to merge this file into the system keytab file. From the directory where the file was copied, we run "ktutil" to merge keytabs. In this interactive tool, run the following commands:

[root@linuxclient]# ktutil

rkt nfs.keytab

wkt /etc/krb5.keytab

q
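To confirm the merge succeeded, you can list the keys now present in the system keytab (klist -k is part of the standard Kerberos utilities):

[root@linuxclient]# klist -k /etc/krb5.keytab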

Great, now the Linux client should be able to get tickets for this account without typing any passwords. Test this out:

kinit -k nfs/linuxclient.contoso.com
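If the command returns silently, the ticket was acquired; you can confirm by listing the ticket cache:

[root@linuxclient]# klist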

 

Note that the Linux client will try three different SPNs (namely host/linuxclient, root/linuxclient, and nfs/linuxclient) to connect to the NFS server. Fedora 16 will go through the keytab file we generated from the DC and look for those SPNs one by one until the first valid SPN is found, so it is enough for us to configure just the "nfs/linuxclient" principal. As a backup plan, you may try to configure the other SPNs if "nfs/linuxclient" does not work.

Step 3: Provision NFS share on Windows Server 2012 with Kerberos authentication and Test NFS Kerberos v5 from Linux

Now we can create a Windows share with Kerberos v5 authentication and mount that share from the Linux client. We can do this by running the PowerShell command:

New-NfsShare -Name share -Path C:\share -Authentication krb5,krb5i,krb5p -EnableAnonymousAccess 0 -EnableUnmappedAccess 0 -Permission readwrite

 

More details about how to set up an NFS share can be found in the blog post "Server for Network File System First Share End-to-End" at http://blogs.technet.com/b/filecab/archive/2012/10/08/server-for-network-file-system-first-share-end-to-end.aspx.

Now we are going to mount that share from the Linux machine through the NFS v4.1 protocol. On the Linux client, run:

[root@linuxclient]# mount -o sec=krb5,vers=4,minorversion=1 windowsnfsserver:/share /mnt/share

 

With the "sec" option, we can choose a different quality of protection (QOP) from "krb5", "krb5i", and "krb5p". With the "vers" option, we can choose to mount the share through the NFS v2/v3 protocol by replacing "vers=4,minorversion=1" with "vers=3" for NFSv3 or "vers=2" for NFSv2. In our case, "/mnt/share" is the mount point we chose for the NFS share; you may modify it to meet your needs.
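To confirm the share mounted with the expected security flavor and protocol version, you can inspect the active NFS mounts (nfsstat -m is part of nfs-utils; the output format varies by distribution):

[root@linuxclient]# nfsstat -m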

After that, a normal Linux client user can get access to the mounted directory by acquiring the Kerberos ticket for that user. In our case, we run kinit as the linuxuser user on the Linux machine:

[linuxuser@linuxclient]# kinit nfs/linuxclient.contoso.com

 

Note that we do not need the keytab to access the mounted directory, so we do not need to specify the "-k" option for kinit. The Linux user we run "kinit" as should have the privilege to read the keytab file "krb5.keytab" under "/etc". All actions performed by linuxuser will then be treated as the domain user linuxclientuser-nfs on the Windows NFS server.

Notes

RPCGSS Kerberos with privacy

RPCGSS Kerberos with privacy does not work with the current release of Fedora 16 because of a bug reported here:

https://bugzilla.redhat.com/show_bug.cgi?id=796992

You can refer to it to find the patch in the Fedora patch database once the bug is fixed.

NFS Kerberos with DES Encryption

A Windows domain uses AES by default. If you choose to use DES encryption, you need to configure the whole domain with DES enabled. Here are two articles telling you how to do that:

http://support.microsoft.com/kb/977321

http://support.microsoft.com/kb/961302/en-us

The Windows machine must also set the local security policy to allow all supported Kerberos security mechanisms. Here is an article talking about how to configure Windows for Kerberos Supported Encryption as well as what encryption types we have for Kerberos:

http://blogs.msdn.com/b/openspecification/archive/2011/05/31/windows-configurations-for-kerberos-supported-encryption-type.aspx

After enabling DES on the domain, machines, and accounts, the passwords on those accounts must be reset to generate DES keys. After that, we can follow the same configuration steps as in the previous section to mount the NFS share with Kerberos, with one exception: we need to add one additional line to the "[libdefaults]" section of "/etc/krb5.conf" to enable "weak crypto" such as DES:

allow_weak_crypto = true

Troubleshooting

DNS look up failure

DNS servers need time to propagate Linux client host names, especially in complicated subnets with multiple layers of domains. We can work around this by specifying the DNS lookup server priority on the Linux client by modifying /etc/resolv.conf:

# Generated by NetworkManager

domain contoso.com

search contoso.com

nameserver <your preferred DNS server IP>

Kerberos does not work properly

The Linux kernel's implementation of rpcsec_gss depends on the user-space daemon rpc.gssd to establish security contexts. If Linux fails to establish a GSS context, this daemon is the first place to look when troubleshooting.

First, make sure that rpc.gssd is running. Run "rpc.gssd -f -vvv":

[root@linuxclient]# rpc.gssd -f -vvv

beginning poll

 

Ideally, the terminal will block and poll for GSS requests. If it exits right after running that command, you may need to reboot Linux. rpc.gssd itself is also a useful source for debugging Kerberos context establishment: it prints out each Kerberos authentication step and its result.

NFS Access Denial

The most common error when mounting an NFS share from Linux is access denial. Unfortunately, the Linux terminal does not provide any additional clue about what caused the failure. Wireshark is a nice tool to decode NFS packets; we can use it to find the error code in the server's reply messages for the compound requests.

Feedback

Please send feedback you might have to nfsfeed@microsoft.com


How to Perform Client Fencing?


In this article, we will talk about how we evaluate client fencing for the Server for NFS role in Windows Server 2012, from finding a client's permissions to calculating the combined permission result. We will also cover the caching strategy we implemented and the lookup mechanism for netgroups. Lookup data can be located in a local cache, Network Information Service (NIS), a Lightweight Directory Access Protocol (LDAP) store, Active Directory Domain Services (AD DS), Active Directory Lightweight Directory Services (AD LDS), or any third-party LDAP store.

Client Fencing Setup

A Server for NFS share is a share that is exported by Windows Server 2012. It contains information like the share name and share path. Here is a sample of an NFS share named "share", retrieved by running the PowerShell cmdlet Get-NfsShare:

Get-NfsShare share

The result may look like the following (Figure 1):

Figure 1

Client fencing is a mechanism that authorizes the access rights of a machine (we call it a host or remote host) or a group of machines for a share. Server for NFS supports two types of groups:

  • clientgroup: A set of client machines grouped together and configured locally on the NFS server; they all have similar permissions.
  • netgroup: A set of client machines grouped together and configured on NIS or LDAP, which makes the configuration accessible across servers and platforms within the same subnet. A netgroup is identified by its fully qualified domain name (FQDN).

Client fencing is evaluated by calculating the set of client permissions for a particular share. A client permission defines the read/write access rights of a client (a host or a group of hosts). It also contains information about whether users of the client can act as the Linux root user, i.e., be assigned administrator privilege (controlled by the AllowRootAccess field), and what the language encoding of the client is. The PowerShell cmdlet Get-NfsSharePermission shows the share permission settings of a share:

Get-NfsSharePermission share

The result may look like the following (Figure 2).

Figure 2

In the example above, the share named "share" is configured with

  • an individual host permission (machine with IP 10.123.180.162) to allow read/write permission and root access,
  • a clientgroup permission (for client group "ClientGroup") to allow read/write permission,
  • a netgroup permission (for netgroup "Group1") to allow only read permission,
  • and denial of access for all other machines, by configuring the "All Machines" permission to deny access.

The Windows NFS server provides several tools to configure client fencing. In this document, we will demonstrate three of them: the NFS management UI, PowerShell cmdlets, and the NFS command-line tools. This can best be accomplished by way of a simple example. In the following sections, we will configure the "read" permission of a share in the following infrastructure scenario:

  • Client (FQDN: nfsclient.contoso.com)
  • Windows server 2012 running Server for NFS (FQDN: nfsserver.contoso.com)
  • Server for NFS share (export path: “NFSSERVER:/share”)

NFS management UI

From Server Manager on the NFS server, choose the share "share" from the list of shares on the "File and Storage Services" tab, then right-click it and select Properties to open the share properties. Then choose Share Permissions and click the Add button. A pop-up window will show up as in Figure 3 below:

Figure 3

We select the Host radio button, type in the host name (or IP address) of the client, and then select the Read Only share permission. Click the Add button to add the permission for this share. Note that configuring a netgroup or clientgroup instead of hosts follows the same UI; just select the "Netgroup" or "Client group" radio button instead of Host to add a permission for a netgroup or client group.

PowerShell cmdlet

Server for NFS implemented a collection of PowerShell cmdlets to manage Server for NFS shares.

  • Grant-NfsSharePermission: Grant a particular permission to a Server for NFS share
  • Get-NfsSharePermission: Retrieve the permissions that have been configured for a given Server for NFS share
  • Revoke-NfsSharePermission: Revoke existing permissions that have been granted to a Server for NFS share

The Server for NFS PowerShell cmdlets are loaded when the Server for NFS role is enabled. Type the following command to grant read permission to the client:

Grant-NfsSharePermission -Name share -Permission readonly -ClientName nfsclient.contoso.com -ClientType Host
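If you later need to undo the grant, Revoke-NfsSharePermission is the counterpart cmdlet (a minimal sketch, assuming the same parameter shape as Grant-NfsSharePermission):

Revoke-NfsSharePermission -Name share -ClientName nfsclient.contoso.com -ClientType Host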

NFS command line tools

The NFS command-line tools use the WMIv2 provider that ships with Windows Server 2012. We can use the nfsshare.exe executable to manage Server for NFS shares. To grant a share permission, run the following command from a command prompt:

nfsshare share -o ro=nfsclient.contoso.com

More options are available to list current share permissions as well as revoke previously assigned permissions. Details of this tool can be found at: http://msdn.microsoft.com/en-us/library/cc753529.

Client Fencing Evaluation

When a host sends a request to a Server for NFS share, Windows Server 2012 will evaluate the client permissions of this host on this share, following the steps shown in Figure 4.

Figure 4

The server will first check whether the individual host permissions of this share include the incoming host. This task parses the Server for NFS share's individual host permission list and looks for a match. If it succeeds, the matching individual host permission is returned.

Otherwise, the server will check the clientgroup permissions configured for this Server for NFS share. Given the IP address of the host, the server looks through the local client group list and finds any client group this host belongs to. Permissions from all matching clientgroups are combined and then returned. Note that we only support IPv4 addresses as input for clientgroup lookup; all IPv6 addresses are treated as non-matching.

If neither an individual host permission nor a clientgroup finds a match, we give it another try by searching all netgroups this host belongs to and comparing them with the netgroup list of this share. For performance reasons, we cache netgroup-per-host data to avoid frequently downloading netgroup data from NIS or LDAP; I will talk about our caching strategy in a later section. Given the list of netgroups this host belongs to, we check the existing setup for any of them on this share and return the permission of the first matching netgroup.

If none of the previous steps finds an eligible permission setting, we return the global permission of this NFS share as the permission of this host.

Caching Strategy

In the Server for NFS implementation, we cache netgroups per host for the purpose of looking up netgroup permissions for a particular remote host machine. The cache is a first-in-first-out (FIFO) queue of remote hosts, and each host entry maintains a list of the netgroups that host belongs to.

When querying netgroup permissions for a given host, we first look through the netgroup-per-host cache. On a cache miss or cache expiration, the cache is updated by querying and downloading data from NIS or LDAP. We keep the netgroup-per-host cache up to date by setting a creation timestamp and an expiration time for each entry. This expiration time can be configured through a PowerShell cmdlet. Here is an example that configures the cache timeout to be 30 seconds:

 

Set-NfsServerConfiguration -NetgroupCacheTimeoutSec 30
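To check the value currently in effect, you can read the server configuration back with the companion cmdlet:

Get-NfsServerConfiguration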

Format Issue for Netgroup Lookup

Before querying for netgroups, the IP address of the host we are looking for is translated into an FQDN via reverse DNS lookup. If no FQDN is found, the lookup mechanism will look for the corresponding entry with the key "*.*" for the wildcard group. We then query NIS or LDAP for netgroups given the FQDN. Depending on the NIS configuration (the domain field in /etc/netgroup), the format of netgroup-by-host data might be one of the following:

 

server.contoso                                 domain1 domain2

server.contoso.*                               domain1 domain2

We try both to see if either matches.

Feedback

Please send feedback you might have to nfsfeed@microsoft.com

iSCSI Target Storage (VDS/VSS) Provider


There are two new features introduced in Windows Server 2012 related to iSCSI: iSCSI Target Server and the storage providers. My previous blog post discussed the iSCSI Target Server in depth; this blog post will focus on the providers.

Introduction

There are two providers included in the storage provider feature:

· iSCSI Target VDS hardware provider: Enables you to manage iSCSI virtual disks using older applications that require a Virtual Disk Service (VDS) hardware provider, such as the Diskraid command.

· iSCSI Target VSS hardware provider: Enables applications that are connected to an iSCSI target, such as a backup application, to create volume shadow copies of data on iSCSI virtual disks.

Note: VDS and VSS utilize different provider models. The iSCSI Target storage providers follow the hardware provider model, hence the naming convention.

Overview

VSS hardware provider

The Volume Shadow Copy Service (VSS) is a framework that allows a volume backup to be performed while applications continue to issue IOs. VSS coordinates among the backup application (the requester), applications such as SQL Server (the writers), and the storage system (the provider) to complete an application-consistent snapshot. The VSS concepts are explained in detail here.

The iSCSI Target VSS hardware provider communicates with the iSCSI Target Server during the VSS snapshot process, thus ensuring the snapshot is application consistent. The diagram below illustrates the relationship between the VSS components.

[Diagram: relationship between the VSS components]

One feature added in Windows Server 2012 for the iSCSI Target VSS hardware provider is support for auto-recovery. Auto-recovery allows the VSS writer to modify the snapshot in the post-snapshot phase. This requires the provider to support write operations in the post-snapshot window, before making the snapshot read-only. This feature is also required by the Hyper-V host writer; with auto-recovery support, you can now run Hyper-V host backups against iSCSI Target storage.

VDS hardware provider

The Virtual Disk Service (VDS) manages a wide range of storage configurations. In the context of iSCSI Target, it allows storage management applications to manage the iSCSI Target Server. For the service architecture, this page provides more details, along with the roles of the providers. The diagram below illustrates the relationships between the VDS components.

[Diagram: relationships between the VDS components]

As you may have read in the VDS overview for Windows Server 2012, the VDS service is superseded by the Storage Management APIs; the VDS hardware provider is included for backward compatibility. If you have storage management applications that require VDS, you will be able to continue to run them. For new application development, however, it is recommended to use the Windows Storage Management APIs. Note that iSCSI Target in Server 2012 only supports VDS.

To use the VDS hardware provider to manage the iSCSI Target Server, you must install the VDS provider on the storage management server. You also need to configure the provider so that it knows which iSCSI Target Server to manage. To do so, you can use the PowerShell commands below to add the server:

PS C:\> $PrvdSubsystemPath = New-Object System.Management.ManagementPath("root\wmi:WT_iSCSIStorageSubsystem")

PS C:\> $PrvdSubsystemClass = New-Object System.Management.ManagementClass($PrvdSubsystemPath)

PS C:\> $PrvdSubsystemClass.AddStorageSubsystem("<remote-machine>")

Installation

The iSCSI Target storage providers are typically installed on the application server, as illustrated by the diagram below:

[Diagram: storage providers installed on the application server]

Windows Server 2012

To install the storage providers on Windows Server 2012 using Server Manager, run the Add Roles and Features wizard and then select iSCSI Target Storage Provider (VDS/VSS hardware provider).

[Screenshot: Add Roles and Features wizard - iSCSI Target Storage Provider (VDS/VSS hardware provider)]

Alternatively, you can enable it with a cmdlet:

PS C:\> Add-WindowsFeature iSCSITarget-VSS-VDS

Down Level Operating System support

As you can see from the diagram above, the iSCSI storage providers are typically installed on a different server from the one running the iSCSI Target Server. If the iSCSI Target Server is running on Windows Server 2012 and the application server is running a previously released Windows operating system, you will need to download and install the down-level storage providers. The download package is available on the Download Center at http://www.microsoft.com/en-us/download/details.aspx?id=34759

There are three files to choose from:

· 64 bit package that runs on Windows Server 2008 (Windows6.0-KB2652137-x64.msu)

· 32 bit package that runs on Windows Server 2008 (Windows6.0-KB2652137-x86.msu)

· 64 bit package that runs on Windows Server 2008 R2 (Windows6.1-KB2652137-x64.msu)

If you have an application server running Windows Server 2008 or Windows Server 2008 R2 connected to a Server 2012 iSCSI Target, you will need to download the appropriate package, install it on the application server (you can simply follow the installation wizard), and configure the credentials as described in the Credential configuration section.

Storage Provider support matrix

To complete the picture of storage provider and iSCSI Target version support, see the table below:

[Table image: storage provider and iSCSI Target version support matrix]

iSCSI Target 3.2 <-> installed on Windows Storage Server 2008

iSCSI Target 3.3 <-> installed on Windows Storage Server 2008 R2 and Windows Server 2008 R2

iSCSI Target (build-in) <-> included with Windows Server 2012

Note:

1: Storage provider 3.3 on Server 2012 can manage iSCSI Target 3.2. This has been tested.

2: The Windows Server 2012 down-level storage provider can be downloaded from: http://www.microsoft.com/en-us/download/details.aspx?id=34759

Credential configuration

If you have used the storage providers prior to the Windows Server 2012 release, there are a few differences in this release to consider:

1. The interface between the storage providers and the iSCSI Target service has changed from private DCOM to WMI; therefore, the storage providers shipped previously cannot connect to the iSCSI Target Server in Server 2012. See the support matrix to check the version of the storage provider you may need.

2. The storage providers require credential configuration after being enabled.

The storage providers must be configured to run with the administrative credentials of the iSCSI Target Server computer; otherwise, you will run into an "Unexpected Provider" error (0x8004230F) when taking any snapshot. Along with the error, you will also find the following error message in the Windows event log:

Volume Shadow Copy Service error: Error creating the Shadow Copy Provider COM class with CLSID {463948d2-035d-4d1d-9bfc-473fece07dab} [0x80070005, Access is denied.].

Operation:

   Creating instance of hardware provider

   Obtain a callable interface for this provider

   List interfaces for all providers supporting this context

   Query Shadow Copies

Context:

   Provider ID: {3f900f90-00e9-440e-873a-96ca5eb079e5}

   Provider ID: {3f900f90-00e9-440e-873a-96ca5eb079e5}

   Class ID: {463948d2-035d-4d1d-9bfc-473fece07dab}

   Snapshot Context: -1

   Snapshot Context: -1

   Execution Context: Coordinator

What credential to use?

For the storage providers to communicate remotely with the iSCSI Target service, they require local Administrator permission on the iSCSI Target Server computer. This may also be a domain user added to the local Administrators group on the iSCSI Target Server.

To find the accounts with local Administrator permission, you can open Computer Management and click on the Administrators group:

[Screenshot: Computer Management - Administrators group]

For non-domain-joined servers, use a mirrored local account, i.e., create a local account with the same user name and password on both the iSCSI Target Server and the application server. Make sure the account is in the Administrators group on both servers.

UI Configuration (Dcom Config)

You can also use DCOM Config to configure the credentials as follows:

1. Open Component Services, open Computers, open My Computer and then open DCOM Config.

2. Locate 'WTVdsProv' and configure credentials as appropriate

3. Locate 'WTSnapshotProvider' and configure credentials as appropriate

Take the WTSnapshotProvider for example:

1. Locate the provider under DCOM Config container

[Screenshot: WTSnapshotProvider under the DCOM Config container]

2. Right click on the provider, and click Properties

[Screenshot: provider context menu with Properties selected]

3. Click the Identity tab, select the This user option, and then specify an account that has local Administrator permission on the iSCSI Target Server

[Screenshot: Identity tab with the This user option selected]

Cmdlet Configuration

As an alternative, you can also use PowerShell to configure the credentials:

PS C:\> $PsCred = Get-Credential

PS C:\> $PrvdIdentityPath = New-Object System.Management.ManagementPath("root\wmi:WT_iSCSIStorageProviderIdentity")

PS C:\> $PrvdIdentityClass = New-Object System.Management.ManagementClass($PrvdIdentityPath)

PS C:\> $PrvdIdentityClass.SetProviderIdentity("{88155B26-CE61-42FB-AF31-E024897ADEBF}",$PsCred.UserName,$PsCred.GetNetworkCredential().Password)

PS C:\> $PrvdIdentityClass.SetProviderIdentity("{9D884A48-0FB0-4833-AB70-A19405D58616}",$PsCred.UserName,$PsCred.GetNetworkCredential().Password)

Credential verification

After you have configured the credentials, you can verify them by taking a snapshot using diskshadow.exe.

Open a command-line prompt and type diskshadow.

At the diskshadow prompt, type:

Add volume c:

Create

If the credential is not configured correctly, it will show the following error:

[Screenshot: diskshadow error output]

Note: Remember to change the credential if the password has changed on the iSCSI Target Server for the specified account.

Conclusion

I hope this helps you get started using iSCSI Target in Windows Server 2012, or makes for a smoother transition from the previous user experience. If you have questions not covered here, please raise them in the comments so I can address them in upcoming posts.

VMware Setup for Server for NFS Cluster


In this tutorial, I’ll discuss how to setup VMware vSphere virtual machines with Windows server 2012 Server for NFS cluster as backend storage.

Background

VMware has supported running virtual machines hosted by its ESX server over the NFS v3 protocol. In this configuration, virtual machines running on VMware ESX have their VMDKs (virtual disks) located on a remote file share that is exported over the NFS protocol (Figure 1).

Figure 1

In this tutorial, I will cover two concepts: Windows server 2012 Server for NFS cluster setup and configuring VMware vSphere with Server for NFS cluster as storage. Explaining how to configure vSphere virtual machines with Server for NFS cluster can best be accomplished by way of a simple example. In this tutorial we'll consider the following infrastructure scenario:

  • Windows server cluster nodes running Server for NFS: zzqclustertest1, zzqclustertest2
  • Windows server storage server running iSCSI target server: zzqclusterstor
  • Mount point for iSCSI server: E:\
  • Windows cluster role network name: nfsclusterap
  • Name of Server for NFS share on the cluster: /share
  • vSphere vCenter IP address: 172.30.182.110

A volume is created over iSCSI target server and mounted on vSphere VM for the purpose of cluster-shared storage. Information about iSCSI target server and how to manage it can be found here: http://technet.microsoft.com/en-us/library/cc726015. Setting up vSphere vCenter is out of scope of this document. Please refer following article for best practices for running vSphere on NFS storage: http://www.vmware.com/resources/techresources/10096

For the purpose of configuration, we assume we have already setup the vSphere server with server version 5.0. Windows server cluster node is running Windows server 2012 with Server for NFS role installed. Windows server storage server is running Windows server 2012 with iSCSI target server role installed. Two cluster nodes construct a failover cluster with share storage provided by iSCSI target server.

Server for NFS Cluster Setup

Creating a cluster itself is beyond the scope of this tutorial. Please refer to blog post Creating a Windows Server 2012 Failover Cluster.

Let’s assume we already established the cluster (named nfsclusterap in our case), now pick one of cluster node (in our case zzqclustertest1) and open Failover Cluster Manager. Then connect to our cluster (Figure 2)


Figure 2

If this cluster is found, failover cluster manager will show the details of this cluster. We can use both UI and PowerShell Cmdlet to create a Server for NFS share for this cluster. Note that the share should be created under “Shares” folder of shared storage mount point of the cluster (In our case, E:\Shares\). To allow vSphere mount this share, make sure that following options have been chosen:

  • Allow root access
  • Enable unmapped access
  • Set permission of machine running Server for NFS role to readwrite

Details of how to create a share on Windows server 2012 Server for NFS cluster can be found from blog post “Server for Network File System First Share End-to-End” at http://blogs.technet.com/b/filecab/archive/2012/10/08/server-for-network-file-system-first-share-end-to-end.aspx. New share wizard UI must be triggered from clicking “Add File Share” button in roles view. (Figure 3)

Figure 3

We also need to add the NFS client to the outgoing firewall exceptions on the ESX server to allow outbound NFS connections.

VMware vSphere Configuration

First go to your vSphere vCenter (in our case: https://172.30.182.110/) and download the “vSphere Client” (Figure 4). 


Figure 4

Once you download and install it, open the application and connect to the vCenter through its IP address or host name (in our case: vSphereHost). User/password authentication is required (Figure 5); use the ESX server credentials.


Figure 5

Now we are going to add the Server for NFS share set up in the previous step as a datastore for vSphere. Select the "Configuration" tab from the horizontal bar and click "Storage" on the vertical bar (Figure 6).


Figure 6

Now let's create the datastore. Click the "Add Storage…" button in the upper-right corner of the UI and select Network File System as the storage type (Figure 7).


Figure 7

After that, we enter the name of our Server for NFS cluster and the share we created before (in our case, the server cluster name is "nfsclusterap" and the share is "/share"). Then we pick a name for the datastore and create it (Figure 8).


Figure 8

Now we can create a virtual machine with the vSphere "Create New Virtual Machine" wizard. More details can be found in the vSphere manual in the vSphere Documentation Center:

http://pubs.vmware.com/vsphere-50/index.jsp?topic=%2Fcom.vmware.vsphere.vm_admin.doc_50%2FGUID-55238059-912E-411F-A0E9-A7A536972A91.html

 

Feedback

 

Please send feedback you might have to nfsfeed@microsoft.com

 

NFS Identity Mapping in Windows Server 2012

This document describes the selection, configuration, and usage of the user and group identity mapping options available to Client for NFS in selected versions of Windows 8, and to Server for NFS and Client for NFS in selected versions of Windows Server 2012. It is intended to assist a systems administrator when installing and configuring the NFS components within Windows 8 and Windows Server 2012. It describes the available mapping mechanisms and provides guidance on which of those mechanisms to use in common scenarios. Here's a summary of the items in this post:
  • ID & ID Mapping
  • ID Mapping with RPCSEC_GSS
  • ID Mapping Mechanisms
  • Setting a Mapping for an Identity
  • How to Select a Mapping Method
  • Troubleshooting

Identity and Identity Mapping

The methods Windows and NFS use to represent user and group identities are different and are not necessarily directly interchangeable. Identity Mapping is the process of converting from an NFS identity representation to a Windows representation and vice-versa. The following sections briefly describe some representations of identity and then how they are used by the NFS authentication methods.

Identity Representations

Windows

Windows uses a Security Identifier (SID) to represent an account. This applies to both user and group accounts. A SID can be converted to an account name and vice-versa directly.

NFS

The representation used by NFS can take many forms depending upon the authentication method and the protocol version.

The most widely used method is to represent an identity using a 32-bit unsigned integer, for both users (UID) and groups (GID). This method can be used by both NFS V3 and NFS V4.1. Client for NFS and Server for NFS can convert between these identities and a Windows account using a mapping store which is populated with suitable mapping information.

For NFS version V4.1, user and group identities can take the form of “account@dns_domain” or “numeric_id”, where the numeric id is the string form of a UID or GID 32-bit unsigned integer expressed as a decimal number (see RFC 5661, http://tools.ietf.org/html/rfc5661, section 5.9). For the “account@dns_domain” format, Server for NFS can use this form of identity directly without any mapping. For the “numeric_id” format, Server for NFS uses the configured mapping store to convert this to a Windows account. Client for NFS does not support NFS V4.1 in Windows 8 or Windows Server 2012.

For both NFS V3 and V4.1, identities can also be encoded in a Kerberos ticket. Although the accessing account can be accurately represented and retrieved from the ticket, this form of identity is only used for authentication of requests and not as a general representation of an identity. So for example, for a READDIR request, the identity of the account making the request may well be encoded as part of the Kerberos mechanism to authenticate the request. However, the ownership of the objects in the reply will make use of UID, GID or “account@dns_domain” depending on the protocol and mapping information.

 

Authentication Methods

NFS protocols allow for several different authentication mechanisms. The most commonly encountered, and those supported by the Windows Server 2012 Server for NFS, are

  • AUTH_NONE
  • AUTH_SYS (also known as AUTH_UNIX)
  • RPCSEC_GSS

The AUTH_NONE mechanism is an anonymous method of authentication and has no means of identifying either user or group. Server for NFS will treat all accesses using AUTH_NONE as anonymous access attempts which may or may not succeed depending upon whether the export is configured to allow them.

The AUTH_SYS mechanism is the most commonly used method and involves identifying both the user and the group by means of 32-bit unsigned integers known as the UID and GID respectively. Special meaning is attached to a UID value of ‘0’ (zero), which is used to indicate the “root” superuser.

The RPCSEC_GSS mechanism is a Kerberos V5 based protocol which uses Kerberos credentials to identify the user. It provides several levels of protection to the connection between an NFS client and an NFS server, namely

  • RPC_GSS_SVC_NONE, where the request identifies the user, and sessions between the client and server are mutually authenticated. This identification is not based on UIDs and GIDs as provided by AUTH_SYS.
  • RPC_GSS_SVC_INTEGRITY, where not only are the client and server mutually authenticated, but the messages also have their integrity validated.
  • RPC_GSS_SVC_PRIVACY, where not only are the client and server mutually authenticated, but message integrity is enforced and the message payloads are encrypted.

This paper is only concerned with identity and identity mapping. For further details on how to use RPCSEC_GSS with the Windows Server 2012 Server for NFS see “NFS Kerberos Configuration with Linux Client”.

Identity Mapping with RPCSEC_GSS

When using RPCSEC_GSS to provide authentication, the Windows form of the identity of the user making the request can be obtained directly from the information in the request itself, and for some NFS operations that is sufficient. However, for NFS V3 based accesses, the NFS protocol itself, along with the companion NLM and NSM protocols, makes explicit use of UID and GID values in requests (SETATTR), in the explicit body of replies (e.g. READDIR), and in the post-op attributes in replies to many requests. For example, when processing a GETATTR request, the reply contains the UID and GID for the object, so the Windows Server for NFS needs to take the Windows style identity associated with the file in the file system and convert it to a UID/GID pair to send back to the client. Similarly, for NFS V4.1 based access, the protocol uses “account@dns_domain” or “numeric_id” strings as account identifiers. So although the use of RPCSEC_GSS provides for better security on the connection between the NFS client and server, it does not replace the need for identity mapping.

Identity Mapping Mechanisms

In order to use the UID and GID values carried in NFS requests, they need to be converted, or mapped, to identities that the underlying Windows platform can use. Microsoft Server for NFS and Client for NFS provide several options to map identities from NFS requests, each of which has its own advantages and disadvantages

  • Active Directory

Best used where established procedures are in use to manage user accounts, where there are many machines using a common set of users and groups, and/or in configurations where common files are shared using both the NFS and SMB protocols (SMB is the standard Windows file sharing protocol)

  • Active Directory Lightweight Directory Services (AD LDS) or other RFC 2307 compliant identity store

Best used where centralized management of machine local accounts is being used and identity mapping for multiple non-domain joined machines is required.

  • Username Mapping Protocol store (MS-UNMP)

Legacy (deprecated) mapping solution available as a feature within Windows Server 2003 R2 and the Services for UNIX product. The mapping server itself is no longer supplied but Client for NFS and Server for NFS can be configured to use an existing mapping server. Information on the configuration and use of UNMP based mapping solutions can be found in the Microsoft TechNet article “User Name Mapping and Services for UNIX NFS Support” at http://technet.microsoft.com/en-us/library/bb463218.aspx.

  • Local passwd and group files

Best used for standalone Client for NFS or standalone Server for NFS configurations where file sharing is performed using both NFS and SMB, and Windows domains are not readily available. Can be used for domain joined machines if required.

  • Unmapped UNIX User Access (UUUA) (applies to Server for NFS using AUTH_SYS only).

Best used for standalone Server for NFS configurations where there are no files being shared by both NFS and SMB and where little to no management of Windows identities is required. Can also be used for domain joined servers if files made available via an NFS export are only going to be accessed by Server for NFS.

There are a number of tools which are involved in managing this mapping information. They include

  • Server Manager UI.
  • Services for Network File System (NFS).
  • Server for NFS PowerShell cmdlets.
  • Command line utility nfsadmin (superseded by Server for NFS PowerShell cmdlets).
  • Standard Windows domain account management and scripting tools.

PowerShell Cmdlets

As part of Windows Server 2012, the Server for NFS sub-role has introduced a collection of cmdlets, several of which are used to manage the identity mapping information used by NFS. The cmdlets used to manage identity mapping include

  • Set-NfsMappingStore
  • Get-NfsMappingStore
  • Install-NfsMappingStore
  • Test-NfsMappingStore
  • Set-NfsMappedIdentity
  • Get-NfsMappedIdentity
  • New-NfsMappedIdentity
  • Remove-NfsMappedIdentity
  • Resolve-NfsMappedIdentity
  • Test-NfsMappedIdentity
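
Each of these cmdlets ships with built-in help. For instance, to review the usage examples for Set-NfsMappedIdentity before changing a production mapping store:

Get-Help Set-NfsMappedIdentity -Examples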

 

Active Directory

This mechanism is only available to domain joined machines, both clients and servers. It provides for common identities across a large number of machines, and allows files to be accessed by both the NFS and SMB file sharing protocols.

The mechanism makes use of the Active Directory schema updates to include the “uidNumber” and “gidNumber” attributes to user and group accounts for domains running at a functional level of Windows Server 2003 R2 or higher.

Since these are standard fields in the account records any standard management tools and scripting methods can be used to manipulate these fields.

The schema for account records in domains running at a functional level of Windows Server 2003 R2 or higher includes the fields “uidNumber” and “gidNumber” for user accounts and “gidNumber” for group accounts. If these fields are defined, then the NFS client and server will automatically use the values as the UID and GID fields in NFS request operations and map those values to the associated Windows user and group accounts. Note that the assigned uidNumber must be unique for each user account and, similarly, the assigned gidNumber must be unique across all group accounts; multiple user records can, however, share the same gidNumber value. If the PowerShell cmdlets are used to set mapping information for an account then the cmdlets will ensure there are no duplicate UIDs or GIDs. If other methods are used then the administrator should take care to ensure there is no improper duplication.
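
Because these are ordinary directory attributes, a mapping can also be set with the standard Active Directory PowerShell module. This is a sketch only (the account name and ID values are illustrative, and the NFS cmdlets described later remain the recommended route because they check for duplicate IDs):

Set-ADUser -Identity nfsuser1 -Replace @{uidNumber=5001; gidNumber=4000}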

Managing the mapping information will require domain level administrator privileges, namely those required to manage account attributes.

To set the machine to use domain based mapping, a PowerShell command can be used

 

Set-NfsMappingStore -EnableADLookup $true

 

or the Server Manager can be used. This starts the “Services for Network File System” window, and right-clicking on the “Services for NFS” node activates the properties dialog.


To enable Active Directory based mapping, activate the Active Directory mapping source.

 

Active Directory Lightweight Directory Services (AD LDS) or Other RFC 2307 Compliant Identity Store

This mechanism can be used with both domain and non-domain joined machines where the source of identity maps is stored in an RFC 2307 compliant store accessed via LDAP requests. This provides for a method of managing user identities and mapping information where access to files is going to be shared by non-NFS applications or file sharing methods, and either centralized management is required or preferred and there are too many machines to manage individually using local passwd and group files.

A typical configuration would be where a number of Windows machines running Client for NFS and/or Server for NFS are arranged as a group of machines which share a set of common non-domain based identities. Using AD LDS these can be managed as a single set of identities, much like Active Directory, but without the need for a domain. This makes it simpler to manage identities than using local passwd and group files for any changes to identities and their mappings, since there is just a single location to manage rather than multiple sets of passwd and group files to maintain.

The mechanism makes use of the RFC2307 schema for accounts where the uidNumber and gidNumber attributes are used to manage the user and group identity maps respectively.

Managing the mapping information will require the privileges required to manage user and group accounts and their attributes. The specific privileges required will depend on the solution used.

Note that although AD LDS can be used in a domain environment, there is little advantage in doing so, and using the normal Active Directory mapping mechanism will probably prove easier to manage. However, using an AD LDS mapping store for domain joined machines can be useful in configurations where the central domain cannot be used as a mapping store for some reason; for example, where only a limited number of domain accounts require a mapping to be set, and modifying the domain accounts directly would require elevated permissions in the central domain (i.e. the administrator managing the NFS identity mappings is not the same as the domain administrator). Also, the account name cannot have a “domain\” prefix, and so the name must make sense on the machine using the mapping. In practical terms this means that a non-domain joined machine must have a matching machine local account and a domain joined machine must have a matching domain account.

To install Active Directory Lightweight Directory Services, a PowerShell command can be used

 

Install-NfsMappingStore -InstanceName NfsAdLdsInstance

 

This command will install and configure an AD LDS instance for use by NFS. The instance can be located on any Windows Server 2012 machine and need not be co-located with any Windows NFS client or server. When the command completes successfully, it will display output similar to the following

 

Successfully created ADLDS instance named NfsAdLdsInstance on server NFS-SERVER, the instance is running on port 389 and the partition is CN=nfs,DC=nfs.

 

To set the NFS client or server to use AD LDS based mapping, the following PowerShell command can be used

 

Set-NfsMappingStore -EnableLdapLookup $true -LdapNamingContext "CN=nfs,DC=nfs" -LdapServer localhost:389

 

Note that the “LdapNamingContext” should be set to the partition value returned when the AD LDS instance was created, and the “LdapServer” should be set to the machine name and port to be used to contact the AD LDS instance.

Alternatively the Server Manager can be used to set the NFS client or server to use AD LDS based mapping.

 

Username Mapping Protocol Server

This is a deprecated method of obtaining mapping information but may still be in use in existing environments. The UNMP Server was a feature in the separately installed Services for UNIX product, and in the Services for NFS feature of Windows Server 2003 R2 release.

The UNMP server provided a source of UID/GID to Windows account mappings which could be used by domain joined machines running Client for NFS and/or Server for NFS. This feature has been largely superseded by the use of Active Directory which provides for better management and scaling.

 

Local PASSWD and GROUP Files

In simple configurations where mapping between UID/GID and Windows accounts is still required, the mapping information can be provided in UNIX style passwd and group files. These have the same fields and format as conventional UNIX passwd and group files, with the exception that the account name can optionally use the standard Windows account name form <domain>\<name>. The “<domain>\” portion is optional; if it is absent, the “name” portion indicates a domain account for domain joined machines, or a machine local account for non-domain joined machines. If the machine is domain joined and the account to be mapped is a machine local account, the “domain” portion should be set to either “localhost” or to the name of the machine.

The use of local passwd and group files is enabled by placing both files in the %SystemRoot%\system32\drivers\etc directory. That is, the local files mapping feature is enabled if both of the following files exist (a sample of each is sketched after the list)

  • %SystemRoot%\system32\drivers\etc\passwd
  • %SystemRoot%\system32\drivers\etc\group
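
As an illustration only (using the example accounts and ID values that appear later in this post, and remembering that only the name, uid and gid fields are read), a passwd file might contain entries such as

root:x:0:0:root account:/:
nfsuser1:x:5001:4000:nfs user:/:

with matching group file entries such as

rootgroup:x:0:root
nfsusers:x:4000:nfsuser1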

 

This mapping method creates an independent mapping store for each machine and is typically used for

  • domain joined machines where a limited number of machines are making use of NFS
  • standalone machines where a simple identity mapping mechanism is preferred, for example a single workstation accessing existing UNIX NFS servers
  • a set of UNIX workstations accessing a standalone Windows Server for NFS.

Managing the mapping information will require the privileges needed to create and modify the passwd and group files in the %SystemRoot%\system32\drivers\etc directory. By default the members of the “BUILTIN\Administrators” group have sufficient privileges. It is recommended that these privilege requirements are not changed without a clear understanding of the consequences.

Note that by default, files created in the %SystemRoot%\system32\drivers\etc directory will be readable by all members of the computer’s “BUILTIN\Users” group. If this is considered too great a degree of information disclosure, then access can be restricted by adding read access permissions for the NFS service virtual accounts “NT Service\NfsService” and “NT Service\NfsClnt” to both the passwd and group files, and then removing access permissions for the “BUILTIN\Users” group. This can be achieved as follows

From a CMD or PowerShell prompt

 

icacls group /inheritance:d /grant "NT SERVICE\NfsService:RX" /grant "NT SERVICE\NfsClnt:RX"

 

icacls group /remove BUILTIN\Users

 

icacls passwd /inheritance:d /grant "NT SERVICE\NfsService:RX" /grant "NT SERVICE\NfsClnt:RX"

 

icacls passwd /remove BUILTIN\Users

 

Or, via the Properties dialog Security tab for both the passwd and group files.

To verify that the server is using file based mapping, the “Event Viewer” utility can be used to examine the ServicesForNfs-Server\IdentityMapping channel, where the server writes messages indicating the status of the mapping files.
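
The same channel can also be read from PowerShell. This is a sketch that assumes the channel’s full log name as displayed in Event Viewer; confirm the exact name first with Get-WinEvent -ListLog *Nfs*:

Get-WinEvent -LogName "Microsoft-Windows-ServicesForNFS-Server/IdentityMapping"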

 

Unmapped UNIX User Access (UUUA)

The UUUA identity mapping mechanism is only available to Server for NFS and can only be used when the AUTH_SYS authentication method is being used.

In situations where there is no requirement to share files accessed by NFS with any other sharing mechanism (e.g. SMB) or local application, then Server for NFS can be configured to directly use the supplied UID/GID identifiers and attach them to files in such a way that the identity information is preserved and is available to an NFS client, but no mapping to any Windows account is required. This is particularly useful for turn-key installations where very little administration is required to set up Server for NFS.

Server for NFS does this by recording the UNIX style UID, GID and mode information directly in the Windows file system security fields[1]. However, a consequence of this is that access to those files by other Windows applications can be problematic, since the security information does not identify any Windows account and so standard Windows access mechanisms are not available.

As the methods used by Server for NFS to capture the UID, GID and mode information result in the generation of a valid security descriptor, there should be no impact on backup applications, provided those applications just copy the data and do not try to interpret or manipulate it in any way.

This method is typically used for standalone Windows Server for NFS installations where little to no configuration is required, such as a turnkey Windows Server 2012 Server for NFS where the only administration required is the creation of the NFS exports. It should be considered a convenience mechanism only as it provides no security (a consequence of the AUTH_SYS authentication method) and is effectively equivalent to access by an anonymous Windows user. The behavior is similar to many standard UNIX NFS server implementations.

No privileges are required as there are no mappings to administer.

Setting a Mapping for an Identity

Active Directory and Active Directory Lightweight Directory Services

As account objects are standard Windows Active Directory objects, any of the standard tools or scripting methods can be used. The account attributes used are “uidNumber” and “gidNumber” for user accounts and “gidNumber” for group accounts.

These fields can be manipulated with several utilities shipped with Windows Server 2012.

The recommended method is to use the Server for NFS PowerShell cmdlets. These cmdlets can be used to query mappings for one or more existing accounts, modify mappings, test mappings and even create new accounts with mappings as a single operation. One of the advantages of using the PowerShell cmdlets to set mapping information is that they help ensure there are no duplicate UIDs or GIDs. Note that the following examples assume that an Active Directory or AD LDS mapping store has already been configured.

To query the mapping for an existing account

 

Get-NfsMappedIdentity -AccountName root -AccountType User

 

Or to bulk query all the group accounts

 

Get-NfsMappedIdentity -AccountType Group

 

A bulk query for all the user accounts is performed in a similar manner, except that the AccountType is set to User. Simple wildcarding of account names can also be used; for example, the following will return all the user accounts with names beginning with the prefix “nfs”.

 

Get-NfsMappedIdentity -AccountType User -AccountName nfs*

 

To set a mapping for an existing user account

 

Set-NfsMappedIdentity -UserName nfsuser14 -UserIdentifier 5014 -GroupIdentifier 4000

 

Or to set the mapping for an existing group account

 

Set-NfsMappedIdentity -GroupName specgroup -GroupIdentifier 500

 

 

To create a set of new accounts with their AUTH_SYS UID/GID mappings

 

$secureString = ConvertTo-SecureString "password" -AsPlainText -Force

New-NfsMappedIdentity -GroupIdentifier 0 -GroupName rootgroup

New-NfsMappedIdentity -GroupIdentifier 4000 -GroupName nfsusers

New-NfsMappedIdentity -GroupIdentifier 0 -UserName root -UserIdentifier 0 -Password $secureString

New-NfsMappedIdentity -GroupIdentifier 4000 -UserName nfsuser1 -UserIdentifier 5001 -Password $secureString

New-NfsMappedIdentity -GroupIdentifier 4000 -UserName nfsuser2 -UserIdentifier 5002 -Password $secureString

New-NfsMappedIdentity -GroupIdentifier 4000 -UserName nfsuser3 -UserIdentifier 5003 -Password $secureString

New-NfsMappedIdentity -GroupIdentifier 4000 -UserName nfsuser4 -UserIdentifier 5004 -Password $secureString

 

 

An alternative and more basic method is to use “adsiedit.msc” to manipulate the Active Directory objects directly. However, there are few if any safeguards, and extreme caution should be exercised with this method. It is not the preferred method of setting a mapping.

 

Username Mapping Protocol Server

Refer to the Windows Server 2003 R2 documentation ([NFSAUTH] Russel, C., "NFS Authentication", http://www.microsoft.com/technet/interopmigration/unix/sfu/nfsauth.mspx) for configuring mapping information for the identities being used.

Local PASSWD and GROUP files

As these are standard ANSI text files, any ANSI text editor can be used. The file format is the same as the standard UNIX equivalents, and the only active fields are the username, uid and gid for the passwd file, and the group name, gid and group list for the group file.

Note that some of the PowerShell cmdlets can be used to query and test identity mappings set this way, but attempts to set or modify local file based mappings with the PowerShell cmdlets will fail. The following examples assume that the local file-based mapping store has already been configured.

For example, to query the current mapping for the user account “root”

Get-NfsMappedIdentity -AccountName root -AccountType User

 

Or to query for the account name with the UID value of 500 

Get-NfsMappedIdentity -AccountType User -UserIdentifier 500

 

Bulk queries fetching all the mappings in a single command can also be used. The wildcarding options available with the LDAP based mapping stores cannot be used directly, but standard PowerShell pipe based filters can be used as an alternative, as in the sketch below.
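
A sketch of such a filter, assuming the returned objects expose a UserName property (check the actual property names with Get-Member in your environment):

Get-NfsMappedIdentity -AccountType User | Where-Object { $_.UserName -like "nfs*" }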

How to Select a Mapping Method

Generally there is no single mapping solution that is “correct” for every set of circumstances. Instead, the available mechanisms should be weighed against a set of tradeoffs, leading to a prioritized list drawn up from the available methods.

Considerations

  • Is the machine Windows domain joined?
  • For servers, is file access going to be shared by both NFS and non-NFS methods (e.g. files also accessed via SMB shares, or other local applications)?
  • How many Windows machines are making use of NFS services (both client and server)?
  • How many individual users and groups are involved on the Windows machines making use of NFS services?
  • Security
    • Access control – which NFS authentication protocol is in use? For example, RPCSEC_GSS implies a centrally managed account store, and so an identity mapping store would be needed to map the same accounts. Using the same store removes the need for synchronization between the stores that would exist if an alternate mapping method were used.
    • Auditing – is an account identity required to monitor access?
  • NFS authentication method(s) used (e.g. AUTH_SYS etc)?
  • Organizational issues such as availability of the privileges needed to manage identities?
  • Network architecture and user environment? For example, are the connections between NFS clients and NFS server machines placed within a controlled environment (machine room, ipsec etc.)? Are NFS servers visible to machines on which users can run applications?

Determining which solution is appropriate for a given situation requires the administrator to select from the available mechanisms according to the tradeoffs applicable to the expected environment. Typically, solutions should be considered in the following order:

  1. Active Directory
  2. Active Directory Lightweight Directory Services (AD LDS)
  3. Local passwd and group mapping files
  4. Unmapped UNIX User Access (UUUA)

 

NFS Authentication Method

Using AUTH_NONE as the authentication method has no security whatsoever and is equivalent to using anonymous access with AUTH_SYS.

Using AUTH_SYS as the authentication method places no particular restrictions on the mapping method. However, consideration should be given to the ease with which this method can be spoofed; as such it provides no real security.

If the environment requires that NFS be authenticated by RPCSEC_GSS then standard Windows accounts will be required. This excludes the use of Unmapped UNIX User Access. Although using RPCSEC_GSS directly provides the necessary rights to access files, a mapping solution is generally required since many NFS procedures identify users and groups via their UID and GID values even though access to those files is authenticated by RPCSEC_GSS. For example, when a Windows Server 2012 Server for NFS processes a READDIR request, the ability to read the directory is determined by the user identified through RPCSEC_GSS, but the ownership of the items in that directory is described by UID and GID values. Without a mapping solution, the server is unable to determine the proper UID and GID values and so will indicate that the files are all owned by the configured anonymous user account, typically with UID and GID values of 0xfffffffe (or -2).

Domain Joined Machines

Generally the most convenient solution for domain joined machines is to use Active Directory as the mapping store. This is particularly the case if a large fraction of the domain joined machines and / or users will be making use of either or both of the NFS client and server. Using Active Directory helps ensure that there are none of the synchronization issues that occur if there are separate account stores and identity mapping stores.

A possible problem arises if NFS is used by only a small fraction of the accounts or machines: in large organizations it may be organizationally difficult to manage the identities if, for example, a single department uses NFS and the departmental level administrators do not have the domain level privileges required to modify the centrally managed user accounts.

If using Active Directory for mapping information is problematic but domain based identities are still desired then alternative solutions are either Active Directory Lightweight Directory Services (AD LDS) or local mapping files.

Using AD LDS has the advantage of a centrally managed mapping store which is particularly useful if there are many user and/or group accounts, or if the valid accounts change frequently. The accounts being mapped must be domain accounts. However, there needs to be a machine available which can host the AD LDS services. This can be a machine hosting the Windows NFS services.

Using local mapping files requires only machine local administrator privileges rather than domain level privileges, and provides, for a single machine, the same functionality as that available through Active Directory. In addition, local files also allow machine local accounts to be successfully mapped. However, consideration should be given to the number of machines to be managed.

Both AD LDS and local mapping files suffer from the need to maintain synchronization between the primary account store (Active Directory) and the mapping store (AD LDS or local files). For example, if a new NFS user account is added or deleted, then a change will need to be made to the mapping store. The AD LDS mapping store only needs changes to be applied in the one location for all machines using that mapping store to see the updates. However, if local mapping files are in use, then a change will need to be made in all of the copies of the local mapping files that contain a mapping for that account. 

For machines with Server for NFS, if no domain or machine local identities are required, there will be no sharing of the files exported by NFS with any other application or file sharing protocol, and access is via the NFS AUTH_SYS authentication mechanism, then UUUA based access might be a good solution. This method has the advantage of minimal administration load and requires no co-ordination with any other machine; however, it has the potentially significant disadvantage of providing essentially no security.

Non-Domain Joined Machines

The choices for non-domain joined machines are similar to those for domain joined machines, with the exception that Active Directory is no longer available.

Using Active Directory Lightweight Directory Services (AD LDS) provides a single centrally managed mapping store, which is particularly useful if there are many user and/or group accounts, or if the valid accounts change frequently. The accounts being mapped must be machine local accounts, and if care is taken over the naming of the accounts, the same mapping can be used by several machines. However, there needs to be a machine available which can host the AD LDS services, although this can be a machine hosting the Windows NFS services.

Using local mapping files requires only machine local administrator privileges and provides, for a single machine, the same functionality as that available through AD LDS. As long as the account names do not have a domain prefix, machine local accounts are assumed, so the same passwd/group file pair can be used on each machine. Consideration should be given to the number of machines to be managed and the rate of change of the accounts being mapped, to determine whether the administrative costs are acceptable.

Both AD LDS and local mapping files suffer from the need to maintain synchronization between the primary account store (machine local accounts) and the mapping store (AD LDS or local files). For example, if a new NFS user account is added or deleted, then a change will need to be made to the mapping store. The AD LDS mapping store only needs changes to be applied in the one location for all machines using that mapping store to see the updates. However, if local mapping files are in use, then a change will need to be made in all of the copies of the local mapping files that might be used by that account. Alternatively with local mapping files each machine can have individual passwd and group files with accounts specific to that machine; however this is likely to present administrative problems in terms of ensuring the appropriate uniqueness amongst the UID and GID values being used.

For machines configured with Server for NFS, if there is no sharing of the files exported by Server for NFS with any other application or file sharing protocol, and access is via the NFS AUTH_SYS authentication mechanism, then UUUA based access might be a good solution. This method has the advantage of minimal administration load and requires no co-ordination with any other machine; however, as with all AUTH_SYS based mechanisms, it has the potentially significant disadvantage of providing essentially no security.

Troubleshooting

To list all the NFS PowerShell cmdlets

To locate all the NFS related PowerShell commands, start a PowerShell session and use the command

Get-Help *Nfs*

 

 

The alias “help” can be used in place of “Get-Help”.

Get-help can then be used on individual items to get additional details on that item.

  • Get-NfsMappingStore will return the currently configured mapping solution for the machine.
  • Test-NfsMappingStore will test the mapping store to confirm that the machine can access the mapping store
  • Get-NfsMappedIdentity is used to retrieve one or more mapped identity records from the configured mapping store.
  • Test-NfsMappedIdentity is used to verify the configured mapping store can be reached from the machine on which the query is run and that the queried mapping is present in that store.
  • Resolve-NfsMappedIdentity is used to determine the mapping being used by Server for NFS. If the mapping is cached then the cached values are used, otherwise Server for NFS will make a request to the configured mapping store to retrieve the mapping.

To Verify That a Particular Identity Mapping is Active

Although an identity mapping can be set in an identity mapping store, there is no guarantee that machines running Client for NFS and/or Server for NFS can make queries of that store. To determine whether the store is accessible from the machine of interest, log on to the machine in question and use the PowerShell cmdlet “Test-NfsMappedIdentity”, which will make a request to the store for the mapping information needed to satisfy the request. For example, to test the account mapping for UID value 0

Test-NfsMappedIdentity -UserIdentifier 0

 

Or to test the mapping for the group “specgroup”

Test-NfsMappedIdentity -AccountName specgroup -AccountType Group

 

There will only be output from the command if the test operation fails.

Using the “Test-NfsMappedIdentity” cmdlet will also verify that the mapping information for the account in question does not use any improper duplicate values. That is, the UID value for a user account is unique and the GID value for a group account is unique.

The Server for NFS also keeps a cache of recently used identity mappings. To determine the mapping currently being used by, or failing that available to, Server for NFS, the Resolve-NfsMappedIdentity cmdlet can be used. This causes Server for NFS to search the locally cached mapping information or, if there is no local value, to query the configured mapping store for the mapping. In both cases the currently active mapping as known to Server for NFS is returned.

For example, to query for the account mapped to the UID 500

 

Resolve-NfsMappedIdentity -Id 500 -AccountType User

 

Or to query for the UID mapped to the user account “root”

Resolve-NfsMappedIdentity -AccountName root -AccountType User

  

To Verify The Windows NFS Client or Server Is Using Local File Based Mapping

The NFS services write messages to the ServicesForNfs-Server\IdentityMapping channel to indicate whether or not the local files have been found and whether their format is correct. These messages can be examined using the “Event Viewer” utility. If both group and passwd files have been found and are being used, there are two messages, one for each file

For the group file


and for the passwd file.

If there are any issues with either file an appropriate message will indicate which file contains the problem.

Correcting Identity Problems on Files and Directories Using The nfsfile.exe Utility

The Services for NFS Administration Tools feature contains a command line utility, nfsfile.exe, which can be used to correct a number of NFS related identity and access permission related issues for both files and directories. It can also be used to convert files between the UUUA style mapping and Windows style mappings.

For example, to set all the directories and files stored at v:\Shares to be owned by the user account “root” and group account “rootgroup” with UNIX style permissions 755 (rwxr-xr-x) use the command

nfsfile /v /rwu=root /rwg=rootgroup /rm=755 v:\Shares\*

 

or if all the files under an export were originally created using UUUA mapping, but there is now a domain based mapping solution available, all the file mappings can be converted using the command

 

nfsfile /v /s /cw v:\Shares\share-v

 

which converts the export and all the files and directories to a Windows style mapping based on standard Windows accounts.

See the TechNet article at http://technet.microsoft.com/en-us/library/hh509022(v=WS.10).aspx, and in particular the section titled “Using Nfsfile.exe to Manage User and Group Access”.

Note that currently nfsfile.exe cannot obtain mapping information from local file based mappings. This means it cannot do the automatic identity conversion between Windows style mapped files and UUUA style mapped files, where the utility obtains the mapping information appropriate to the files being processed. Instead, the account information must be supplied via the “/r” family of options, whether that is a UID/GID pair or a Windows user and group account, on a file by file or single directory sub-tree basis. That is, all the files in a single directory sub-tree can be converted to a single identity in one command, but different users will require multiple commands; a sketch follows.
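
For example, the reverse of the /cw conversion shown above, converting a sub-tree to UUUA style identities with an explicitly supplied UID/GID pair, might look like the following sketch; confirm the exact switch names against nfsfile /? before use:

nfsfile /v /s /cu /ru=0 /rg=0 v:\Shares\share-v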

Note also that the utility can be used to manipulate non-NFS related file permissions. This is not recommended, as there are several features of Windows file security and access control that the utility is not designed to process. Instead, the standard Windows file system permission management tools and utilities should be used (e.g. the “icacls.exe” utility).

Feedback

Please send feedback you might have to nfsfeed@microsoft.com

 



[1] See [MS-FSSO] Section 8, Appendix A: Product Behavior, Note 7 (http://msdn.microsoft.com/en-us/library/ee380665(v=prot.10)).

Automatic RMS protection of non-MS Office files using FCI and Rights Protected Folder Explorer


FCI

File Classification Infrastructure (FCI) provides insight into your data by automating classification processes so that you can manage your data more effectively. The built-in solution for file classification provides expiration, custom tasks, and reporting. The extensible infrastructure enables Microsoft partners to construct rich end-to-end classification solutions built upon Windows. For more information on FCI, please check the blog post.

Rights Protected Folder Explorer

Rights Protected Folder Explorer (RPFe) is a Windows based application that allows you to protect files and folders. A Rights Protected Folder is similar to a file folder in that it contains files and folders. However, a Rights Protected Folder controls access to the files that it contains, no matter where the Rights Protected Folder is located. By using Rights Protected Folder Explorer, you can securely store or send files to authorized users and control which users will be able to access those files while they are in the Rights Protected Folder. For more information please visit the RPFe blog post.

Protecting highly sensitive data using FCI and RPFe

Today, FCI enables administrators to automatically RMS protect sensitive information on file servers. We had several requests to enable FCI to RMS protect other file types, so we partnered with the RPFe team to provide a solution that enables that scenario.

Using FCI and RPFe, IT admins can Rights Management Services (RMS) protect any file on a file server. Once the files are protected, only authorized users will be able to access them, even if they are copied to another location. To protect non-Microsoft Office file formats, an FCI File Management Job (FMJ) with a custom action and RPFe can be used. We will now explore how to accomplish the task of protecting sensitive files other than Microsoft Office files. RPFe has a command line utility that can protect files, and an FMJ custom action can be used to invoke the RPFe command line utility under a desired namespace/share where the admin wants files protected automatically.

RPFe Usage:

  • Install the RPFe tool on the file server by following the guidelines from here
  • Get the RMS template GUID to be used on the command line. Go to %LOCALAPPDATA%\Microsoft\MSIPC\Templates on the file server, open the XML file for the template of interest, and get the GUID associated with OBJECT ID (see the PowerShell one-liner after this section)
  • Command line usage to protect a file:

RPFExplorer.exe /Create /Rpf:"G:\Share\CustomerInfo.txt.rpf" /TemplateId:{00a956d6-d14c-4a2c-bf86-c1e70b731e7b} /File:"G:\Share\CustomerInfo.txt"

  • Parameter explanation:
    • /File is the file that needs to be protected.
    • /Rpf is the new RMS protected file that will be created.
    • /TemplateID is the RMS template GUID gathered from the step above.
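
As a convenience, the template GUIDs can also be surfaced from PowerShell. This is a sketch only; it assumes the GUID sits on a line mentioning OBJECT in the template XML, so verify the output against your own template files:

Select-String -Path "$env:LOCALAPPDATA\Microsoft\MSIPC\Templates\*.xml" -Pattern "OBJECT"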

 

RPFe Protection

The original file stays as it is; no change is made to it. A new RMS protected RPFe container is created under the same parent directory, containing a copy of the original file.

FCI Integration with RPFe

To automate file protection using RPFe and FCI, please follow the steps below. The FMJ custom action calls a PowerShell script for each file that meets the FMJ condition, and the PowerShell script calls the RPFe command line utility to protect the file.

 

Create a File Management Job with a custom action on the desired share with the following configuration:

  • Install the RPFe tool from here
  • Copy the script below to a new file called %SystemRoot%\System32\FCIRPFeFileProtection.ps1
  • For the executable path, set the parameter to “%SystemRoot%\System32\WindowsPowerShell\v1.0\powershell.exe”
  • For the arguments, set them to -File “%SystemRoot%\System32\FCIRPFeFileProtection.ps1” TemplateID [Source File Path] [Source File Owner Email] [Admin Email]
  • Configure the file extensions for files that need to be protected in the Condition tab of the FMJ creation wizard. It is recommended to restrict the FMJ to specific file extensions only
  • Additional filters can be added to filter files based on classification properties in the FMJ
  • Mark the File Management Job as continuous in the Schedule tab of the FMJ creation wizard

 

File Protection Script

  • FCIRPFeFileProtection.ps1 is a simple PowerShell script that takes the Template ID, source file path, file owner email and admin email as parameters from the File Management Job and calls the RPFe command line utility to protect the file. A protected copy of the original file is created in the same folder as the original.
  • The script copies all classification properties from the source file to the protected file. This ensures that all classification information is carried over from the original file to the protected file.
  • Please make sure to configure the FMJ to run on specific extensions. If the FMJ is marked as continuous and configured to run on all file types (say on P:\foo) and a new file P:\foo\file.txt is created, the FMJ kicks in recursively: first P:\foo\file.txt.rpf is created, which causes the FMJ to act on it, creating P:\foo\file.txt.rpf.rpf, and so forth. It is recommended to restrict the FMJ to specific file extensions only.
  • Please note that the script creates a protected copy of a file; the original file remains in the share. Take care to have enough space on the volume to accommodate protected copies of sensitive data. If you intend to delete the original file after it has been successfully protected, remove the comment on the “remove-item $encryptfile” line and test in your environment before deployment.
  • The script returns errors back to the FMJ. Any file that encountered an error will be reported in the FMJ error report and log.
  • FCIRPFeFileProtection.ps1: below is a sample PowerShell script that protects files using RPFe

 

#
#             Main Routine Begin
#
# Arguments passed by the File Management Job:
#   $args[0] - RMS template GUID
#   $args[1] - full path of the file to protect
#   $args[2] - owner email (the literal "[Source File Owner Email]" placeholder if none)
#   last arg - administrator email
$TemplateID  = $args[0]
$encryptfile = $args[1]
$newfile = $encryptfile + ".rpf"

# verify that the new file name does not exist and attempt to find a new name
$ver = 0
while (Test-Path $newfile)
{
    $ver = $ver + 1
    $newfile = $encryptfile + $ver + ".rpf"
    if ($ver -gt 100) {
        exit -1 # could not find a good name for the rpf file
    }
}

# get the owner of the file; if not found, use the supplied administrator email address.
# With no owner email, the "[Source File Owner Email]" placeholder is passed through
# literally and splits into four tokens, so the admin email lands in $args[6].
$owneremail = $args[2]
if ($owneremail -eq "[Source")
{
    $owneremail = $args[6]
}

# run the RPF Explorer to encrypt the file
$arguments = "/Create /Rpf:" + "`"" + $newfile + "`"" + " /TemplateId:" + $TemplateID + " /File:" + "`"" + $encryptfile + "`"" + " /Owner:" + $owneremail
$run = Start-Process -Wait -PassThru -FilePath "C:\Microsoft_Rights_Protected_Folder_Explorer\RPFExplorer.exe" -ArgumentList $arguments

if ($run.ExitCode -eq 0)
{
    # transfer classification properties from the old file to the new file
    $cm = New-Object -ComObject FSRM.FSRMClassificationManager
    $props = $cm.EnumFileProperties($encryptfile, 1)
    try
    {
        foreach ($prop in $props)
        {
            $cm.SetFileProperty($newfile, $prop.Name, $prop.Value)
        }
    } catch [Exception] {
        remove-item $newfile
        exit -1
    }

    # remove-item $encryptfile
    # The original file can be removed after successfully creating a protected copy.
    # Before adding the above remove-item line, please test in your environment and
    # verify that there is no data loss.
}

exit $run.ExitCode
#
#             Main routine end
#

RPFe Files on Non-Windows Machines

RPF files are not recognized on non-Windows devices, because there is no AD RMS client available on non-Windows platforms; as a result, non-Windows users won’t be able to consume RPF files.

 

 



 

 

 

 

 
