Storage at Microsoft

Getting started with Central Access Policies - Reducing security group complexity and achieving data access compliance using Dynamic Access Control


Following the introduction to Dynamic Access Control, I’d like to give you a few practical examples of how you can utilize Dynamic Access Control in your File Server environment.

For many of the IT and security architects we have talked to about Dynamic Access Control, the first reaction has been that this is a very powerful feature set. Then come the obvious questions: How do I get started? What information should I target first?

Let’s explore two common Central Access Policy scenarios that this technology helps you solve:

1. Safe Harbor access compliance

2. Reducing the complexity for security groups

For both examples, I am going to use Dynamic Access Control with security groups so that all you need in your environment to achieve this are:

· A Windows Server 2012 File Server

· A domain with a Windows Server 2012 schema (so that you can define central access policies)

You can find a demo video that shows the deployment steps for the two scenarios here.

Safe Harbor - Access compliance sample scenario

If your company needs to deal with data that falls under the US-European Union Safe Harbor regulation, Dynamic Access Control can help you limit access to that data to the specific group of people allowed access to Safe Harbor data.

The steps to achieve this are:

1. Tagging the data by marking the folders that contain Safe Harbor data (Compliancy=Safe Harbor)

2. Configuring a Central Access Rule that specifies that only specific security groups can access data that is marked as Safe Harbor

3. Applying a Central Access Policy to the appropriate Windows Server 2012 File Servers in your organization

Note that there is no need to change the current NTFS access rules since the Central Access Policy acts as a safety net that enhances the local NTFS access rules.

Now, when someone wants to review who can access Safe Harbor information, you can simply show the security group(s) specified in the central access rule.

You can find instructions on how to configure Central Access Policies in our TechNet documentation (http://technet.microsoft.com/en-us/library/hh831717.aspx), which includes both step-by-step instructions and PowerShell cmdlets. Since the documentation provides a more general example that also includes user claims, here are the specific configuration steps for the Safe Harbor scenario:

(Note: This walkthrough is for illustration only, since it skips some of the configuration steps such as Group Policy. For a complete walkthrough, please use the above-mentioned TechNet documentation.)

· Using Active Directory Administration Center to enable the pre-defined “Compliancy” resource property so that it can be used to tag files


Here are the PowerShell commands and syntax to enable the “Compliancy” property definition:

Set-ADResourceProperty Compliancy_MS -Enabled:$true -Server:"DC1.contoso.com"

· Using Active Directory Administration Center to create a Central Access Rule for Safe Harbor


Here are the PowerShell commands and syntax to define the “Safe Harbor” Central Access Rule:

New-ADCentralAccessRule -CurrentAcl:"O:SYG:SYD:AR(A;;FA;;;WD)" -Name:"Safe Harbor Central Access Rule" -proposedAcl:"O:SYG:SYD:AR(A;;FA;;;OW)(A;;FA;;;SY)(A;;0x1301bf;;;S-1-5-21-3479418876-2348745693-2649290146-1224)" -ProtectedFromAccidentalDeletion:$true -ResourceCondition:"(@RESOURCE.Compliancy_MS Any_of {`"US-EU Safe Harbor`"})" -Server:"DC1.contoso.com"

· Using the Folder properties Classification tab to tag a folder with Compliancy = Safe Harbor


Here are the PowerShell commands and syntax to set the “Compliancy” to “US-EU Safe Harbor” on a specific folder:

$cls = new-object -com Fsrm.FsrmClassificationManager

$cls.SetFileProperty("d:\data shares\germany", "Compliancy_MS", "US-EU Safe Harbor")

· Assigning a Central Access Policy (that includes the Safe Harbor Central Access Rule) to a File Server share using Folder properties → Security → Advanced


Here are the PowerShell commands and syntax to set a Central Access Policy on a folder:

(Note that to get to this step we skipped the distribution of the central access policy through group policy that is covered in the TechNet documentation)

$acl = Get-Acl "d:\Data shares" -Audit

Set-Acl -Path "d:\Data shares" -AclObject $acl -CentralAccessPolicy "Contoso central access policy"

Reducing the complexity of security groups – Need-to-know sample scenario

In this example, I'd like to describe a common scenario that shows how you can use Dynamic Access Control to considerably reduce a combinatorial number of security groups (for example, from 2,000 to fewer than 100), so that you have a clear understanding of who can access what data and can easily adjust access when people move between roles in the company.

The example is a company that has branches in 50 countries and 20 different departments, where each department in each country holds some sensitive information.

[Diagram: folder tree with a top-level folder per country, a subfolder per department, and a sensitive-information folder under each department]

If we structure the folders as shown above, we would need more than 2,000 groups (50 countries × 20 departments × 2 sensitivity levels) to cover all the combinations so that we can assign the appropriate access control.

You will then need to define an access control list (ACL) for each of the folders using the designated security groups for that folder structure. This will result in more than 2,000 different ACLs.

Last, if a person from the Finance department with access to sensitive information moves to the HR department, we need to change at a minimum 100 security groups (remove from 50 and add to 50).

So, we have 2,000 groups, 2,000 ACLs, and many groups that are affected by a person changing roles, not to mention the complexity of adding another level (say, Branch) or the implications if we want to change the folder structure.

With Dynamic Access Control, you can cut the number of groups down from 2,000 to 71 (50 for country, 20 for department and 1 for sensitive data access). This is made possible by the ability to use expressions in Windows ACLs. For example, you would use MemberOf(Spain_Security_Group) AND MemberOf(Finance_Security_Group) AND MemberOf(Sensitive_Security_Group) to limit access to the sensitive information of Spain's finance department.
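
Under the covers, such an expression is carried in a conditional ACE. Here is a rough illustration of its SDDL form, modeled on the central access rule syntax shown later in this post; the three SID placeholders stand in for the actual group SIDs:

(XA;;0x1301bf;;;AU;((Member_of {SID(<Spain_Security_Group>)}) && (Member_of {SID(<Finance_Security_Group>)}) && (Member_of {SID(<Sensitive_Security_Group>)})))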

In addition, if a person from the Finance department with access to sensitive information moves to the HR department, you would need to make only two changes to the security groups (vs. 100 changes): removing the person from the Finance security group and adding him/her to the HR security group.

Still, if you want to apply the conditions to each folder, you would need more than 2,000 ACLs. This is where Central Access Rules in combination with data tagging can be used to reduce the number of access conditions to 71, matching each group with the appropriate data tagging. By using this method for controlling access, it is also easy to add to and make changes to the folder structure.

To use Central Access Rules for reducing the complexity, you would tag each country folder with the appropriate country value, each department folder with the appropriate department value and the sensitive folder with a high sensitivity value. Then you would configure three Central Access Rules: one for country, one for department and one for sensitive information.

The country Central Access Rule would govern the access to each country sub tree and will contain 50 different access conditions such as: Resource.Country = US AND User.MemberOf(US_Security_Group) which allows only people from the US_Security_Group to access files that are tagged as Country=US.

The department Central Access Rule would govern the access to each department sub tree and will contain 20 different access conditions such as: Resource.Department = Finance AND User.MemberOf(Finance_Security_Group) which allows only people from the Finance_Security_Group to access files that are tagged as Department=Finance.

Last, the sensitive information Central Access Rule would govern the access to each of the sensitive information sub tree and will contain 1 access condition: User.MemberOf(Sensitive_Security_Group).

The configuration is very similar to the Safe Harbor configuration outlined above. Here are the steps for creating the “Country” related access rule:

· Using Active Directory Administration Center to create “Country” resource property and populate it with the list of allowed values


Here are the PowerShell commands and syntax to create the “Country” property definition:

New-ADResourceProperty -DisplayName:"Country" -IsSecured:$true -ProtectedFromAccidentalDeletion:$true -PassThru:$null -ResourcePropertyValueType:"CN=MS-DS-SinglevaluedChoice,CN=Value Types,CN=Claims Configuration,CN=Services,CN=Configuration,DC=contoso,DC=com" -Server:"DC1.contoso.com" -SuggestedValues:@((New-Object Microsoft.ActiveDirectory.Management.ADSuggestedValueEntry("Canada", "Canada", "")), (New-Object Microsoft.ActiveDirectory.Management.ADSuggestedValueEntry("China", "China", "")), (New-Object Microsoft.ActiveDirectory.Management.ADSuggestedValueEntry("Germany", "Germany", "")), (New-Object Microsoft.ActiveDirectory.Management.ADSuggestedValueEntry("Spain", "Spain", "")), (New-Object Microsoft.ActiveDirectory.Management.ADSuggestedValueEntry("US", "US", "")))

Add-ADResourcePropertyListMember "Global Resource Property List" -Members Country_88cf03590d9e3476 -Server "DC1.contoso.com"

· Using Active Directory Administration Center to create a Central Access Rule for access to specific country information


Here are the PowerShell commands and syntax to create the “Country” Central Access Rule:

(Note that this is based on security group IDs and the unique ID of the Country property defined above)

New-ADCentralAccessRule -CurrentAcl:"O:SYG:SYD:AR(A;;FA;;;WD)" -Name:"Country based rule using groups" -ProposedAcl:"O:SYG:SYD:AR(A;;FA;;;OW)(A;;FA;;;BA)(A;;FA;;;SY)(XA;;0x1301bf;;;AU;((Member_of {SID(S-1-5-21-3479418876-2348745693-2649290146-1223)}) && (@RESOURCE.Country_88cf03590d9e3476 == `"Spain`")))(XA;;0x1301bf;;;AU;((Member_of {SID(S-1-5-21-3479418876-2348745693-2649290146-1221)}) && (@RESOURCE.Country_88cf03590d9e3476 == `"China`")))(XA;;0x1301bf;;;AU;((Member_of {SID(S-1-5-21-3479418876-2348745693-2649290146-1220)}) && (@RESOURCE.Country_88cf03590d9e3476 == `"Canada`")))(XA;;0x1301bf;;;AU;((Member_of {SID(S-1-5-21-3479418876-2348745693-2649290146-1222)}) && (@RESOURCE.Country_88cf03590d9e3476 == `"Germany`")))(XA;;0x1301bf;;;AU;((Member_of {SID(S-1-5-21-3479418876-2348745693-2649290146-1218)}) && (@RESOURCE.Country_88cf03590d9e3476 == `"US`")))" -ProtectedFromAccidentalDeletion:$true -ResourceCondition:"(Exists @RESOURCE.Country_88cf03590d9e3476)" -Server:"DC1.contoso.com"

· Using the Folder properties Classification tab to tag a folder tree with Country = US


Here are the PowerShell commands and syntax to set the “Country” property to “US” on a folder:

(Note that the country resource property has a unique identifier that is automatically assigned on creation and this identifier is used when calling SetFileProperty)

$cls = new-object -com Fsrm.FsrmClassificationManager

$cls.SetFileProperty("d:\data shares\us", "Country_88cf03590d9e3476", "US")

· Assigning a Central Access Policy that includes the three Central Access Rules for Country, Department and Sensitive information to the root of the File Server share for all countries (note that this is exactly the same step as the one we used for Safe Harbor, since we can use one Central Access Policy that contains the access control for both Safe Harbor information and country/department information)


Here are the PowerShell commands and syntax to set a Central Access Policy on a folder:

(Note that to get to this step we skipped the distribution of the central access policy through group policy which is covered in the TechNet documentation)

$acl = Get-Acl "d:\Data shares" -Audit

Set-Acl -Path "d:\Data shares" -AclObject $acl -CentralAccessPolicy "Contoso central access policy"

Nir Ben-Zvi


iSCSI Target cmdlet reference


Windows Server 2012 comes with a complete set of PowerShell cmdlets for iSCSI. By combining the iSCSI target, iSCSI initiator, and storage cmdlets, you can automate pretty much all management tasks. This blog post will provide you with some examples. If you want more information about iSCSI Target and its basic configuration cmdlets, please see Introduction to iSCSI Target in Windows Server 2012.

 

Cmdlet Basics

Modules

The cmdlets are grouped into modules. To get all the cmdlets in a module, type "Get-Command -Module <name>". For example:

iSCSI Target cmdlets:

Get-command -module iSCSITarget

iSCSI Initiator cmdlets:

Get-command -module iSCSI

Volume, partition, disk, storage pool and related cmdlets:

Get-command -module storage

General Windows PowerShell guidelines were followed in the development of these cmdlets, using the standard Windows PowerShell verbs to manage the objects. If you are familiar with PowerShell, it is quite easy to pick up these modules.

To learn more about iSCSI Initiator cmdlets, see iSCSI Cmdlets in Windows PowerShell

To learn more about iSCSI Target cmdlets, see iSCSI Target Cmdlets in Windows PowerShell

To learn more about storage cmdlets, see Storage Cmdlets in Windows PowerShell

Another very good reference for cmdlet management for file and storage-related scenarios: Windows PowerShell Reference Sheet for File and Storage Services in Windows Server 2012

Requirements

The iSCSI initiator cmdlets require the msiscsi service. By default, this service is stopped and not set to start automatically. The first time you configure the iSCSI initiator, you should start this service as well as change the service startup type to Automatic:

Start-Service msiscsi
Set-Service msiscsi -StartupType Automatic

Note: if you open the iSCSI initiator control panel applet, it will prompt you to start this service and change the service startup type to Automatic.

iSCSI Target Server module is loaded by default in Windows Server 2012. However, the cmdlets cannot be run successfully without the iSCSI Target Server role service installed. To install it, run:

Add-WindowsFeature FS-iSCSITarget-Server

The cluster-related cmdlets that create the "iSCSI Target Server clustered role" require the Failover Clustering feature. You can enable this feature and its management tools using the following:

Add-WindowsFeature Failover-Clustering

Add-WindowsFeature RSAT-Clustering

Add-WindowsFeature RSAT-Clustering-Mgmt

Add-WindowsFeature RSAT-Clustering-PowerShell
 

Cmdlet Nouns

The iSCSI Initiator and iSCSI Target modules are both loaded by default on Windows Server 2012. The nouns managed by both modules are prefixed with Iscsi. This section explains these nouns.

To better understand the iSCSI initiator, see Understanding Microsoft iSCSI Initiator Features and Components. Although the article specifically mentions Windows 7 and Windows Server 2008 R2, the concepts still apply to Windows 8 and Windows Server 2012. For additional reference on terminology, see Introduction to iSCSI Target in Windows Server 2012.

The diagram below illustrates the relationship of the cmdlet nouns in a simplified view:

[Diagram: relationship between the initiator-side and target-side cmdlet nouns]

The following nouns are managed, accessed, and hosted on the iSCSI initiator computer:

  • IscsiTarget: This object identifies the iSCSI target on the server from the initiator side. It is the object which the iSCSI initiator can connect to. The initiator can only see those targets which allow it access.
  • IscsiConnection: the TCP connection between the iSCSI initiator and target.
  • IscsiSession: the session established between the initiator and target. For each session, there can be one or more connections. In Windows Server 2012, iSCSI Target Server supports one connection per session. You can register a session so that the initiator will automatically reconnect to the target after reboot (see the sketch after this list). If the initiator connects to the target multiple times, there will be multiple sessions, and you will need to enable the MPIO feature on the initiator to view the iSCSI LUNs.
  • IscsiTargetPortal: specifies the iSCSI Target Server name or IP address.
  • IscsiChapSecret: specifies the iSCSI target CHAP secret to be used during login.
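
As an aside, here is a minimal sketch of establishing a persistent session from the initiator side with the iSCSI module cmdlets; the portal address is illustrative:

# Point the initiator at the target portal, then connect so that the session
# is automatically restored after a reboot.
New-IscsiTargetPortal -TargetPortalAddress 192.168.1.50
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true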

The following cmdlet nouns are hosted on and managed by the iSCSI Target Server computer:

  • IscsiServerTarget: the target object on the iSCSI Target Server computer. By default, iSCSI targets are not accessible by any initiators. You will need to configure the initiatorIDs to specify the initiators that can access it. There are four ways to identify an iSCSI initiator: IQN, DNS, IP (or IPv6) or MAC address. The preferred assignment is to use IQN because the iSCSI target can compare the IQN from the incoming connection and determine if it is allowed for access. With DNSName assignment, the iSCSI target will need to resolve the name with a DNS server, which takes longer for login. If you have a static IP environment, IP address can be used for the assignment. To keep the management simple, it is better to use the same assignment method for the initiatorID. For example, if you use IP address for assignment, always use it, so that you can easily identify the initiators in the cmdlet queries. In later examples, you will see how the initiatorIDs can be used in the cmdlets.
  • IscsiVirtualDisk: represents the iSCSI virtual disk objects. The iSCSI virtual disk objects are mapped to a VHD file. Removing the IscsiVirtualDisk object will not automatically delete the VHD files. If you want to load a VHD file, you will need to import it first.

Importing VHD files to iSCSI Target Server on Windows Server 2012 is supported as follows:

[Table: VHD import support matrix; see the notes below]

Note:

1. For differencing VHDs, you will need to run Convert-IscsiVirtualDisk before importing them.

2. iSCSI Target Server doesn’t support dynamic virtual disks and you cannot create dynamic VHDs using iSCSI Target.

3. iSCSI Target Server doesn’t support dynamic virtual disks and you cannot import dynamic disks created by Hyper-V.

4. iSCSI Target Server doesn’t support VHDX and you cannot create VHDX using iSCSI Target Server.

5. iSCSI Target Server doesn’t support VHDX and you cannot import VHDX created by Hyper-V.

  • IscsiVirtualDiskSnapshot: represents the snapshots for the virtual disk objects. iSCSI Target Server supports taking snapshots of the virtual disks for backup.
  • IscsiVirtualDiskTargetMapping: represents the assignment between the iSCSI target and the iSCSI virtual disks.
  • IscsiTargetServerSetting: allows you to specify common settings for iSCSI Target Server.

 

Cmdlet Help

Cmdlet help has a new mechanism that allows dynamic update of the content from the Internet. Before using the help, you need to run "Update-Help -Module <name>" to get the latest help content. This requires that the computer has Internet connectivity. To get help for all modules, type "Update-Help".

To use the help, type <cmdlet name> -?

For example:

Get-IscsiServerTarget -?

To search for the cmdlets that can manage the IscsiVirtualDisk objects, you can type:

Get-Command *IscsiVirtualDisk*

This is a great way to find all the cmdlets that can manage the iSCSI virtual disks.

 

Management and Maintenance Tasks

To learn more about the iSCSI Target configuration cmdlets, see Introduction to iSCSI Target in Windows Server 2012. This blog post will focus on the following iSCSI management and maintenance tasks:

  1. Get all iSCSI targets associated with a virtual disk
  2. Get all virtual disks associated with an iSCSI target
  3. Get all iSCSI initiators accessing a virtual disk
  4. Get all virtual disks used by an iSCSI initiator
  5. Replace an iSCSI initiator assignment
  6. Add an iSCSI initiator to an iSCSI target
  7. Remove all virtual disks from an iSCSI Target
  8. Delete all iSCSI initiators from an iSCSI target
  9. Delete all virtual disks that are not assigned to any iSCSI target
  10. Get all iSCSI Targets that have not been used or connected for seven days
  11. Manage iSNS Server registration

The following diagram illustrates these tasks. To set up this scenario, use the iSCSI cmdlets listed in Introduction to iSCSI Target in Windows Server 2012.

[Diagram: example setup with five iSCSI targets, five virtual disks, and initiators Init1–Init3, as described below]

On the iSCSI Target server, there are five iSCSI target objects and five virtual disk objects. Target1 is assigned to Init1, which can access two virtual disks. Target2 is assigned to Init2, and Target3 is assigned to Init3. Both Init2 and Init3 have access to iSCSI Virtual Disk 3. Target4 is not assigned to any initiators. Target5 has no association to any initiators or virtual disks. iSCSI Virtual Disk 5 has no association to any iSCSI target.

1. Get all iSCSI targets associated with a virtual disk

Using the above example, let’s say something went wrong with iSCSI Virtual Disk 3 and you want to find all the iSCSI targets associated with this virtual disk. You can run the following:

Get-IscsiServerTarget -Path C:\iSCSIVirtualDisks\LUN3.VHD

For the example above, the output shows all the iSCSI targets (Target2 and Target3) associated with the virtual disk (c:\Iscsivirtualdisks\LUN3.vhd).

2. Get all virtual disks associated with an iSCSI target

In this example, Target1 got disconnected for some reason, and you want to find all the iSCSI virtual disks which are associated with this target:

Get-IscsiVirtualDisk -TargetName <name>

The output shows all the Virtual disks (LUN1.VHD and LUN2.VHD) which are associated with Target1.

3. Get all the iSCSI initiators accessing a virtual disk

Suppose one of the virtual disks (C:\IscsiVirtualDisks\LUN3.VHD) is having issues, and you want to disable it. Before doing that, you want to find all the initiators which will be affected:

(Get-IscsiServerTarget -Path C:\IscsiVirtualDisks\LUN3.VHD).InitiatorIds

The cmdlet shows the initiators that can access the virtual disk (C:\IscsiVirtualDisks\LUN3.VHD).

4. Get all virtual disks used by an iSCSI initiator

Say one of the initiators (Init1) is going offline, and you want to determine all the virtual disks which can be accessed by Init1, using its IQN:

(Get-IscsiServerTarget -InitiatorId "IQN:iqn.1991-05.com.microsoft:init1.contoso.com").LunMappings.Path


Note:

  • The virtual disks (c:\iscsivirtualdisks\LUN1.vhd and c:\iscsivirtualdisks\LUN2.vhd) may not be exclusively assigned to Init1. You can use the cmdlets in the previous task to find all the initiators for each virtual disk.
  • Make sure you use the right ID to identify the initiator. If you used DNSName during the initiator assignment, you will need to use the same DNSName in the cmdlet.

5. Replace an iSCSI initiator assignment

Suppose the iSCSI initiator computer Init1 is being replaced by a new computer, Init5. You want to assign to Init5 all the iSCSI targets which were associated with Init1.

Get-IscsiServerTarget -InitiatorId "IQN:iqn.1991-05.com.microsoft:Init1.Contoso.com" |
foreach {Set-IscsiServerTarget $_.TargetName -InitiatorIds "IQN:iqn.1991-05.com.microsoft:Init5.Contoso.com"}

Note: This cmdlet will remove all existing initiator assignments for the given targets and replace them with the new initiator ID. To add initiators, use the cmdlet in the next section.

6. Add an iSCSI initiator to an iSCSI target

You want to add a new computer Init4 and build a two-node cluster with Init1. To do so, you need to add Init4 to the Target1 InitiatorIds so that Init4 can also log on to Target1 to access the shared storage:

$c = New-Object Microsoft.Iscsi.Target.Commands.InitiatorId("iqn:iqn.1991-05.com.microsoft:init4.contoso.com")

Get-IscsiServerTarget Target1 | Set-IscsiServerTarget -InitiatorIds {$_.InitiatorIds + $c}

Note: the script block appends the new initiator ID $c to the initiators already assigned to the target.

If you don’t remember which iSCSI target Init1 was associated with, use the following:

Get-IscsiServerTarget -InitiatorId "iqn:iqn.1991-05.com.microsoft:init1.contoso.com" |
Set-IscsiServerTarget -InitiatorIds {$_.InitiatorIds + $c}

The Target now has both Init1 and Init4 added to allow access.

Note: You can also get the IQN of the initiator on the local computer by running:

Get-InitiatorPort | fl -Property NodeAddress

7. Remove all virtual disks from an iSCSI target

While doing maintenance on the iSCSI target, you find Init3 is accessing the same iSCSI virtual disk as Init2. You want to reassign Init3 to Target4 and remove Target3. You cannot delete the iSCSI target if there are any virtual disks associated with it, so you will need to remove all the iSCSI Virtual disks from Target3 first.

(Get-IscsiServerTarget Target3).LunMappings | Remove-IscsiVirtualDiskTargetMapping


Check that Target3 has no iSCSI virtual disks associated with it. Now you can proceed with the next task to remove Init3 from Target3 before deleting the iSCSI target.

8. Delete all iSCSI initiators from a given iSCSI target

Let's remove the initiators associated with Target3. Deleting the initiators is equivalent to setting the initiator assignment to an empty list:

Set-IscsiServerTarget Target3 -InitiatorIds @()

You can safely remove the iSCSI target by running:

Remove-IscsiServerTarget Target3

9. Delete all virtual disks which are not assigned to any iSCSI target

In our example, you may notice iSCSI Virtual Disk 5 is not assigned to any iSCSI target. It is easy to delete a specific virtual disk if you know which one it is. The script below illustrates a more generic approach to find and delete all the disks which are not assigned to any iSCSI target.

It’s a little tricky, so we list two different approaches:

  • You can find the virtual disks which do not have any iSCSI target by running:
Get-IscsiVirtualDisk | where { $targets = Get-IscsiServerTarget -Path $_.Path;
$targets.Count -eq 0} | Remove-IscsiVirtualDisk
  • Or you can get all the virtual disks and pick out the ones that are assigned - the ones remaining are the “unassigned” virtual disks:
$AllDisks = (Get-IscsiVirtualDisk).Path

$AssignedDisks = (Get-IscsiServerTarget).LunMappings.Path

$AllDisks | where {$AssignedDisks -notcontains $_} | 
foreach {Remove-IscsiVirtualDisk $_ }

When removing the iSCSI virtual disks, you can also delete the backing VHD files by adding:

$AllDisks | where {$AssignedDisks -notcontains $_} | 
foreach {Remove-IscsiVirtualDisk $_ ; rm $_ }

10. Get all iSCSI targets that have not been used for seven days

To see which iSCSI targets are not being used or connected to, we introduced a new parameter, "IdleDuration", on the iSCSI target object. This records the time since the last session was disconnected from an iSCSI target. In conjunction with the "LastLogin" parameter, you can get an idea of how long this target was in use versus sitting idle. In this example, I want to find all the targets which have been sitting idle for the past week:

Get-IscsiServerTarget | where { $_.IdleDuration -ge [timespan]"7.00:00:00" }

Note that both "IdleDuration" and "LastLogin" are not persistent, so they will be reset if the iSCSI Target service is restarted.

11. Manage iSNS server registration

iSNS server registration can be done using the following cmdlets, which manage the underlying WMI objects.

To add an iSNS server:

Set-WmiInstance -Namespace root\wmi -Class WT_iSNSServer -Arguments @{ServerName="ISNSservername"}

To view iSNS server settings:

Get-WmiObject -Namespace root\wmi -Class WT_iSNSServer

To delete an iSNS server:

Get-WmiObject -Namespace root\wmi -Class WT_iSNSServer -Filter "ServerName='iSNSServerName'" | Remove-WmiInstance

Conclusion

I hope these examples give you some ideas on how piping can be used with the iSCSI Target cmdlets. If you have questions not covered here, please raise them in the comments so I can address them in upcoming posts.

Introduction to SMI-S


About ten years ago, a group of engineers from different storage companies got together to create a powerful management abstraction that would enable tools for centralized storage environments like storage area networks (SAN) to become less vendor-specific but still allow any vendor to fully surface the capabilities of their storage hardware. One of the original goals was to build a model that relied on industry standards, largely from the DMTF, W3C and IETF. This included using a common model and schema (the Common Information Model or CIM). Unfortunately a model and schema only get you so far; they don’t give you something that can be implemented. You need a transport and a language to encapsulate the model. At the time, the logical choice for encoding the data was XML, specifically CIM-XML (also standardized in the DMTF) and the logical transport was HTTP. Collectively these technologies are known as Web-Based Enterprise Management, or WBEM.

[There are other choices for transport now available, such as WS-Management, a SOAP protocol that is also on the way to becoming an international standard, and the Windows-only WMI COM implementation of WBEM. I’ll speak more about those technologies at a later time, as they are also key to Microsoft’s standards-based management approach.]

The direction had been set in the Storage Network Industry Association, but the work moved into a private consortium known as PDP for reasons that I won’t go into here. A few years later, the group handed the work back to SNIA as a draft called Bluefin, which later became the Storage Management Initiative Specification, or SMI-S 1.0.

So what is SMI-S? Well, it's basically a cookbook that explains how to use the model elements to do useful work, like discovering and provisioning storage. Actually, it has grown into multiple cookbooks now, divided into topics like Common Profiles, Block, Filesystems, Fabric, etc. The SNIA has active working groups updating the specification every year or two to keep up with the latest technologies. The current, recently published version is 1.6.0, with some updates expected by the end of the year.

A closer look

Let’s look at the different components that make up SMI-S:

Providers

An SMI-S provider is a software component produced by or for a particular storage product or family of products. It implements an HTTP server, parses XML requests, and knows how to control those specific devices, which usually have their own unique management interfaces. The provider can be embedded in the firmware or operating system of the device itself, or it can exist as a standalone application running on a general-purpose computer operating system. The latter model is called a "proxy" provider, and it is the most common way to implement SMI-S providers at this time.

Profiles

Profiles are a way of describing what components of the schema will be used to describe a particular management domain. For example, there is an Array profile. If a vendor claims support for the Array profile, they also agree to implement the classes defined in that profile, the specific properties and methods of those classes deemed necessary or optional to do useful work, and they also have to implement other related component profiles or subprofiles.

Profiles and subprofiles also tell you what the array can do. If the array can perform snapshotting or cloning of storage volumes, the provider can advertise that capability by claiming support for Replication Services. Application developers then know that they can use the CreateElementReplica method to create a new copy of the storage volume – an incredibly powerful capability used heavily in virtual machine management.

Profiles were created by SNIA and also by the DMTF. In some cases, SNIA specializes the profiles for use with SMI-S.

Classes

A class is a set of properties and methods that pertain to a specific object, or to an aspect of an object. Classes typically inherit properties from base or superclasses. An example class is a Disk Drive (in the CIM model, CIM_DiskDrive). Some properties of the disk drive might be what kind of drive it is (SSD, hard disk, etc.), how fast it spins, and what it’s doing right now.

A vendor can introduce new properties and methods by creating their own version of a class, inheriting from the base class and then adding their uniqueness to their derived class. In that way, the model allows extensibility while still preserving the common functionality that any application can use without understanding the specific value-added capabilities of the device it is trying to manage.

Most of the classes used by SMI-S are standardized in the DMTF CIM schema. There are over 1500 classes already defined, and new classes and additions to existing classes can happen frequently as the schema gets updated about three times a year.

The provider creates an instance of a class when it represents the actual objects. In our disk drive example, there will be an instance of the CIM_DiskDrive (or vendor-derived version of that class) for each physical drive in the storage array. An enumeration operation will retrieve all the instances, or a specific instance can be retrieved by itself.

Associations

It isn’t really enough to just count up all the objects of a particular class. Objects of one class might be related to another class and this is represented by associations. So a disk drive would be associated directly or indirectly to the object that represents the storage array. That way you can start at the top and find the related objects, or you can go from the bottom up to the top.
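
To make the class/instance/association model concrete, here is a minimal sketch using the CIM cmdlets that ship with Windows Server 2012; the namespace and class are illustrative, and an SMI-S client would issue the equivalent WBEM operations over HTTP:

# Enumerate all instances of a class, then follow their associations
# to discover the related objects.
$drives = Get-CimInstance -Namespace root/cimv2 -ClassName CIM_DiskDrive
$drives | Get-CimAssociatedInstance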

What Microsoft is doing with SMI-S

Microsoft has become an active participant in the SNIA technical working groups and also in the SMI-Lab plugfests. Over the past few years we have been testing hardware vendors’ providers with our client interfaces, first with System Center Virtual Machine Manager 2012, and later with Windows Server 2012. One goal was to make sure the most important scenarios for these products will work well with our interpretation of the specifications. And another objective was to allow for extensibility so that anything exposed through SMI-S could be leveraged on Windows without having to redo the considerable plumbing for interacting with a remote WBEM client. I’ll present more about this “passthrough” capability in a future blog.

What it means for Windows customers

As part of our commitment to management standards, Microsoft developed support for SMI-S and now includes this support with VMM 2012 and Windows Server 2012. Working in conjunction with the new storage management interfaces in Windows Server (see the references below), SMI-S enables command line access and easy scripting of many common storage tasks using Windows PowerShell and can also be used with the newly enhanced File and Storage Services role in Server Manager.

SMI-S support is implemented as an optional feature in Windows Server called Windows Standards-Based Storage Management. Keep in mind that it is designed for consolidated storage environments, and although most providers support simultaneous access from multiple clients, it is a best practice to also centralize the management of those devices. It would be unnecessary (and unwise) for the majority of servers to all be managing the same resources individually.

What it means for Hardware vendors

Because SMI-S is an industry standard built on top of other standards and widely implemented in other environments (such as VMware® and AIX®), vendors no longer need to produce proprietary providers for one particular operating system (or release). They can expose all of their product's goodies using one provider, as extensibility is inherent in the object-based approach of CIM. As client applications learn to take advantage of the advanced features, they can do so without resorting to yet another provider model. CIM also has a good backward and forward compatibility story.

Next up…

On Windows Server 2012, once an SMI-S provider is configured for use with the service, the storage will be manageable using the new Windows Storage Management API and the Storage module in PowerShell.

In the next post, I will explain how to get started using SMI-S with Windows Server 2012. Some assembly is required, but you should be up and running in a few minutes.

References

To see more about Microsoft’s commitment to standards and storage management, refer to Jeffrey Snover’s blog on this topic.

System Center 2012 Virtual Machine Manager also uses SMI-S providers. See http://technet.microsoft.com/library/gg610600.aspx for more information about storage automation and VMM.

Refer to the following section on TechNet for more information about the storage cmdlets: http://technet.microsoft.com/en-us/library/hh848705.aspx, or see the Storage Management team blog for the latest information at http://blogs.msdn.com/san.

Learn more about the Storage Management Initiative at http://snia.org/forums/smi.

SMI-S is based on many industry and worldwide (ISO) standards:

CIM, WBEM and WS-Management are the products of the Distributed Management Task Force. For more information, see the DMTF website.

The Service Location Protocol, Version 2 (SLPv2), and Transport Layer Security (TLS, including Secure Sockets Layer or SSL) are protocols from the Internet Engineering Task Force (IETF).

The current specification for Hypertext Transport Protocol (HTTP) is a joint effort by the IETF and the World Wide Web Consortium (W3C).

 

Large Scale Study and System Design for Primary Data Deduplication accepted by USENIX


Microsoft Research (MSR) and the Windows File Server team worked together to build a new Data Deduplication feature in Windows Server 2012. This feature came from 2 years of collaboration with MSR on the design. The development of the architecture and the algorithms we use for deduplication was driven, in part, by analysis of data in a large global enterprise. The USENIX Annual Technical Conference (ATC) was held on June 13-15, and we submitted a Large Scale Study and System Design paper and gave a talk about our findings. The new paper and presentation video have just gone public on the USENIX website.

The paper describes the algorithms used to chunk data, identify unique data chunks using indexes on chunk hashes, and how to scale deduplication resources on large amounts of data, including performance evaluation numbers. The paper and talk give a review of the advanced analysis carried out on the datasets and how the insights were used to determine design points that address the challenges of primary data deduplication. Many of the design decisions for deduplication were made to create a balance of on-disk space savings, resource usage, performance, and transparency. The key feature is that deduplication can be installed on primary data volumes without impacting the server’s regular workload and still offer significant savings.

Overview:

  • A large-scale study of primary data deduplication on 7TB of data across 15 globally-distributed servers in a large enterprise.
  • Architecture overview of deduplication in Windows Server 2012 and the design decisions that were driven by data analysis.
  • How deduplication is made friendly to the server’s primary workload, how CPU, memory and disk IO resource usage for deduplication scales efficiently with the size of the data.
  • Highlights of the innovations that went into the areas of data chunking / compression, chunk indexing, data partitioning and reconciliation.

Primary data serving, reliability, and resiliency aspects of the system are not covered in this paper.

Check out the live video of the talk given by Sudipta Sengupta and Adi Oltean and download the PDF of the paper here: https://www.usenix.org/conference/usenixfederatedconferencesweek/primary-data-deduplication%E2%80%94large-scale-study-and-system

Cheers,
Scott M. Johnson
Program Manager II
Data Deduplication Team

Getting started with SMI-S on Windows Server 2012


In my last blog, I discussed the industry standard Storage Management Initiative (SMI-S) in general terms. Microsoft started supporting SMI-S for storage provisioning in System Center 2012 Virtual Machine Manager, released earlier this year. With the upcoming Windows Server 2012, SMI-S support will be available to all of our server customers. Coupled with the new Storage Management API (SMAPI), which consists of new WMI interfaces and cmdlets, it is possible to manage SAN or direct attached storage in a vendor-independent fashion, and also in a system-independent fashion if you have more than Windows in your datacenter. The new File and Storage Services canvas in Server Manager can take advantage of SMI-S providers, giving you a GUI for managing basic array functionality right out of the box.

SMI-S is based on the DMTF CIM Model and Schema, and as such it can support very complex environments. The SMAPI simplifies that by hiding a lot of the details and relationships that go into the model. Many vendors took part in the formation of SMI-S. Each vendor has to create one or more SMI-S providers for their products and customers need to obtain the providers from their storage vendors. But first, let’s talk about getting SMI-S going on Windows Server 2012.

Adding SMI-S support to Windows Server 2012

The Windows Standards-Based Storage Management Service is an optional feature for Windows Server only, and it communicates with SMI-S providers (in SMI-S speak, it is a "client"). I will refer to it below as the "Storage Service". It is not installed by default, so you will need to add the feature by using either Server Manager or a PowerShell cmdlet.

The diagram below shows the full architecture of the new Storage Management infrastructure in Windows. This blog focuses mostly on the lower left area highlighted by the red box.

Figure 1 Architecture of Storage Management on Windows Server 2012

The Storage Service also provides other functionality, such as extensive (and secure) caching of management objects surfaced by one or more SMI-S providers, handling of dynamic events through SMI-S indications, managing asynchronous tasks, and secure credential management. [Windows also supports a different hardware provider model, known as Storage Management Provider (or SMP). This blog only addresses the SMI-S support; the Storage Service translates between SMI-S and SMP for you.]

You should think carefully about where you want to install this service – in a datacenter you would centralize the management of your resources as much as practicable, and you don’t want to have too many points of device management all competing to control the storage devices. This can result in conflicting changes and possibly even data loss or corruption if too many users can manage the same arrays. Multiple levels of permissions can help mitigate this possibility. SMI-S providers also typically manage more than one hardware device so you don’t need to install one provider per array.

Using Server Manager

Server Manager has been redesigned from the ground up for Windows Server 2012. It is now a tool that can manage across many server instances. It is one way that you can install the Windows Standards-Based Storage Management Service.

Server Manager typically opens when you log in to the Windows Server as an administrator. All actions described below must be performed with administrator privilege (Windows may prompt to elevate privilege if necessary).

Figure 2 Server Manager Dashboard

Pick the server you wish to install on to begin the feature installation (here I chose the Local Server but you could install on any other server being managed). Click on Manage and select Add Roles and Features:

Figure 3 Add Roles and Features to the Local Server

Continue with the Wizard, selecting Next for the next three screens:

Figure 4 Select Role-based or feature-based installation

Figure 5 Server Selection

Figure 6 Server Roles

Select Windows Standards-Based Storage Management and confirm the installation:

Figure 7 Server Features

Figure 8 Confirm

Click Install and wait for the Wizard to complete. You could close this window but it shouldn’t take long.

Figure 9 Feature installed

Using PowerShell to enable the storage service

Another (easier) way to install the service is using the Windows PowerShell Add-WindowsFeature cmdlet. Open an Administrative PowerShell prompt and enter

Add-WindowsFeature WindowsStorageManagementService

This will install and enable the service and add three new cmdlets to the system, but the service doesn’t know about any SMI-S providers yet. That will happen when you register providers.

Note: at the time this blog was first posted, cmdlet help was not yet online for the three SMI-S specific cmdlets (Register-SmisProvider, Unregister-SmisProvider and Search-SmisProvider) so you may see an error message when you attempt to Update-Help. The Search-SmisProvider cmdlet will be covered in a later topic; for now I will assume you know the name or IP address of the SMI-S providers you will be using.

SMI-S providers

In general, you will download these from your storage vendor. Each vendor has their own mechanism for distributing, licensing, installing and configuring providers, so it’s difficult to give you generic rules.

The steps are as follows:

  1. Download the SMI-S provider from your storage array vendor.
  2. Install the provider on a Windows or Linux server. It should not be installed on a server running any other SMI-S provider. It can be installed on the same system as the Storage Service, but you should install SMI-S providers in as few places as possible.
  3. Add firewall rules if you installed the provider on a Windows Server.
  4. Change or add account credentials for the provider, as appropriate.
  5. Make any changes to the provider’s properties (if necessary) and restart it.
  6. Add arrays to manage.

Registering Providers

In order to use an SMI-S provider, it must be registered. The register process will do the following:

1) Save the provider information for reuse whenever the service or system restarts

2) Save the credentials for the provider (securely!)

3) Allow adding certificates to the proper store if you are using the recommended SSL (https) communication protocol

4) Perform a basic discovery (level 0)

5) Subscribe for indications – this will be the subject of a later post

To register a provider, use the Register-SmisProvider cmdlet:

Register-SmisProvider -ConnectionUri https://<name or IP address of provider>:<port>

A prompt for provider credentials will appear (you can also script this using a PSCredential). Although the storage service supports HTTP, you are encouraged to use HTTPS with the register cmdlet. This will ensure that the provider’s SSL certificate is properly configured on the machine running the storage service and will give the highest level of security for communications. It is also important for SMI-S indications, which are only delivered using HTTPS.
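
If you want to script the registration rather than answer the prompt, a minimal sketch follows; the URI is illustrative, and the credential parameter name is an assumption to verify with Get-Help Register-SmisProvider:

# Build a credential up front and pass it to the registration.
# The -Credential parameter name is assumed here, not confirmed.
$cred = Get-Credential
Register-SmisProvider -ConnectionUri https://smis1.contoso.com:5989 -Credential $cred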

It's worth noting that permissions for the provider are restricted to the user account you are currently logged in with on the machine running the Storage Service. You can give other users permission by specifying -AdditionalUsers with the Register-SmisProvider cmdlet.

At this point, if the register succeeds, only basic discovery information has been collected (provider and array details, also known as Level 0).

Get-StorageProvider

(Observe the Names returned; Storage Spaces will always be shown but that does not use SMI-S so I won’t discuss it here.)

Get-StorageSubSystem

Storage pools and volumes will not be discovered yet. To do a deeper discovery, execute this cmdlet:

Update-StorageProviderCache -Name <name from above> -DiscoveryLevel Level2

I want to reiterate that the Storage Service has an extensive, multi-level cache. Once objects are discovered, they can be operated upon efficiently. But beware: deep discoveries can take a lot of time, especially with high-end hardware which supports tens of thousands of objects! Each level is cumulative so a Level3 discovery also does levels 0-2. I plan to talk more about limiting the scope of the discovery in a more tree-structured approach in a later post.
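
Putting the pieces together, a typical first discovery pass might look like this sketch (it assumes a single registered SMI-S provider):

# Find the SMI-S provider (skipping the built-in Storage Spaces provider),
# deepen the discovery, then browse what the cache now knows about.
$p = Get-StorageProvider | Where-Object { $_.Name -ne "Storage Spaces" }
Update-StorageProviderCache -Name $p.Name -DiscoveryLevel Level2
Get-StorageSubSystem   # arrays surfaced by the provider
Get-StoragePool        # pools appear once Level1 or deeper discovery has run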

To remove the provider and credentials from your system, use the Unregister-SmisProvider cmdlet.

Some other cmdlets

For any cmdlet, you can type -? to get the complete syntax and information about parameter sets or use any of the other PowerShell help features. You can also pipe the output to control formatting or for use as input to another cmdlet.

Some advanced functionality may be limited by the array hardware, the SMI-S provider, and of course, the features you licensed from the storage vendor. The cmdlets below are just a sample of the complete set available through the Storage Management API on Windows Server 2012.

Pools and Virtual Disks

Get-StoragePool

Shows the storage pools on discovered subsystems. You will have to increase the level of discovery using Update-StorageProviderCache before these appear since pools are Level1 objects.

New-StoragePool

Creates a new storage pool from available free physical disks (do a Level3 discovery first).

New-VirtualDisk

Creates a new virtual disk (aka storage volume in SMI-S).

Remove-VirtualDisk

Deletes a virtual disk (and the data on it).

New-VirtualDiskSnapshot

Creates a new writable snapshot of an existing virtual disk.

New-VirtualDiskClone

Creates a new writable clone (appears as a complete copy) of an existing virtual disk.

Masking Operations

Unmasking allows virtual disks to be seen by specific systems and their HBAs or iSCSI initiators. This is the key to sharing arrays with different hosts and allows large-scale storage to be used in a multi-computer environment. Masking is the reverse, hiding a virtual disk. Different arrays have different rules such as allowing virtual disks to participate in a limited number of masking sets, or allowing you control over which target ports can be specified in a set.
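
To show how these operations fit together, here is a hedged sketch of exposing a single virtual disk to one host initiator; the friendly names, IQN, and port address are placeholders, and the exact parameter names should be confirmed with Get-Help New-MaskingSet:

# Expose Vdisk01 to one initiator over one target port (names are placeholders).
New-MaskingSet -StorageSubSystemFriendlyName "MyArray" -FriendlyName "Host1-Set" `
    -VirtualDiskNames "Vdisk01" `
    -InitiatorAddresses "iqn.1991-05.com.microsoft:host1.contoso.com" `
    -TargetPortAddresses "50:06:01:60:3B:A0:11:22"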

New-MaskingSet

Exposes virtual disks to specific initiators (port on a host).

Get-MaskingSet

Displays the discovered masking sets.

Add-InitiatorIdToMaskingSet

Adds additional initiators, where allowed by the hardware.

Add-TargetPortToMaskingSet

Adds additional target ports (some high end arrays support this).

Add-VirtualDiskToMaskingSet

Adds another virtual disk to an existing masking set.

Remove-InitiatorIdFromMaskingSet

Reverse of Add-InitiatorIdToMaskingSet.

Remove-MaskingSet

Removes the masking set, hiding all the virtual disks from all hosts [it does not delete any virtual disks].

References

To see more about our commitment to standards and storage management, refer to http://blogs.technet.com/b/server-cloud/archive/2011/10/14/windows-server-8-standards-based-storage-management.aspx.

The Windows Server 2012 PowerShell Storage cmdlets are used with SMI-S providers as well as native SMP providers.

The previous blog in this series gives an overview of SMI-S.

Learn more about the SNIA’s Storage Management Initiative at http://snia.org/forums/smi.

Using the EMC SMI-S provider with Windows Server 2012 and SCVMM


All providers have slightly different installation procedures and characteristics. This information should help you get started using the EMC provider, which can be downloaded from Powerlink (you will need an account). Make sure you download the latest version (currently 4.4). This information applies for use with either the Windows Server 2012 Windows Standards-Based Storage Management Service or System Center Virtual Machine Manager 2012, and the provider supports EMC Symmetrix VMAX, Clariion CX4 and VNX arrays. Please consult the EMC documentation for the appropriate array firmware levels and also for what platform the provider can be installed on.

Note that an SMI-S provider is assumed to be running somewhere other than the Windows Standards-Based Storage Management Service (I'll call it Storage Service for short). Providers are either standalone server applications (known as proxy providers) or embedded in the array firmware. For EMC, the provider being discussed is a proxy provider, so it needs to be installed on a running system with a supported version of Windows or Linux installed. Since vendors have not yet certified their SMI-S providers to run on Windows Server 2012, this blog will discuss getting the EMC provider running on Windows Server 2008 R2.

Download the EMC provider

The EMC SMI-S provider is a part of the “Solutions Enabler with SMI” package which you can download from Powerlink (requires registration); search for “SMI-S Provider” once you log in. There are provider versions available for Windows and Linux, and either can be used. Make sure you select the latest 32-bit or 64-bit version, and Windows or Linux version, as appropriate.

Install the provider

Installation is straightforward: just run the installer you downloaded as an administrative user. Use all the defaults and make sure you only select the "Array provider", as Windows does not use the Host providers and installing them may create conflicts with other software. We assume throughout this document that the provider is running on a different system than the Storage Service. It may be possible to install it on the same system once vendors support the Windows Server 2012 platform. Be aware that installing multiple providers may not be supported or may require additional configuration and non-standard port numbers.

Add firewall rules

If the provider runs on a Windows Server, you will need to configure the firewall to allow SMI-S and SLP traffic. Please do not turn off the firewall. The general rules below can be made stricter by eliminating HTTP support (port 5988) and by specifying the specific CIM Server application (ECOM) for ports 5988-5989, and the SLP server (SLPD.exe) for port 427, as the process for the rules. You can also limit which systems can manage through the provider by limiting the firewall to passing only traffic from those IP addresses. I am assuming the firewall is in its default state (blocks incoming/allows outgoing traffic).

These command lines must be run from an administrative account and will work for Windows Server 2008 R2. You can also use the firewall control panel, or the equivalent PowerShell cmdlet if the provider was installed on a Windows Server 2012 system.

netsh advfirewall firewall add rule name="SLP-udp" dir=in protocol=UDP localport=427 action=allow

netsh advfirewall firewall add rule name="CIM-XML in" dir=in protocol=TCP localport=5988-5989 action=allow

Change the default password and add additional users, if required

The EMC provider security can be configured through a webpage; open https://localhost:5989/ecomconfig. This is what you will see first:

At this point you have several options, but first a word about "self-signed" certificates. All SMI-S providers create or copy self-signed certificates to the system when they are first installed, which means the certificate is not issued by a trusted party such as VeriSign. These certificates can be used as-is if that is consistent with your company's security policies AND you trust the host where you installed the provider. You also have the option to use more formally signed certificates, that is, certificates that "chain" to a trusted Certificate Authority (CA); a full discussion of this can be found on the web. If you stay with the self-signed certificate, your options right now are to a) "Continue to this website", or b) change to using the fully-qualified domain name (FQDN) of the server instead of localhost and add the certificate to the local certificate store, which tells IE that you trust this site. This only affects the use of the configuration page below; the storage service will prompt you for action when you register the SMI-S provider the first time.

Login with the default account (admin) and password (see the EMC documentation) and proceed to change the password, add an additional user or make any other changes to the security. Note the user name and password since you will need this when you register the provider for use with the Storage Service.

While you are here, there is one more change that we will need to make. Click on the Dynamic Settings link from the ECOM Administration Page and locate the setting for SSLClientAuthentication. Select None, check the Persist box, then click on Apply – this avoids a potential problem with SSL negotiations without lowering the security level. You will not need to restart the ECOM service if you modify parameters on this page.

 

Provider configuration changes for VMM

We need to adjust some settings for the EMC provider in order for it to work best with System Center 2012 Virtual Machine Manager. Navigate to C:\Program Files\EMC\ECIM\ECOM\conf and open the file Security_settings.xml with Notepad or another text editor.

 Change

<ECOMSetting Name="ExternalConnectionLimit" Type="uint32" Value="100"/>

to

<ECOMSetting Name="ExternalConnectionLimit" Type="uint32" Value="600"/>

 Change

<ECOMSetting Name="ExternalConnectionLimitPerHost" Type="uint32" Value="100"/>

to

<ECOMSetting Name="ExternalConnectionLimitPerHost" Type="uint32" Value="600"/>

Save the file and restart the provider. You can use the Services control panel, or from a command prompt:

net stop ecom

net start ecom

 

I also modify my PATH environment variable to include the EMC command line utilities:

set PATH=%PATH%;"C:\Program Files\EMC\SYMCLI\bin";"C:\Program Files\EMC\ECIM\ECOM\bin"

(Or use the Advanced System Settings property page so this takes effect every time you open a command prompt.)

Adding arrays to manage

Depending on which EMC arrays you have, the process for managing them with SMI-S will be slightly different. The Symmetrix product line requires a direct, in-band connection using either Fibre Channel or iSCSI. This also requires creating "gatekeeper" LUNs on the array and unmasking them to the system where the provider is running, and for Fibre Channel, configuring the zoning as well. CLARiiON and VNX can be managed either in-band or out-of-band using an Ethernet connection.

The EMC Provider Release Notes contain full information about adding arrays, including zoning when you use Fibre Channel for in-band management. See the Post-Installation Tasks section for more information.

Indications

Indications are asynchronous events that come from the provider, informing a “listener” such as the storage service of events that may be of interest. I will discuss indication support in a later blog post. At that time, I will also discuss anything on the provider that you need to change or be aware of to support indications.

 

 

Understanding DFSR Dirty (Unexpected) Shutdown Recovery


Let me touch on an interesting topic in this blog post: "dirty shutdown" recovery in DFS Replication (DFSR). A TechNet blog post reviewing "What is new in Windows Server 2008" includes a good description of the DFS Replication dirty shutdown recovery process. Related system event log entries, e.g. Event ID 2212, refer to the same event as "unexpected shutdown," and I will also refer to it as unexpected shutdown for the rest of this blog post. This post enhances the existing description of unexpected shutdown and adds new details about the current behavior as of Windows Server 2012.

Introduction

The DFS Replication service maintains state information pertaining to the contents of each replicated folder in a database on the volume that hosts the replicated folders. In this database, DFSR keeps track of file versions and other metadata that enables it to function as a multi-master file replication engine and to automatically resolve conflicts. The DFS Replication service is a consumer of the NTFS USN (Update Sequence Number) journal, which is a journal of updates to files and folders maintained by NTFS. Entries in this journal notify the DFS Replication service about changes occurring to the contents of a replicated folder. These notifications thus end up triggering replication activity. Every unique change occurring on the file system relating to a folder replicated by DFSR triggers the creation or update of a record in the DFSR database as well. DFSR also stores a “USN checkpoint” in the DFSR database to keep track of the last USN journal entry that it has consumed.

Sometimes the database and the file system can get out of sync. Examples of such scenarios are abrupt power loss on the server, or the DFSR service being stopped abnormally for any reason. Another example is the volume hosting a replicated folder losing power, getting disconnected, or being forced to dismount. These exception conditions result in an unexpected shutdown of the DFSR database, as any of them can cause inconsistencies between the database and the file system. DFSR is designed to automatically recover from these situations starting with Windows Server 2008, and this behavior continued through Windows Server 2008 R2.

Recent Changes

In January 2012, Microsoft released a hotfix for Windows Server 2008 R2 that made the following changes (this is now also the default behavior of Windows Server 2012):

1. Change the default unexpected shutdown handling policy from auto-recovery to manual recovery, so that the default behavior requires manual user approval to proceed with unexpected shutdown recovery. This was done to allow a user to take a backup of the existing replicated folders on the volume before the recovery operation.

2. Support manually resuming the unexpected shutdown recovery and replication of the replicated folder(s) in a volume, using a WMI method. The command(*) to do that is:

wmic /namespace:\\root\microsoftdfs path dfsrVolumeConfig where volumeGuid="<volume-GUID>" call ResumeReplication

3. Support setting the default behavior back to automatic unexpected shutdown recovery, as in Windows Server 2008. The command for this:

wmic /namespace:\\root\microsoftdfs path dfsrmachineconfig set StopReplicationOnAutoRecovery=FALSE

(*) One way to retrieve <volume-GUID> is via: dfsradmin RF List /RgName:<Replication Group-Name> /Attr:All

Note, however, that the helpful new event log entry (Event ID 2213) includes the entire command line, which you can simply copy and paste, e.g.:

To resume the replication for this volume, use the WMI method ResumeReplication of the DfsrVolumeConfig class. For example, from an elevated command prompt, type the following command:

wmic /namespace:\\root\microsoftdfs path dfsrVolumeConfig where volumeGuid="0D9806D1-AC1A-11E1-98C3-00155D4FBB00" call ResumeReplication
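If you prefer Windows PowerShell over wmic, a rough equivalent of the same WMI call looks like this (using the volume GUID from the sample event above):

$vol = Get-WmiObject -Namespace "root\microsoftdfs" -Class DfsrVolumeConfig -Filter "VolumeGuid='0D9806D1-AC1A-11E1-98C3-00155D4FBB00'"

$vol.ResumeReplication()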

Clustering considerations

It turns out that the new recovery behavior also has an important implication for DFSR failover cluster deployments.

Let’s say in a 2-node cluster with nodes N1 and N2, you have set up a clustered file server ContosoFS and added a DFS replicated folder on that file server. This creates a DFS replicated folder resource, as part of the ContosoFS resource group – called a ‘clustered role’ in Windows Server 2012. For more details on DFSR clustering deployment, refer to Mahesh’s old blog post which is a pretty good read. At any given time, ContosoFS can be owned by only one node (say N1), which means DFS Replication service for the replicated folder also runs on N1.

Let’s say you move ContosoFS in a planned way to N2 – by using Failover Cluster Manager, or the Failover Cluster Windows PowerShell cmdlets. DFS Replication service also fails over to N2 in this case; however since this is a graceful failover, there is no DFSR unexpected shutdown recovery here. ContosoFS is now owned by N2.

Let’s say instead of a graceful failover, you now powered off N2. ContosoFS and the DFS Replication service failover to N1 as expected. However, note that this is an unplanned failover. In this case, DFS Replication Service detects an unexpected shutdown of the database and logs the new event 2213 cited above and then waits for a manual intervention (the new default behavior). So unless you monitor for the new event and resume the replication, your clustered DFS replicated folder is not highly available, because it will remain offline waiting for the manual initiation of the unexpected shutdown recovery operation. If instead you prefer the DFS replicated folder to auto-recover and be automatically highly-available in unplanned failover scenarios, i.e. you want the Windows Server 2008 behavior, you should change default behavior to perform auto-recovery on each one of the cluster nodes – N1 and N2 in this example – and restart DFS Replication service. This would be a one-time configuration step.

So how does the unexpected shutdown recovery process work?

Let’s discuss how unexpected shutdown recovery process works, and specifically why one might not want the auto-recovery behavior.

When the DFS Replication service is asked to resume replication and perform unexpected shutdown recovery, either via auto-recovery or via manual intervention, it performs the following steps:

1) The first thing DFSR does is validate whether the "USN checkpoint" in the database is valid by comparing the database against the referenced USN record in the journal. If the checkpoint itself is invalid, each entry for each file and folder in all replicated folders on the volume is examined for correctness by comparing the entry to the corresponding file or folder on the volume, so this could take some time, depending on how many files are in the replicated folder(s). If, on the other hand, the checkpoint is valid, there is less need for cleanup: DFSR simply deletes the database entries after the last checkpoint because they are not reliable.

2) DFSR marks each of the file and folder database records with the "Initial Sync" fence value. Then it solicits information about all changes that may have happened ("version vectors" in DFSR parlance) to the files in the replicated folder(s) on the affected volume from each of the replication partners. There are two possible outcomes in this phase:

a. If the hash value for the local file matches the value returned by a remote replication partner for the same file, it means that the local file version is correct. In that case, DFSR clears the “Initial Sync” fence value to “Default” in the local DFSR database for that file.

b. If the hash value for the local file does not match the value returned by a remote replication partner for the same file, it means that the local file version is not correct. The remote data is always considered authoritative in this case. So DFSR moves the local file to <ReplicatedFolderPath>\DfsrPrivate\ConflictAndDeleted folder, and installs the remote version of the file in its place.

3) At the end, there may still be some files and folders with the "Initial Sync" fence value. This is the subset of files that exist only on the local machine and are not known to the remote replication partners. DFSR moves this subset to the <ReplicatedFolderPath>\DfsrPrivate\PreExisting folder. Finally, DFSR also cleans up entries in the local database that do not have valid hash values, and resets the DFSR volume management state out of unexpected shutdown.

Standard DFS replication mechanics resume at this point.

Let’s summarize the two most important resulting implications from the previous discussion:

a) The local copy of replicated folder data for the server going through unexpected shutdown recovery is never considered "authoritative"; remote data is considered more trustworthy wherever a local file version does not match that of a replication partner.

b) At the end of an unexpected shutdown recovery, a local file or a folder may end up in one of four states:

1. Left just where it was, if the local file or folder is identical to that on the remote replication partner, OR,

2. It may move to DFSR-private ‘Pre-existing’ folder, if the local file or folder does not exist on a remote replication partner, OR,

3. It may move to DFSR-private ‘Conflict And Deleted’ folder, if the local copy is different from that on the remote replication partner, OR,

4. It may move to DFSR-private ‘Conflict And Deleted’ folder and then get purged. This is due to the quota size and high watermark configured on the ‘Conflict And Deleted’ folder. The “least recently used” content in ‘Conflict And Deleted’ folder is purged when the high watermark is reached for the folder, until the folder size drops down to configured low watermark. The DFS Replication service enforces the configured size and the watermarks on this folder as a machine-local activity; this does not involve remote replication partners. You can read more about this in the TechNet article Staging folders and Conflict and Deleted folders.

These possible states (#2 through #4 above) are precisely the reason why the new Windows Server 2012 default behavior provides an opportunity for you to take a backup of the local data before unexpected shutdown recovery goes ahead. Depending on your application scenario, you are the best judge to determine if the local data is in fact accurate and therefore has business value.

Hope this discussion helped you understand DFS Replication, particularly the unexpected shutdown recovery mechanics, better.

Customizing the OEM Appliance OOBE in Windows Server 2012


Windows Server 2012 includes a new feature called the OEM Appliance OOBE that enables OEMs and enterprise IT to rapidly deploy standalone servers or 2-node failover clusters. One of the design goals was to complete an entire deployment of a failover cluster in less than 30 minutes. The feature is integrated with the "out-of-box experience" (OOBE) in Windows Server and adds a Windows Presentation Foundation (WPF)-based application called Initial Configuration Tasks (ICT), a launch-pad full of tools that guide IT professionals through the tasks required to quickly deploy servers.

Here is what a 2-node Windows Storage Server cluster configuration might look like when deployed into a mixed environment: 

clip_image002

The ICT application is customizable by using an XML file (OEMOOBE.XML), and anyone can create an XML file with the exact set of tools required to deploy a particular configuration. The OEM Appliance OOBE feature was first released in Windows Storage Server and now makes its debut in Windows Server 2012. It is included in Windows Server 2012 Standard and Windows Server 2012 Datacenter, and in both editions of Windows Storage Server 2012 (Standard and Workgroup). At the end of this blog post, I have included a script that I use to set up a basic installation of the OEM Appliance OOBE.

To install the new feature, you can run the following dism.exe command at an elevated command prompt (cmd.exe):   

dism /online /enable-feature /featurename:OEM-Appliance-OOBE

This will install the binaries into the \Windows\System32\OEMOOBE directory and set the application to start after first boot.
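On Windows Server 2012, the DISM PowerShell cmdlets provide a rough equivalent to the dism.exe command above:

Enable-WindowsOptionalFeature -Online -FeatureName OEM-Appliance-OOBE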

To build a customized operating system (OS) image using the OEMOOBE, you need these items:

  1. Installation of all the relevant roles for the deployment (Clustering, Hyper-V, File and Storage Services, Data Deduplication or whatever features are required)
  2. An unattended.xml file that includes:
  • An admin password and a login count = 1
  • A first-logon command that runs a network adapter-renaming Windows PowerShell script (renamenetworkconnection.ps1) that stamps the network adapters with friendly names based on PCI bus location. Customers enjoy seeing the port labels match the UI in the system. I recommend using a workload/color-coding schema that matches the outside of the server; for example, I like using "Green – Public Network" inside the OS and having a green port on the back of the server. This makes it easy for users to quickly understand how to wire it together. (A minimal sketch of such a rename script follows this list.)
  • A NIC.config file that identifies the network adapters to be stamped by using the rename script mentioned above. The NIC.Config file looks something like the following. Note how I have identified the network adapter by PCI bus location and made user-friendly names for each interface. These strings will appear in the Windows networking control panel. Also notice how you can localize the entries for different markets by using different language ID tags:
           
    clip_image003
  • A customized OEMOOBE.xml file to define the ICT task list for a particular deployment: (example of the xml file follows)
    • Add custom tasks or remove un-needed ones
    • Add branding and deployment-specific software
    • Insert prescriptive content guidance for the specific storage configuration and wizards needed
    • Or create an entire appliance-specific section (known as a “Task Group”)
      • Links to configuration manuals and product information
      • Opportunities to purchase more storage
      • Links to OEM customer support
  • Resource files (.resx) will need to be updated if you are adding custom text. There are localizable versions available for all 19 languages that Windows Server 2012 supports.
  • After the OS image is installed, you can run Sysprep.exe /generalize to generalize the installation so that the image can be used on different servers. Immediately following the shutdown, you boot the reference computer by using a WinPE DVD image and capture the OS partition into an image file (creating a .WIM file) using dism.exe. After you have the image file, you can deploy the image by booting into WinPE on another similar system and use dism.exe to lay down the files on the boot volume. Alternatively, you could create a DVD image that can be installed by using setup.exe from the original OS media by making a copy of the media and replacing the install.wim file (located in the \Sources folder). OEMs usually license and use WinRE for their Windows recovery image, which is WinPE plus additional recovery tools.
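    Here is that minimal rename sketch, assuming the cmdlets in the Windows Server 2012 NetAdapter module; the PCI location values and the property names used for matching are my assumptions, and only the script's purpose and the color-coded name come from this post:

    # Match an adapter by its PCI bus location, then stamp it with a friendly name.
    $hw = Get-NetAdapterHardwareInfo | Where-Object { $_.Bus -eq 2 -and $_.Function -eq 0 }
    Rename-NetAdapter -Name $hw.Name -NewName "Green – Public Network"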

    After the image is deployed to a target system, it is ready to ship. The end user will open the box, “rack it and stack it,” and then boot the system(s).

    The first set of questions includes a handful of screens called "Windows Welcome":

    • Product key (if one was not specified in the unattend.xml)
    • Region and language preferences
    • Keyboard layout
    • EULA acceptance (which could include both a Microsoft and an OEM-specific EULA)

    The settings are then pushed down to the other node, and after the system boots into Windows, the configuration application appears.

    clip_image004


    The Initial Configuration Tasks (ICT) application guides users through these tasks:

    1. Activate Windows (OEMs usually pre-activate server appliances and remove this task).
    2. Set the time zone and current time.
    3. Configure network interfaces and IP addresses. When in a cluster profile, the ICT displays the networking adapters and configuration UI for both systems.
    4. Domain Join Wizard:
      • Create the cluster management name (if using the cluster profile)
      • Set computer name(s). In a cluster profile, the default appends "-N1" and "-N2" to the cluster management name
      • Join the domain
      • Change the local administrator password(s)
      • Add domain user(s) to the administrators group
    5. Enable Automatic Updates.
    6. Turn on Windows Error Reporting.
    7. Join the Customer Experience Improvement Program.
    8. Storage provisioning: there are 4 different links that can be used to streamline storage provisioning. OEM OOBE deployment designers should select only the tools they need and hide the other tasks.
      1. Create iSCSI connections by using the built-in iSCSI Initiator. These tasks are enabled by default in the cluster profile. This is for deployments that have an iSCSI Target for the backend storage.
      2. The Create a Storage Pool Wizard enables storage arrays that support SMP or SMI-S, or a simple JBOD, to be used to create a virtualized storage layer through the new Storage Spaces feature.
      3. The Create Virtual Disks and Volumes Wizard walks users through virtual disk creation and immediately goes into partitioning and formatting volumes. This wizard should only be used if you have a storage subsystem that supports creating storage pools in Windows as outlined above.
      4. The Create Volumes Wizard goes directly into volume creation and formatting. This wizard entry point is especially useful if you are using traditional RAID and not an SMP/SMI-S or Storage Spaces pool.
    9. Cluster validation and creation: this task verifies that you have an appropriate shared storage volume to be used as a disk witness for quorum voting, that the cluster is correctly cabled, and that the shared storage supports persistent reservations and can survive a failover event. The wizard then creates the cluster and configures both nodes.
    10. Cluster-Aware Updating (CAU) enables a new cluster patching service that keeps the cluster updated without taking it down, by intelligently patching systems and monitoring progress while orchestrating the update process for all nodes.

    Two-node clusters

    Following is an example of a customized ICT experience for a two-node cluster that uses an SMP or SMI-S provider that supports RAID 6 and thin provisioning in the storage array. Note the following customizations:

      • Contoso logo at the top and Contoso NAS in the header text
      • Customized Contoso section with registration, links to make storage purchases and product documentation
      • Custom storage provisioning section with prescriptive guidance for how to configure the storage.
      • I removed the iSCSI Initiator links because I am not using an iSCSI array in this example.

    clip_image005

    Customization sample

    To create the customized section included in the preceding picture named “Contoso NAS Registration and Product Information,” I added a little bit of XML to two files to include a new task group and three tasks in the group:

    image 

    image

    Now you know how easy it is to customize the XML to create special sections in the OEM Appliance OOBE. If you are doing a localized deployment, you would add localized strings to each of the locale-specific XML files, such as OEMOOBE.zh-CN.resx if you want to support Chinese (Simplified).

    How it works for the customer

    When identical images are loaded onto two systems that will be used for a cluster, the OEM Appliance OOBE sets up the machines so that either system can be used as the 'first node.' After the user starts configuring the first node, all of the settings can be configured from that console; you never have to visit the second node.

    When both nodes of the cluster first power up:

    1. The boot loader makes a call into the normal OOBE, which is intercepted by the OEM Appliance OOBE, and the networking stack is enabled so that the servers can communicate. The regular Windows Welcome screen is then displayed on both systems.
    2. After a node is selected by the user, they choose region, language, and keyboard layouts and accept the EULA as part of the Windows Welcome UI stage. We use automation to capture all the settings to an XML file and copy it to the other node of the cluster.
    3. The OOBE then launches a discovery operation to find the IP address of the second node over the internal network that was identified by using the NIC.Config file where the interface has the isClusterPrivate="true" tag attached to the NIC.
    4. After the IP address is identified, the selections made by the user are pushed down to the second node and registry keys are set so the nodes remember where they are in the setup process.
    5. After the other node of the cluster is found and set up, the systems boot into Windows and the ICT is displayed on both nodes:
    • Node 1: The ICT indicates that the cluster nodes are connected and ready to be configured.
    • Node 2: The ICT tells the user to go back to the first node to finish configuring the cluster.


    During the ICT we use several technologies to make it all work:

    1. When you configure the networking adapters or iSCSI initiators, we use RemoteApp technology to open the iSCSI initiator UI or the networking control panel (ncpa.cpl) directly on the second node.
    2. When changing the time zone or other global settings, we use Windows PowerShell remoting to synchronize both nodes.
    3. When launching storage provisioning wizards, both system names are passed into the wizards so they can see and create shared storage for use by the cluster.
    4. When launching the cluster validation wizard, we add an additional verification that there is a quorum disk setup so that the two-node cluster is configured to use a witness disk.

    Sample script I use to configure a two-node cluster setup by using the OEM Appliance OOBE feature:
    This script can be used as a template to start the process for new deployments. After you customize the preceding files, add them to the installation before your final sysprep command.

    InstallOOBE.BAT

    REM (Enable OEMOOBE and setup a 2-node cluster profile)

    REM ****enable WinRM

    powershell.exe -command "Set-WSManQuickConfig -Force"

    REM ****enable powershell remoting

    powershell.exe -command "Set-ExecutionPolicy RemoteSigned"

    REM ****Install failover clustering****

    powershell.exe -command "Add-WindowsFeature Failover-Clustering -IncludeManagementTools"

    echo %errorlevel%

    echo Failover clustering feature install complete

    REM ****Install File Services****

    powershell.exe -command "Add-WindowsFeature File-Services"

    echo %errorlevel%

    echo File-Services feature install complete

    REM **** Set registry keys for automatic discovery: Password must match the password used in the unattend file.

    reg add "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\OEMOOBE" /V RunDiscovery /t REG_SZ /d 1

    reg add "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\OEMOOBE" /V Password /t REG_SZ /d abc_123!

    REM ****Suppress HA File Server -> removes the automatic creation of an HA file server and hides the check-box in the wizard.

    REM reg add "HKLM\Software\Microsoft\OEMOOBE" /v SuppressHAFileServer /t REG_SZ /d 1 /f

    REM ****Install Remote Desktop and enable remote management from any version of Windows

    netsh advfirewall firewall set rule group="remote desktop" new enable=Yes

    reg add "HKLM\System\CurrentControlSet\Control\Terminal Server" /v fDenyTSConnections /t REG_DWORD /d 0 /f

    reg add "HKLM\System\ControlSet001\Control\Terminal Server\WinStations\RDP-TCP" /v UserAuthentication /t REG_DWORD /d 0 /f

    REM ****Install the OEM-Appliance-OOBE feature

    dism /online /enable-feature /featurename:OEM-Appliance-OOBE

    REM ****Copy in custom unattend file.

    copy d:\unattend.xml c:\windows\system32\sysprep

    REM ****Sysprep

    %windir%\System32\Sysprep\sysprep.exe /oobe /generalize /reboot /unattend:c:\windows\system32\sysprep\unattend.xml

    Exit /b

     

    Cheers,
    Scott M. Johnson
    Program Manager
    Windows File Server Team


    DFS Namespace Scalability Considerations


    In this blog post, let me touch on a topic that I frequently get questions about from customers. Most namespace scalability questions are along one of the following lines:

    • I want to put x number of DFS folders in one DFS namespace; does this work?
    • Does it matter if I put these x DFS folders into one namespace versus multiple namespaces?
    • What factors should I consider in designing a scalable DFS namespace?

    When you start thinking about scalability of your DFS Namespace with the desired number of DFS folders, you should consider at least the following four critical questions:

      1. Is my namespace server resourced to do a full sync on the namespaces without bringing itself to its knees?
      2. Can my DFS namespace keep up with the throughput/frequency of operations performed by management applications?
      3. Can my clustered DFS namespace fail over quickly enough? (This applies only to stand-alone namespaces.)
      4. Can my namespace server come up quickly enough after a reboot?

      Let us discuss each of these questions in turn. I suspect most of you reading this blog post have probably used DFS namespaces for a while, so the namespace concepts and terms in the following discussion should be very familiar. In case you need to refresh your memory, Overview of DFS Namespaces is a good one to refer to. In the rest of the blog post, unless otherwise stated, I will use the term "DFS folder" to represent a DFS folder with one or more folder targets (as opposed to a DFS folder that has no folder targets of its own, but simply acts as a container for other DFS folders).

      1. Is my namespace server resourced to do a full sync on the namespaces?

      What is Full sync? First, let us briefly review key related concepts. The DFSN service performs two kinds of “sync” operations for a domain-based namespace: Full sync and Periodic sync. For a stand-alone namespace, the DFSN service performs just a Full sync. The objective behind any sync operation is to synchronize the “working” namespace metadata on the DFS namespace server with the most recent authoritative information. The “working” metadata includes both the in-memory caches and the on-disk folder structure on the root SMB file share.

      Let’s look at the easier one first. For domain-based namespaces, Periodic sync – also known as the hourly sync as that’s the default – looks for and performs delta sync for changed portions of the namespace metadata from the DC. Periodic sync is usually a faster operation: when there are no changes, the periodic sync is effectively a “no op”.

      A Full sync may happen by itself under certain conditions, such as when a management operation on the namespace (e.g. adding a folder path) fails and the DFSN service decides that it is best to fully synchronize its working data set with the latest namespace metadata. That metadata comes from the Active Directory domain controller (DC) for domain-based namespaces, and from the local registry for stand-alone namespaces.

      Full sync by its very nature is a potentially time-consuming and resource-intensive process because it synchronizes all the namespace metadata from the DC. While a Full sync is in progress, all other management operations on the namespace stall until the Full sync completes. A Full sync operation also causes the DFSN service to (re)create all the reparse points for each of the DFS folders on the SMB root file share, and to re-apply all the previously configured ACLs on the reparse points. So a Full sync can cause spikes in network, CPU, memory, and disk resource usage on the namespace server. In a large number of deployments, DCs tend to double as namespace servers, so you should carefully monitor and understand these resource usage spikes before you go into production. Windows Server 2012 includes a key performance optimization in that it does not recreate reparse points if valid reparse points already exist.

      How do I confirm the server is sized for Full sync? The best way to confirm that your namespace server is properly resourced is to kick off a Full sync manually using the command "dfsutil root forcesync \\RootServer1.Contoso.com\PublicDocs" and monitor the DFS performance counters (see the discussion of performance counter specifics a little further down in this blog post), as well as Resource Monitor for disk, CPU, and network usage. Note that this command works only on stand-alone namespaces and Windows Server 2008 mode domain-based namespaces. Notice also that I am forcing a Full sync not on the namespace, but on the desired namespace root server. Since there can be multiple root servers for a namespace, you have to specify the specific root server that you want to fully synchronize.

      In Windows Server 2012, the DFSN service now logs events to the ‘Applications and Services Logs\Microsoft\Windows\DFSN-Server\Admin’ event channel both at the time a Full sync is initiated (Event ID: 516), and at the time the Full sync is completed (Event ID: 517).
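      A hedged way to pull these events from PowerShell follows; the exact log name string is my assumption based on the channel path above:

      Get-WinEvent -FilterHashtable @{ LogName = "Microsoft-Windows-DFSN-Server/Admin"; Id = 516,517 }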

      Does my DC scale for DFSN Full sync? One interesting knob to consider when thinking about Full sync is root scalability for domain-based namespaces. When you enable it through "dfsutil property rootscalability Enable \\Contoso.com\PublicDocs", DFSN will access the nearest DC instead of the Primary Domain Controller Emulator (PDCE). Additionally, when this property is enabled, namespace servers do not send change notifications to other namespace servers. Root scalability is thus a good option if the namespace is relatively static. Notice further that unlike "forcesync", I am enabling root scalability on the namespace as a whole, not on a single root server. Enabling this mode has the potential to significantly decrease the resource usage on the PDCE to support DFSN Full sync operations. Root scalability is designed for use with a large number of namespace servers, and is especially attractive when the namespace server is running right on a DC itself. However, you should be aware of the potential for transiently stale data when using this. When you make a namespace change, your nearest namespace server sends the namespace metadata changes to the PDCE via LDAP calls. The changes then replicate out to the other DCs via AD replication mechanisms. Until the local DC has fully synchronized with the PDCE, there is a small window for stale data.

      2. Can my DFS namespace keep up with the throughput/frequency of operations performed by management applications?

      Management applications? First off, let me elaborate on what I mean by "management applications" here. The DFS Namespaces feature ships with a set of management tools in Windows, e.g. dfsutil, dfscmd, dfsdiag, the DFS Management UI, and the DFS BPA. We have also added a set of new DFS Namespaces Windows PowerShell cmdlets in Windows Server 2012. Additionally, System Center Operations Manager (SCOM) ships a File Services Management Pack that includes extensive DFS Namespaces monitoring capabilities. All these management tools consume a set of "NetDfs*" APIs under the covers that directly or indirectly generate management traffic on a namespace server. So you want to size your namespace server to keep up with this management operation traffic.

      Let us look at a couple of related API considerations. The NetDfs APIs cover a gamut of capabilities, including add, delete, update, and enumerate operations of various types. Compared to the DFSN service design in Windows Server 2008 R2, which serializes a portion of all these management operations without distinction, the DFSN service design in Windows Server 2012 lends itself to better scalability and concurrency for tasks requiring read operations (e.g. NetDfsGetInfo, NetDfsEnum).

      How do I select the namespace server that I want to test? If your DFSN deployment tends to be relatively static – i.e. you do not add/change/delete your namespaces, namespace folders, or their folder targets all that often – chances are that your namespace server is just fine handling the light management operations traffic. If you tend to make a significant number of namespace metadata changes on a daily or periodic basis, it is very important to simulate that burst of namespace activity and make sure that your namespace server holds up well under the load. For domain-based namespaces with multiple namespace servers, it is a little tricky to identify the specific namespace server that will service the management operations initiated by a client computer, as the best root target is automatically chosen by the DFSN client based on site-costing and other factors. Fortunately, the dfsutil command supports just the required functionality:

      Look up the current “active” root target in the client-local cache for the desired namespace root.

      PS C:\Windows\system32> dfsutil cache referral

      1 entry...

      Entry: \Contoso.com\Public

      ShortEntry: \Contoso.com\Public

      Expires in 0 seconds

      UseCount: 0 Type:0x81 ( REFERRAL_SVC DFS )

      0:[\Server1\Public] AccessStatus: 0xc00000be ( TARGETSET )

      1:[\Server2\Public] AccessStatus: 0 ( ACTIVE )

      In this example, if Server2 is what you want to test, you are already set, as it is the current active target in the DFSN client cache. Go ahead and generate that burst of namespace activity. Let's say you instead want to test Server1; you can change the active target to Server1 using the following:

      PS C:\Windows\system32> dfsutil client property state active \\Contoso.com\Public \\Server1\Public

      Done processing this command.

      Alternatively, you can set the active server by selecting the desired server and clicking the "Set Active" button under the "DFS" tab in the File Explorer properties window for the DFS folder. In either case, you can confirm that the desired namespace server is selected by checking the updated contents of the cache.

      PS C:\Windows\system32> dfsutil cache referral

      1 entry...

      Entry: \Contoso.com\Public

      ShortEntry: \Contoso.com\Public

      Expires in 0 seconds

      UseCount: 0 Type:0x81 ( REFERRAL_SVC DFS )

      0:[\Server1\Public] AccessStatus: 0xc00000be ( ACTIVE TARGETSET )

      1:[\Server2\Public] AccessStatus: 0

      How do I confirm the server is "holding up"? The DFSN service in Windows Server 2012 provides the following DFS performance counter sets:

      1. DFS Namespace Service API Requests. Shows performance information about requests (such as creating a namespace) made to the DFS Namespace service.

      2. DFS Namespace Service Referrals. Shows performance information about various referral requests that are processed by the DFS Namespace service.

      While management operations or a Full sync are in progress, you should confirm that the namespace server is cranking through its work queues and remaining responsive; you can do this by confirming the following:

      a) The DFS Namespace Service API Requests – Requests Processed/sec – <All instances> counter should hold steady or ramp up, and,

      b) The DFS Namespace Service API Requests – Requests Processed – <All instances> counter should steadily increase with time.
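      As a rough sketch, you can sample the first of these counters from PowerShell with Get-Counter; the exact counter path below is my best guess from the counter set names above, so verify it first with Get-Counter -ListSet:

      Get-Counter -Counter "\DFS Namespace Service API Requests(*)\Requests Processed/sec" -SampleInterval 5 -MaxSamples 12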

      Note: In previous releases of Windows Server, you see an additional performance counter set:

      3. DFS Namespace Service API Queue. Shows the number of requests (made using the NetDfs API) in the queue for the DFS Namespace service to process.

      This indicates the number of RPC threads waiting in the queue to acquire internal locks. This lock contention has been eliminated in Windows Server 2012 – yet another reason to upgrade! – so this counter set does not exist in Windows Server 2012. When working with previous releases of Windows Server, confirm that this queue is not continuously growing at the peak level of activity.

      3. Can my clustered stand-alone DFS namespace fail over quickly enough?

      Do I need a stand-alone namespace? First off, I would strongly recommend using domain-based namespaces so you do not have to worry about this aspect of scalability. For a domain-based namespace, multiple namespace servers can be "active" concurrently, so the failure of one namespace server causes a smooth failover to another namespace server with virtually no downtime from the DFSN client perspective.

      How do I test and what do I tweak? Assuming that you have to use a stand-alone namespace for a strong reason, you will then need to use a clustered namespace server for high availability of DFS namespaces. One important consideration is to confirm that on the failure of the primary node, the namespaces can rapidly fail over to a secondary node in the failover cluster. Failover time of a namespace is directly proportional to the number of DFS folders – in this blog post, I specifically mean DFS folders with folder targets – in the namespace. The cluster node taking over the namespace ownership sets the last-modified time attribute and the ACLs on each of the DFS folders, which are NTFS reparse points under the covers – see How DFS Works to learn about how DFSN uses reparse points. Prior to Windows Server 2012, a failover would always cause all the reparse points to be re-created; as stated earlier, Windows Server 2012 includes an optimization to not create reparse points if valid ones already exist. This can take a non-trivial amount of time, potentially running into several minutes for very large namespaces (e.g. tens of thousands of folder targets). Refer to the related Microsoft TechNet guidance on the recommended maximum number of DFS folders for a namespace. However, factors such as server performance, desired failover latency, and the number of ACLs all influence the practical count for the namespace. The biggest positive impact on DFSN failover latency is most likely realized in practice by using solid state disks (SSD) for the volume hosting the namespace root share.

      4. Can my namespace server come up quickly enough after a reboot?

      Should I worry about cold start latency? The answer to this question is substantially similar to that of question #3. After a reboot, the DFSN service re-creates the referral folder structure on the SMB share hosting the namespace. This includes creating the DFS folders with folder targets, and applying any ACLs and Access-based Enumeration (ABE) settings on the reparse points. So this process is also inherently time-consuming. Note that unlike the failover latency, which is specific to stand-alone namespaces, the cold start latency is a key consideration for both domain-based and stand-alone DFS namespaces – albeit a bit less so for domain-based, as you likely have other namespace servers in that case to provide referrals while one is rebooting. For either type of namespace, the key point is that the longer the DFSN service cold start takes, the longer your DFS namespace has one less namespace server to provide referrals. And that has a direct bearing on the high availability of the namespace. Based on the previous discussion, not surprisingly, the number of DFS folders plays a big role in the cold start latency.

      What is the Microsoft guidance and data? Multi-core CPUs and SSD drives offer the best chance to accelerate the DFSN cold start process. This is again something you want to test in your staging deployment up front, as a lot of factors can influence the cold start latency. Jose, a previous DFSN PM, published a detailed blog post with nice charts on DFSN service cold start latencies relative to the number of links for Windows Server 2008 R2, under a set of environmental assumptions. While the core of that discussion applies just as well to Windows Server 2012, we have made some nice performance improvements in this release around parallelizing management operations within a single namespace. Once I have Windows Server 2012 measurement data, I will share the results in a future blog post.

      Finally, let me wrap up this DFSN scalability discussion with a reminder about DFSN compatibility with the new Scale-Out clustered file server in Windows Server 2012. DFSN folder targets can be SMB shares on a Scale-Out File Server; however, the namespace root cannot be hosted on a Scale-Out File Server.

      Hope this discussion added to your understanding of DFS Namespaces, especially regarding their scalability considerations.


      Server for NFS in Windows Server 2012


      In this introductory NFS blog post, let me provide you with an overview of the Server for NFS feature implementation in Windows Server 2012. Native NFS support in Windows Server started with Windows Server 2003 R2 and has evolved over time with continuous enhancements in functionality, performance, and manageability.

       

      Windows Server 2012 takes the support for the Server for NFS feature to a new level. The following are some of the highlights:

       

      1. NFSv4.1 support: Support for the latest NFS version, 4.1, is one of the major highlights of Windows Server 2012. All the mandatory aspects of RFC 5661 are implemented. The NFSv4.1 protocol provides enhanced security, performance, and interoperability over NFSv3.

      2. Out-of-the-box performance: By utilizing the new native RPC-XDR transport infrastructure, optimal NFS performance can be expected right out of the box without having to tune any parameters. This feature provides auto-tuned caches and thread pools along with dynamic resource management based on the workload. Failover paths within the NFS server have also been tuned for better performance.

      3. Easier deployment and manageability: Improvements have been made on many fronts in terms of ease of deployment and manageability. To name a few:

      a. 40+ Windows PowerShell cmdlets for easier NFS configuration and management of shares (see the sketch after this list)

      b. Better identity mapping with a local flat-file mapping store and Windows PowerShell cmdlets

      c. Simpler graphical user interface

      d. New WMI v2 provider

      e. RPC port multiplexer (port 2049) for firewall friendliness

      4. NFSv3 availability improvements: Fast failovers with a new per-physical-disk resource and tuned failover paths within the NFS server. Network Status Monitor (NSM) notifications are sent out after a failover so that clients no longer need to wait for TCP timeouts to reconnect. This essentially means that NFSv3 clients get fast and transparent failovers with reduced downtime.
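      As a quick taste of the cmdlets mentioned in 3a, creating an NFS share is a one-liner. This is a hedged sketch; the share name and path are made up:

      New-NfsShare -Name "export" -Path "C:\shares\export"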

       

      In summary, Windows Server 2012 delivers improvements in ease of deployment, scalability, stability, availability, reliability, security, and interoperability. Shares can be accessed simultaneously over the SMB and NFS protocols. All of this allows you to deploy Windows Server 2012 as a file server or a storage server in even the most demanding cross-platform environments.

       

      We will be following up with a number of detailed blog posts on the features listed above, highlighting the Server for NFS feature and interoperability scenarios in Windows Server. So stay tuned.

       

      By the way, you can reach NFS/Windows-related blog posts using the URL http://aka.ms/nfs

      Data Classification Toolkit for Windows Server 2012 Now Available


      Get the most out of Windows Server 2012 with new features that help you to quickly identify, classify, and protect data in your private cloud!

      The Data Classification Toolkit supports new Windows Server 2012 features, including Dynamic Access Control, and maintains backward compatibility with the functionality in the previous version of the toolkit. The toolkit provides support for configuring data compliance on file servers running Windows Server 2012 and Windows Server 2008 R2 SP1 to help automate the file classification process and make file management more efficient in your organization.

      Download the Data Classification Toolkit for Windows Server 2012

       

      iSCSI Target Storage (VDS/VSS) Provider


      There are two new iSCSI-related features introduced in Windows Server 2012: iSCSI Target Server and the iSCSI Target Storage Provider. My previous blog post discussed iSCSI Target Server in depth; this blog post will focus on the providers.

      Introduction

      There are two providers included in the storage provider feature:

      · iSCSI Target VDS hardware provider: Enables you to manage iSCSI virtual disks using older applications that require a Virtual Disk Service (VDS) hardware provider, such as the Diskraid command.

      · iSCSI Target VSS hardware provider: Enables applications, such as backup applications, that are connected to an iSCSI target to create volume shadow copies of data on iSCSI virtual disks.

      Note: VDS and VSS utilize different provider models. The iSCSI Target storage providers follow the hardware provider model, hence the naming convention.

      Overview

      VSS hardware provider

      VSS (the Volume Shadow Copy Service) is a framework that allows a volume backup to be performed while applications continue to issue I/Os. VSS coordinates among the backup application (the requester), applications such as SQL Server (the writers), and the storage system (the provider) to complete an application-consistent snapshot. The VSS concepts are explained in detail here.

      The iSCSI Target VSS hardware provider communicates with the iSCSI Target Server during the VSS snapshot process, ensuring that the snapshot is application consistent. The diagram below illustrates the relationship between the VSS components.

      image

      One feature added in Windows Server 2012 for the iSCSI Target VSS hardware provider is support for auto-recovery. Auto-recovery allows the VSS writer to modify the snapshot in the post-snapshot phase. This requires the provider to support write operations in the post-snapshot window, before making the snapshot read-only. This capability is also required by the Hyper-V host writer; with auto-recovery support, you can now run Hyper-V host backup against iSCSI Target storage.

      VDS hardware provider

      VDS (the Virtual Disk Service) manages a wide range of storage configurations. In the context of iSCSI Target, it allows storage management applications to manage iSCSI Target Server. For the service architecture, this page provides more details, along with the roles of the providers. The diagram below illustrates the relationships between the VDS components.

      image

      As you may have read in the VDS overview for Windows Server 2012, the VDS service is superseded by the Storage Management APIs; the VDS hardware provider is included for backward compatibility. If you have storage management applications that require VDS, you can continue to run them. For new application development, however, it is recommended to use the Windows Storage Management APIs. Note that iSCSI Target in Windows Server 2012 only supports VDS.
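      For instance, here is a minimal sketch of equivalent discovery using the Storage module cmdlets built on the new Storage Management APIs (assuming a storage provider has been registered):

      PS C:\> Get-StorageSubSystem

      PS C:\> Get-StoragePool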

      To use the VDS hardware provider to manage iSCSI Target Server, you must install the VDS provider on the storage management server. You also need to configure the provider so that it knows which iSCSI Target Server to manage. To do so, you can use the Windows PowerShell commands below to add the server:

      PS C:\> $PrvdSubsystemPath = New-Object System.Management.ManagementPath("root\wmi:WT_iSCSIStorageSubsystem")

      PS C:\> $PrvdSubsystemClass = New-Object System.Management.ManagementClass($PrvdSubsystemPath)

      PS C:\> $PrvdSubsystemClass.AddStorageSubsystem("<remote-machine>")

      Installation

      The iSCSI Target storage providers are typically installed on the application server, as illustrated by the diagram below:

      image

      Windows Server 2012

      To install the storage providers on Windows Server 2012, in Server Manager run the Add Roles and Features Wizard, and then select iSCSI Target Storage Provider (VDS/VSS hardware provider).

      clip_image008

      Alternatively, you can enable it with the following cmdlet:

      PS C:\> Add-WindowsFeature iSCSITarget-VSS-VDS

      Down Level Operating System support

      As you can see from the diagram above, the iSCSI storage providers are typically installed on a different server from the one running iSCSI Target Server. If iSCSI Target Server is running on Windows Server 2012 and the application server is running a previously released Windows operating system, you will need to download and install the down-level storage providers. The download package is available on the Download Center at http://www.microsoft.com/en-us/download/details.aspx?id=34759

      There are three files to choose from:

      · 64-bit package that runs on Windows Server 2008 (Windows6.0-KB2652137-x64.msu)

      · 32-bit package that runs on Windows Server 2008 (Windows6.0-KB2652137-x86.msu)

      · 64-bit package that runs on Windows Server 2008 R2 (Windows6.1-KB2652137-x64.msu)

      If you have an application server running Windows Server 2008 or Windows Server 2008 R2 connected to a Windows Server 2012 iSCSI Target, download the appropriate package, install it on the application server (you can simply follow the installation wizard), and configure the credentials as described in the Credential configuration section below.

      Storage Provider support matrix

      To complete the picture of storage provider and iSCSI Target version support, see the table below:

      clip_image009

      iSCSI Target 3.2 <-> installed on Windows Storage Server 2008

      iSCSI Target 3.3 <-> installed on Windows Storage Server 2008 R2 and Windows Server 2008 R2

      iSCSI Target (built-in) <-> included with Windows Server 2012

      Note:

      1: Storage provider 3.3 on Server 2012 can manage iSCSI Target 3.2. This has been tested.

      2: The Windows Server 2012 down-level storage provider can be downloaded from: http://www.microsoft.com/en-us/download/details.aspx?id=34759

      Credential configuration

      If you have used the storage providers prior to the Windows Server 2012 release, there are a few differences in this release to consider:

      1. The interface between the storage providers and the iSCSI Target service has changed from private DCOM to WMI; therefore, the storage providers shipped previously cannot connect to iSCSI Target Server in Windows Server 2012. See the support matrix to check which version of the storage provider you may need.

      2. The storage providers require credential configuration after being enabled.

      The storage providers must be configured to run with the administrative credentials of the iSCSI Target Server computer; otherwise, you will run into an "Unexpected Provider" error (0x8004230F) when taking any snapshot. Along with the error, you will also find the following error message in the Windows event log:

      Volume Shadow Copy Service error: Error creating the Shadow Copy Provider COM class with CLSID {463948d2-035d-4d1d-9bfc-473fece07dab} [0x80070005, Access is denied.].

      Operation:

         Creating instance of hardware provider

         Obtain a callable interface for this provider

         List interfaces for all providers supporting this context

         Query Shadow Copies

      Context:

         Provider ID: {3f900f90-00e9-440e-873a-96ca5eb079e5}

         Provider ID: {3f900f90-00e9-440e-873a-96ca5eb079e5}

         Class ID: {463948d2-035d-4d1d-9bfc-473fece07dab}

         Snapshot Context: -1

         Snapshot Context: -1

         Execution Context: Coordinator

      What credential to use?

      For the storage providers to communicate remotely with the iSCSI Target service, they require local Administrator permissions on the iSCSI Target Server computer. This may also be a domain user added to the local Administrators group on the iSCSI Target Server.

      To find the accounts with local Administrator permissions, you can open Computer Management and click on the Administrators group:

      clip_image010

      For non-domain-joined servers, use a mirrored local account, i.e. create a local account with the same user name and password on both the iSCSI Target Server and the application server. Make sure the account is in the Administrators group on both servers. (A sketch follows.)
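      For example, a hedged sketch of creating such a mirrored account from an elevated command prompt on each of the two servers (the account name and password here are placeholders):

      net user ProvAdmin Str0ngP@ss1 /add

      net localgroup Administrators ProvAdmin /add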

      UI Configuration (DCOM Config)

      You can also use DCOM Config to configure the credentials as follows:

      1. Open Component Services, open Computers, open My Computer and then open DCOM Config.

      2. Locate 'WTVdsProv' and configure credentials as appropriate

      3. Locate 'WTSnapshotProvider' and configure credentials as appropriate

      Take the WTSnapshotProvider for example:

      1. Locate the provider under DCOM Config container

      clip_image011

      2. Right-click on the provider, and click Properties

      clip_image012

      3. Click the Identity tab, select the This user option, and then specify an account that has local Administrator permissions on the iSCSI Target Server

      clip_image013

      Cmdlet Configuration

      As an alternative, you can also use Windows PowerShell cmdlets to configure the credentials:

      PS C:\> $PsCred = Get-Credential

      PS C:\> $PrvdIdentityPath = New-Object System.Management.ManagementPath("root\wmi:WT_iSCSIStorageProviderIdentity")

      PS C:\> $PrvdIdentityClass = New-Object System.Management.ManagementClass($PrvdIdentityPath)

      PS C:\> $PrvdIdentityClass.SetProviderIdentity("{88155B26-CE61-42FB-AF31-E024897ADEBF}",$PsCred.UserName,$PsCred.GetNetworkCredential().Password)

      PS C:\> $PrvdIdentityClass.SetProviderIdentity("{9D884A48-0FB0-4833-AB70-A19405D58616}",$PsCred.UserName,$PsCred.GetNetworkCredential().Password)

      Credential verification

      After you have configured the credentials, you can verify them by taking a snapshot using diskshadow.exe.

      Open a command prompt and type diskshadow

      At the diskshadow prompt, type

      Add volume c:

      Create

      If the credentials are not configured correctly, it will show the following error:

      clip_image014

      Note: Remember to update the configured credential if the password for the specified account changes on the iSCSI Target Server.

      Conclusion

      I hope this helps you get started using iSCSI Target in Windows Server 2012, or makes for a smoother transition from the previous user experience. If you have questions not covered here, please raise them in the comments so I can address them in upcoming posts.

      NFS Identity Mapping in Windows Server 2012

This document describes the selection, configuration, and usage of the user and group identity mapping options available to Client for NFS in selected versions of Windows 8, and to Server for NFS and Client for NFS in selected versions of Windows Server 2012. It is intended to assist a systems administrator installing and configuring the NFS components within Windows 8 and Windows Server 2012. It describes the available mapping mechanisms and provides guidance on which of those mechanisms to use in common scenarios. Here’s a summary of the items in this post:
      • ID & ID Mapping
      • ID Mapping with RPCSEC_GSS
      • ID Mapping Mechanisms
      • Setting a Mapping for an Identity
      • How to Select a Mapping Method
      • Troubleshooting

      Identity and Identity Mapping

      The methods Windows and NFS use to represent user and group identities are different and are not necessarily directly interchangeable. Identity Mapping is the process of converting from an NFS identity representation to a Windows representation and vice-versa. The following sections briefly describe some representations of identity and then how they are used by the NFS authentication methods.

      Identity Representations

      Windows

      Windows uses a Security Identifier (SID) to represent an account. This applies to both user and group accounts. A SID can be converted to an account name and vice-versa directly.
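For example, this two-way conversion can be seen directly from PowerShell using the .NET identity classes; the account name below is illustrative and must exist to be translated:

$acct = New-Object System.Security.Principal.NTAccount("CONTOSO\nfsuser1")

$sid = $acct.Translate([System.Security.Principal.SecurityIdentifier])   # account name -> SID

$sid.Translate([System.Security.Principal.NTAccount])                    # SID -> account name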

      NFS

      The representation used by NFS can take many forms depending upon the authentication method and the protocol version.

The most widely used method is to represent an identity using a 32-bit unsigned integer, for both users (UID) and groups (GID). This method can be used by both NFS V3 and NFS V4.1. Client for NFS and Server for NFS can convert to and from these identities and a Windows account using a mapping store which is populated with suitable mapping information.

For NFS version V4.1, user and group identities can take the form of “account@dns_domain” or “numeric_id”, where the numeric id is the string form of a UID or GID 32-bit unsigned integer expressed as a decimal number (see RFC 5661 - http://tools.ietf.org/html/rfc5661, section 5.9). For the “account@dns_domain” format, Server for NFS can use this form of identity directly without any mapping. For the “numeric_id” format, Server for NFS uses the configured mapping store to convert this to a Windows account. Client for NFS does not support NFS V4.1 in Windows 8 or Windows Server 2012.

      For both NFS V3 and V4.1, identities can also be encoded in a Kerberos ticket. Although the accessing account can be accurately represented and retrieved from the ticket, this form of identity is only used for authentication of requests and not as a general representation of an identity. So for example, for a READDIR request, the identity of the account making the request may well be encoded as part of the Kerberos mechanism to authenticate the request. However, the ownership of the objects in the reply will make use of UID, GID or “account@dns_domain” depending on the protocol and mapping information.

       

      Authentication Methods

NFS protocols allow for several different authentication mechanisms. The most commonly encountered, and those supported by the Windows Server 2012 Server for NFS, are:

      • AUTH_NONE
      • AUTH_SYS (also known as AUTH_UNIX)
      • RPCSEC_GSS

      The AUTH_NONE mechanism is an anonymous method of authentication and has no means of identifying either user or group. Server for NFS will treat all accesses using AUTH_NONE as anonymous access attempts which may or may not succeed depending upon whether the export is configured to allow them.

The AUTH_SYS mechanism is the most commonly used method and involves identifying both the user and the group by means of 32-bit unsigned integers known as the UID and GID respectively. Special meaning is attached to a UID value of ‘0’ (zero), which is used to indicate the “root” superuser.

      The RPCSEC_GSS mechanism is a Kerberos V5 based protocol which uses Kerberos credentials to identify the user. It provides several levels of protection to the connection between an NFS client and an NFS server, namely

      • RPC_GSS_SVC_NONE where the request identifies the user, and sessions between the client and server are mutually authenticated. This identification is not based on UIDs and GIDs as provided by AUTH_SYS.
• RPC_GSS_SVC_INTEGRITY where not only are the client and server mutually authenticated, but the messages have their integrity validated.
      • RPC_GSS_SVC_PRIVACY where not only are the client and server mutually authenticated, but the message integrity is enforced and the message payloads are encrypted.

      This paper is only concerned with identity and identity mapping. For further details on how to use RPCSEC_GSS with the Windows Server 2012 Server for NFS see “NFS Kerberos Configuration with Linux Client”.

      Identity Mapping with RPCSEC_GSS

When using RPCSEC_GSS to provide authentication, the Windows form of the identity of the user making the request can be obtained directly from the information in the request itself. And for some NFS operations that is sufficient. However, for NFS V3 based accesses, the NFS protocol itself, along with the companion NLM and NSM protocols, makes explicit use of UID and GID values in requests (SETATTR), in the explicit body of the replies (e.g. READDIR), and in the post-op attributes in replies to many requests. For example, when processing a GETATTR request, the reply contains the UID and GID for the object, so the Windows Server for NFS needs to take the Windows style identity associated with the file from the file system and convert it to a UID/GID pair to send back to the client. Similarly, for NFS V4.1 based access, the protocol uses “account@dns_domain” or “numeric_id” strings as account identifiers. So although the use of RPCSEC_GSS provides better security on the connection between the NFS client and server, it does not replace the need for identity mapping.

      Identity Mapping Mechanisms

In order to use the UID and GID values in NFS requests, they need to be converted, or mapped, to identities that the underlying Windows platform can use. Microsoft Server for NFS and Client for NFS provide several options to map identities from NFS requests, each of which has its own advantages and disadvantages:

      • Active Directory

      Best used where established procedures are in use to manage user accounts, where there are many machines using a common set of users and groups and/or configurations where common files are shared using both NFS and SMB protocols (SMB is the standard Windows file sharing protocol)

      • Active Directory Lightweight Directory Services (AD LDS) or other RFC 2307 compliant identity store

      Best used where centralized management of machine local accounts is being used and identity mapping for multiple non-domain joined machines is required.

      • Username Mapping Protocol store (MS-UNMP)

      Legacy (deprecated) mapping solution available as a feature within Windows Server 2003 R2 and the Services for UNIX product. The mapping server itself is no longer supplied but Client for NFS and Server for NFS can be configured to use an existing mapping server. Information on the configuration and use of UNMP based mapping solutions can be found in the Microsoft TechNet article “User Name Mapping and Services for UNIX NFS Support” at http://technet.microsoft.com/en-us/library/bb463218.aspx.

      • Local passwd and group files

      Best used for standalone Client for NFS or standalone Server for NFS configurations where file sharing is performed using both NFS and SMB, and Windows domains are not readily available. Can be used for domain joined machines if required.

• Unmapped UNIX User Access (UUUA) (applies to Server for NFS using AUTH_SYS only).

Best used for standalone Server for NFS configurations where there are no files being shared by both NFS and SMB and where little to no management of Windows identities is required. Can also be used for domain joined servers if files made available via an NFS export are only going to be accessed by Server for NFS.

      There are a number of tools which are involved in managing this mapping information. They include

      • Server Manager UI.
      • Services for Network File System (NFS).
      • Server for NFS PowerShell cmdlets.
      • Command line utility nfsadmin (superseded by Server for NFS PowerShell cmdlets).
      • Standard Windows domain account management and scripting tools.

      PowerShell Cmdlets

      As part of Windows Server 2012, the Server for NFS sub-role has introduced a collection of cmdlets, several of which are used to manage the identity mapping information used by NFS. The cmdlets used to manage identity mapping include

      • Set-NfsMappingStore
      • Get-NfsMappingStore
      • Install-NfsMappingStore
      • Test-NfsMappingStore
      • Set-NfsMappedIdentity
      • Get-NfsMappedIdentity
      • New-NfsMappedIdentity
      • Remove-NfsMappedIdentity
      • Resolve-NfsMappedIdentity
      • Test-NfsMappedIdentity

       

      Active Directory

This mechanism is only available to domain joined machines, both clients and servers. It provides common identities across a large number of machines, and allows files to be accessed by both the NFS and SMB file sharing protocols.

The mechanism makes use of the Active Directory schema updates that add the “uidNumber” and “gidNumber” attributes to user and group accounts for domains running at a functional level of Windows Server 2003 R2 or higher.

Since these are standard fields in the account records, any standard management tools and scripting methods can be used to manipulate them.

The schema for account records in domains running at a functional level of Windows Server 2003 R2 or higher includes the fields “uidNumber” and “gidNumber” for user accounts and “gidNumber” for group accounts. If these fields are defined, the NFS client and server will automatically use the values as the UID and GID fields in NFS request operations and map those values to the associated Windows user and group accounts. Note that in user records, the assigned uidNumber must be unique for each user account; similarly, for group accounts, the assigned gidNumber must be unique across all group accounts. Multiple user records can have the same value for gidNumber. If the PowerShell cmdlets are used to set mapping information for an account, the cmdlets will ensure there are no duplicate UIDs or GIDs. If other methods are used, the administrator should take care to ensure there is no improper duplication.
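For instance, since these are ordinary directory attributes, they can also be set with the standard Active Directory PowerShell module rather than the NFS cmdlets. A minimal sketch, assuming the ActiveDirectory (RSAT) module is installed; the account names and ID values are purely illustrative:

Import-Module ActiveDirectory

# Assign a UID/GID pair to an existing user account
Set-ADUser -Identity nfsuser1 -Replace @{uidNumber=5001; gidNumber=4000}

# Assign a GID to an existing group account
Set-ADGroup -Identity nfsusers -Replace @{gidNumber=4000}

Unlike the NFS cmdlets, this path performs no duplicate checking, so the uniqueness of the uidNumber and gidNumber values must be ensured manually.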

      Managing the mapping information will require domain level administrator privileges, namely those required to manage account attributes.

To set the machine to use domain based mapping, a PowerShell command can be used

       

      Set-NfsMappingStore -EnableADLookup $true

       

Alternatively, Server Manager can be used. This starts the “Services for Network File System” window, where right-clicking the “Services for NFS” node opens the Properties dialog.


      To enable Active Directory based mapping, activate the Active Directory mapping source.
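Once enabled, the configuration can be checked from the same machine with the cmdlets listed earlier; a minimal verification (depending on the configured store, Test-NfsMappingStore can also take parameters identifying the store to test):

Get-NfsMappingStore

Test-NfsMappingStore

Get-NfsMappingStore returns the currently configured mapping store settings, and Test-NfsMappingStore confirms the store can be reached.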

       

      Active Directory Lightweight Directory Services (AD LDS) or Other RFC 2307 Compliant Identity Store.

This mechanism can be used with both domain and non-domain joined machines where the source of identity maps is stored in an RFC 2307 compliant store accessed via LDAP requests. This provides a method of managing user identities and mapping information where access to files is going to be shared by non-NFS applications or file sharing methods, where centralized management is required or preferred, or where there are too many machines to manage individually using local passwd and group files.

      A typical configuration would be where a number of Windows machines running Client for NFS and/or Server for NFS are arranged as a group of machines which share a set of common non-domain based identities. Using AD LDS these can be managed as a single set of identities, much like Active Directory, but without the need for a domain. This makes it simpler to manage identities than using local passwd and group files for any changes to identities and their mappings, since there is just a single location to manage rather than multiple sets of passwd and group files to maintain.

      The mechanism makes use of the RFC2307 schema for accounts where the uidNumber and gidNumber attributes are used to manage the user and group identity maps respectively.

      Managing the mapping information will require the privileges required to manage user and group accounts and their attributes. The specific privileges required will depend on the solution used.

      Note that although AD LDS can be used in a domain environment, there is little advantage in doing so and using the normal Active Directory mapping mechanism will probably prove to be easier to manage. However, using an AD LDS mapping store for domain joined machines can be useful in configurations where the central domain cannot be used as a mapping store for some reason. For example only a limited number of domain accounts require a mapping to be set and the central domain would require elevated permissions to modify the domain accounts directly (i.e. the administrator managing the NFS identity mappings is not the same as the domain administrator). Also, the account name cannot have a “domain\” prefix and so the name must make sense on the machine using the mapping. In practical terms this means that a non-domain joined machine must have a matching machine local account and a domain joined machine must have a matching domain account.

      To install Active Directory Lightweight Directory Services, a PowerShell command can be used

       

      Install-NfsMappingStore -InstanceName NfsAdLdsInstance

       

This command will install and configure an AD LDS instance for use by NFS. The instance can be located on any Windows Server 2012 machine and need not be co-located with any Windows NFS client or server. When the command completes successfully, it will display output similar to the following

       

      Successfully created ADLDS instance named NfsAdLdsInstance
      on server NFS-SERVER, the instance is running on port 389 and the partition is
      CN=nfs,DC=nfs.

       

      To set the NFS client or server to use AD LDS based mapping, the following PowerShell command can be used

       

Set-NfsMappingStore -EnableLdapLookup $true -LdapNamingContext "CN=nfs,DC=nfs" -LdapServer localhost:389

       

Note that the “LdapNamingContext” should be set to the value returned as the partition when the AD LDS instance was created. The “LdapServer” should be set to the machine name and port to be used to contact the AD LDS instance.

Alternatively, the Server Manager can be used to set the NFS client or server to use AD LDS based mapping.

       

      Username Mapping Protocol Server.

      This is a deprecated method of obtaining mapping information but may still be in use in existing environments. The UNMP Server was a feature in the separately installed Services for UNIX product, and in the Services for NFS feature of Windows Server 2003 R2 release.

      The UNMP server provided a source of UID/GID to Windows account mappings which could be used by domain joined machines running Client for NFS and/or Server for NFS. This feature has been largely superseded by the use of Active Directory which provides for better management and scaling.

       

      Local PASSWD and GROUP files.

In simple configurations where mapping between UID/GID and Windows accounts is still required, the mapping information can be provided in UNIX style passwd and group files. These have the same fields and format as conventional UNIX passwd and group files, with the exception that the account name can optionally use the standard Windows account name form <domain>\<name>. The "<domain>\" portion is optional; if absent, the “name” portion indicates a domain account for domain joined machines, or a machine local account for non-domain joined machines. If the machine is domain joined and the account to be mapped is a machine local account, the “domain” portion should be set to either “localhost” or to the name of the machine.
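As an illustration, a minimal matching pair of files might look like the following; the accounts, UID, and GID values are examples only, and the inactive fields carry conventional placeholder values:

A passwd file:

root:x:0:0:root user:/:/bin/sh
CONTOSO\nfsuser1:x:5001:4000:domain user:/:/bin/sh

And the corresponding group file:

rootgroup:x:0:root
nfsusers:x:4000:nfsuser1,nfsuser2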

      The use of local passwd and group files is enabled by placing both files in the %SystemRoot%\system32\drivers\etc directory. That is, the local files mapping feature is enabled if both the following files exist

      • %SystemRoot%\system32\drivers\etc\passwd
      • %SystemRoot%\system32\drivers\etc\group

       

      This mapping method creates an independent mapping store for each machine and is typically used for

      • domain joined machines where a limited number of machines are making use of NFS
      • for standalone machines where a simple identity mapping mechanism is preferred, for example a single workstation accessing existing UNIX NFS servers
      • a set of UNIX workstations accessing a standalone Windows Server for NFS.

      Managing the mapping information will require the privileges needed to create and modify the passwd and group files in the %SystemRoot%\system32\drivers\etc directory. By default the members of the “BUILTIN\Administrators” group have sufficient privileges. It is recommended that these privilege requirements are not changed without a clear understanding of the consequences.

Note that by default, files created in the %SystemRoot%\system32\drivers\etc directory will be readable by all members of the “BUILTIN\Users” group for the computer. If this is considered too great a degree of information disclosure, access can be restricted by adding read access permissions for the virtual accounts of the NFS services, “NT Service\NfsService” and “NT Service\NfsClnt”, to both the passwd and group files, and then removing access permissions for the “BUILTIN\Users” group. This can be achieved as follows

      From a CMD or PowerShell prompt

       

icacls group /inheritance:d /grant "NT SERVICE\NfsService:RX" /grant "NT SERVICE\NfsClnt:RX"

       

      icacls group /remove BUILTIN\Users

       

      icacls passwd /inheritance:d /grant "NT SERVICE\NfsService:RX" /grant "NT SERVICE\NfsClnt:RX"

       

      icacls passwd /remove BUILTIN\Users

       

      Or, via the Properties dialog Security tab for both the passwd and group files.

      To verify that the server is using file based mapping, the “Event Viewer” utility can be used to examine the ServicesForNfs-Server\IdentityMapping channel where the server will write messages to indicate the status of the mapping files.

       

      Unmapped UNIX User Access (UUUA)

      The UUUA identity mapping mechanism is only available to Server for NFS and can only be used when the AUTH_SYS authentication method is being used.

      In situations where there is no requirement to share files accessed by NFS with any other sharing mechanism (e.g. SMB) or local application, then Server for NFS can be configured to directly use the supplied UID/GID identifiers and attach them to files in such a way that the identity information is preserved and is available to an NFS client, but no mapping to any Windows account is required. This is particularly useful for turn-key installations where very little administration is required to set up Server for NFS.

      Server for NFS does this by recording the UNIX style UID, GID and mode information in the Windows file system security fields directly[1]. However, a consequence of this is that access to those files by other Windows applications can be problematic since the security information does not identify any Windows account and so standard Windows access mechanisms are not available.

As the methods used by Server for NFS to capture the UID, GID and mode information result in the generation of a valid security descriptor, there should be no impact on backup applications, provided those applications just copy the data and do not try to interpret or manipulate it in any way.

      This method is typically used for standalone Windows Server for NFS installations where little to no configuration is required, such as a turnkey Windows Server 2012 Server for NFS where the only administration required is the creation of the NFS exports. It should be considered a convenience mechanism only as it provides no security (a consequence of the AUTH_SYS authentication method) and is effectively equivalent to access by an anonymous Windows user. The behavior is similar to many standard UNIX NFS server implementations.

      No privileges are required as there are no mappings to administer.

      Setting a Mapping for an Identity

      Active Directory and Active Directory Lightweight Directory Services

As account objects are standard Windows Active Directory objects, any of the standard tools or scripting methods can be used. The account attributes used are “uidNumber” and “gidNumber” for user accounts, and “gidNumber” for group accounts.

These fields can be manipulated using several utilities shipped with Windows Server 2012.

      The recommended method is to use the Server for NFS PowerShell cmdlets. These cmdlets can be used to query mappings for one or more existing accounts, modify mappings, test mappings and even create new accounts with mappings as a single operation. One of the advantages of using the PowerShell cmdlets to set mapping information is that they help ensure there are no duplicate UIDs or GIDs. Note that the following examples assume that an Active Directory or AD LDS mapping store has already been configured.

      To query the mapping for an existing account

       

Get-NfsMappedIdentity -AccountName root -AccountType User

       

      Or to bulk query all the group accounts

       

      Get-NfsMappedIdentity -AccountType Group

       

A bulk query for all the user accounts is performed in a similar manner, except that the AccountType is set to User. Simple wildcarding of account names can also be used; for example, the following will return all the user accounts with names beginning with the prefix “nfs”.

       

Get-NfsMappedIdentity -AccountType User -AccountName nfs*

       

      To set a mapping for an existing user account

       

      Set-NfsMappedIdentity -UserName nfsuser14 -UserIdentifier 5014 -GroupIdentifier 4000

       

      Or to set the mapping for an existing group account

       

      Set-NfsMappedIdentity -GroupName specgroup -GroupIdentifier 500

       

       

To create a set of new accounts along with their AUTH_SYS UID/GID mappings

 

$secureString = ConvertTo-SecureString "password" -AsPlainText -Force

 

New-NfsMappedIdentity -GroupIdentifier 0 -GroupName rootgroup

 

New-NfsMappedIdentity -GroupIdentifier 4000 -GroupName nfsusers

 

New-NfsMappedIdentity -GroupIdentifier 0 -UserName root -UserIdentifier 0 -Password $secureString

 

New-NfsMappedIdentity -GroupIdentifier 4000 -UserName nfsuser1 -UserIdentifier 5001 -Password $secureString

 

New-NfsMappedIdentity -GroupIdentifier 4000 -UserName nfsuser2 -UserIdentifier 5002 -Password $secureString

 

New-NfsMappedIdentity -GroupIdentifier 4000 -UserName nfsuser3 -UserIdentifier 5003 -Password $secureString

 

New-NfsMappedIdentity -GroupIdentifier 4000 -UserName nfsuser4 -UserIdentifier 5004 -Password $secureString

       

       

An alternative and more basic method is to use “adsiedit.msc” to manipulate the Active Directory objects directly. However, there are few if any safeguards, and extreme caution should be used with this method. This is not the preferred method of setting a mapping.

       

      Username Mapping Protocol Server

      Refer to the Windows Server 2003 R2 documentation ([NFSAUTH] Russel, C., "NFS Authentication", http://www.microsoft.com/technet/interopmigration/unix/sfu/nfsauth.mspx) for configuring mapping information for the identities being used.

      Local PASSWD and GROUP files

As these are standard ANSI text files, any ANSI text editor can be used. The file format is the same as the standard UNIX equivalents, and the only active fields are the username, uid, and gid for the passwd file, and the group name, gid, and group list for the group file.

Note that some of the PowerShell cmdlets can be used to query and test identity mappings set this way, but attempts to set or modify local file based mappings with the PowerShell cmdlets will fail. Note that the following examples assume that the local file-based mapping store has already been configured.

For example, to query the current mapping for the user account “root”

      Get-NfsMappedIdentity -AccountName root -AccountType User

       

      Or to query for the account name with the UID value of 500 

      Get-NfsMappedIdentity -AccountType User -UserIdentifier 500

       

Bulk queries to fetch all the mappings in a single command can also be used. However, the wildcarding options available with the LDAP based mapping stores cannot be used directly; standard PowerShell pipeline filters can be used as an alternative.
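For example, a pipeline filter can stand in for wildcarding; this sketch assumes the returned objects expose an AccountName property, as the LDAP examples above suggest:

Get-NfsMappedIdentity -AccountType User | Where-Object { $_.AccountName -like "nfs*" }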

      How to Select a Mapping Method

Generally there is no single mapping solution that is “correct” for every set of circumstances. Instead, any of the mechanisms can be used based on a set of tradeoffs, leading to a prioritized list drawn up from the available methods.

      Considerations

      • Is the machine Windows domain joined?
      • For servers, is file access going to be shared by both NFS and non-NFS methods (e.g. files also accessed via SMB shares, or other local applications)?
      • How many Windows machines are making use of NFS services (both client and server)?
      • How many individual users and groups are involved on the Windows machines making use of NFS services?
      • Security
  • Access control – Which NFS authentication protocol is in use? For example, RPCSEC_GSS implies a centrally managed account store, and so an identity mapping store would be needed to map the same accounts. Using the same store removes the need for synchronization between the stores that would exist if an alternate mapping method were used.
        • Auditing – is an account identity required to monitor access?
      • NFS authentication method(s) used (e.g. AUTH_SYS etc)?
      • Organizational issues such as availability of the privileges needed to manage identities?
      • Network architecture and user environment? For example, are the connections between NFS clients and NFS server machines placed within a controlled environment (machine room, ipsec etc.)? Are NFS servers visible to machines on which users can run applications?

      To determine which solution is appropriate for a given situation requires the administrator to select from the available mechanisms according to the tradeoffs applicable to the expected environment. Typically, solutions should be considered in the following order:

      1. Active Directory
      2. Active Directory Lightweight Directory Services (AD LDS)
      3. Local passwd and group mapping files
      4. Unmapped UNIX User Access (UUUA)

       

      NFS Authentication Method

      Using AUTH_NONE as the authentication method has no security whatsoever and is equivalent to using anonymous access with AUTH_SYS.

Using AUTH_SYS as the authentication method places no particular restrictions on the mapping method. However, consideration should be given to the ease with which this method can be spoofed; as such it provides no real security.

If the environment requires that NFS be authenticated by RPCSEC_GSS then standard Windows accounts will be required. This excludes the use of Unmapped UNIX User Access. Although using RPCSEC_GSS directly provides the necessary rights to access files, a mapping solution is generally required since many NFS procedures identify users and groups via their UID and GID values even though access to those files is authenticated by RPCSEC_GSS. For example, when a Windows Server 2012 Server for NFS processes a READDIR request, the ability to read the directory is determined by the user identified through RPCSEC_GSS, but the ownership of the items in that directory is described by UID and GID values. Without a mapping solution, the server is unable to determine the proper UID and GID values and so will indicate the files are all owned by the configured anonymous user account, typically with UID and GID values of 0xfffffffe (or -2).

      Domain Joined Machines

      Generally the most convenient solution for domain joined machines is to use Active Directory as the mapping store. This is particularly the case if a large fraction of the domain joined machines and / or users will be making use of either or both of the NFS client and server. Using Active Directory helps ensure that there are none of the synchronization issues that occur if there are separate account stores and identity mapping stores.

A possible problem is that if NFS is used by only a small fraction of the accounts or machines, then in large organizations it may be organizationally difficult to manage the identities; for example, a single department may use NFS while the departmental level administrators do not have the domain level privileges required to modify the centrally managed user accounts.

      If using Active Directory for mapping information is problematic but domain based identities are still desired then alternative solutions are either Active Directory Lightweight Directory Services (AD LDS) or local mapping files.

      Using AD LDS has the advantage of a centrally managed mapping store which is particularly useful if there are many user and/or group accounts, or if the valid accounts change frequently. The accounts being mapped must be domain accounts. However, there needs to be a machine available which can host the AD LDS services. This can be a machine hosting the Windows NFS services.

Using local mapping files requires only machine local administrator level privileges rather than domain level privileges, and provides, for a single machine, all the functionality available through Active Directory. In addition, local mapping files also allow machine local accounts to be successfully mapped. However, consideration should be given to the number of machines to be managed.

      Both AD LDS and local mapping files suffer from the need to maintain synchronization between the primary account store (Active Directory) and the mapping store (AD LDS or local files). For example, if a new NFS user account is added or deleted, then a change will need to be made to the mapping store. The AD LDS mapping store only needs changes to be applied in the one location for all machines using that mapping store to see the updates. However, if local mapping files are in use, then a change will need to be made in all of the copies of the local mapping files that contain a mapping for that account. 

For machines with Server for NFS, if no domain or machine local identities are required, there will be no sharing of the files exported by NFS with any other application or file sharing protocol, and access is via the NFS AUTH_SYS authentication mechanism, then UUUA based access might be a good solution. This method has the advantage of minimal administration load and no co-ordination with any other machine; however, it has the potentially significant disadvantage of providing essentially no security.

      Non-Domain Joined Machines

The choices for non-domain joined machines are similar to those for domain joined machines, with the exception that Active Directory is no longer available.

Using Active Directory Lightweight Directory Services (AD LDS) provides a single centrally managed mapping store, which is particularly useful if there are many user and/or group accounts, or if the valid accounts change frequently. The accounts being mapped must be machine local accounts, and if care is taken over the naming of the accounts, the same mapping can be used by several machines. However, there needs to be a machine available which can host the AD LDS services, although this can be a machine hosting the Windows NFS services.

Using local mapping files requires only machine local administrator level privileges and provides, for a single machine, all the functionality available through AD LDS. As long as the account names do not have a domain prefix, machine local accounts are assumed, so the same passwd/group file pair can be used on each machine. Consideration should be given to the number of machines to be managed and the rate of change to the accounts being mapped, to determine whether the administrative costs are acceptable.

      Both AD LDS and local mapping files suffer from the need to maintain synchronization between the primary account store (machine local accounts) and the mapping store (AD LDS or local files). For example, if a new NFS user account is added or deleted, then a change will need to be made to the mapping store. The AD LDS mapping store only needs changes to be applied in the one location for all machines using that mapping store to see the updates. However, if local mapping files are in use, then a change will need to be made in all of the copies of the local mapping files that might be used by that account. Alternatively with local mapping files each machine can have individual passwd and group files with accounts specific to that machine; however this is likely to present administrative problems in terms of ensuring the appropriate uniqueness amongst the UID and GID values being used.

For machines configured with Server for NFS, if there is no sharing of the files exported by Server for NFS with any other application or file sharing protocol, and access is via the NFS AUTH_SYS authentication mechanism, then UUUA based access might be a good solution. This method has the advantage of minimal administration load and no requirement for co-ordination with any other machine; however, as with all AUTH_SYS based mechanisms, it has the potentially significant disadvantage of providing essentially no security.

      Troubleshooting

      To list all the NFS PowerShell cmdlets

      To locate all the NFS related PowerShell commands, start a PowerShell session and use the command

      Get-Help *Nfs*

       

       

      The alias “help” can be used in place of “Get-Help”.

      Get-help can then be used on individual items to get additional details on that item.

      • Get-NfsMappingStore will return the currently configured mapping solution for the machine.
      • Test-NfsMappingStore will test the mapping store to confirm that the machine can access the mapping store
      • Get-NfsMappedIdentity is used to retrieve one or more mapped identity records from the configured mapping store.
      • Test-NfsMappedIdentity is used to verify the configured mapping store can be reached from the machine on which the query is run and that the queried mapping is present in that store.
      • Resolve-NfsMappedIdentity is used to determine the mapping being used by Server for NFS. If the mapping is cached then the cached values are used, otherwise Server for NFS will make a request to the configured mapping store to retrieve the mapping.

      To Verify That a Particular Identity Mapping is Active

Although an identity mapping can be set in an identity mapping store, there is no guarantee that machines with either Client for NFS and/or Server for NFS can make queries of that store. To determine whether the store is accessible from the machine of interest, log on to the machine in question and use the PowerShell cmdlet “Test-NfsMappedIdentity”. The cmdlet will make a request to the store for the mapping information needed to satisfy the request. For example, to test the account mapping for the UID value 0

      Test-NfsMappedIdentity -UserIdentifier 0

       

      Or to test the mapping for the group “specgroup”

      Test-NfsMappedIdentity -AccountName specgroup -AccountType Group

       

      There will only be output from the command if the test operation fails.

      Using the “Test-NfsMappedIdentity” cmdlet will also verify that the mapping information for the account in question does not use any improper duplicate values. That is, the UID value for a user account is unique and the GID value for a group account is unique.

The Server for NFS also keeps a cache of recently used identity mappings. To determine the mapping currently being used by, or failing that, available to, Server for NFS, the Resolve-NfsMappedIdentity cmdlet can be used. This causes Server for NFS to search the locally cached mapping information or, if there is no local value, to query the configured mapping store for the mapping. In both cases the currently active mapping as known to Server for NFS is returned.

      For example, to query for the account mapped to the UID 500

       

      Resolve-NfsMappedIdentity -Id 500 -AccountType User

       

      Or to query for the UID mapped to the user account “root”

      Resolve-NfsMappedIdentity -AccountName root -AccountType User

        

      To Verify The Windows NFS Client or Server Is Using Local File Based Mapping.

The NFS services write messages to the ServicesForNfs-Server\IdentityMapping channel to indicate whether or not the local files have been found and whether the format is correct. These messages can be examined using the “Event Viewer” utility. If both group and passwd files have been found and are being used, there are two messages, one for each file

      For the group file


      and for the passwd file.

      If there are any issues with either file an appropriate message will indicate which file contains the problem.

      Correcting Identity Problems on Files and Directories Using The nfsfile.exe Utility

      The Services for NFS Administration Tools feature contains a command line utility, nfsfile.exe, which can be used to correct a number of NFS related identity and access permission related issues for both files and directories. It can also be used to convert files between the UUUA style mapping and Windows style mappings.

      For example, to set all the directories and files stored at v:\Shares to be owned by the user account “root” and group account “rootgroup” with UNIX style permissions 755 (rwxr-xr-x) use the command

      nfsfile /v /rwu=root /rwg=rootgroup /rm=755 v:\Shares\*

       

      or if all the files under an export were originally created using UUUA mapping, but there is now a domain based mapping solution available, all the file mappings can be converted using the command

       

      nfsfile /v /s /cw v:\Shares\share-v

       

      which converts the export and all the files and directories to a Windows style mapping based on standard Windows accounts.

See the TechNet article at http://technet.microsoft.com/en-us/library/hh509022(v=WS.10).aspx, and in particular the section titled “Using Nfsfile.exe to Manage User and Group Access”.

Note that nfsfile.exe currently cannot obtain mapping information from local file based mappings. This means it cannot do the automatic identity conversion between Windows style mapped files and UUUA style mapped files, where the utility obtains the mapping information appropriate to the files being processed. Instead, the account information must be supplied via the “/r” option, whether that is a UID/GID pair or a Windows user and group account, on a file by file or single directory sub-tree basis. That is, all the files in a single directory sub-tree can be converted to a single identity in one command, but different users will require multiple commands.

Note also that the utility can be used to manipulate non-NFS related file permissions. This is not recommended, as there are several features of Windows file security and access control that the utility is not designed to process. Instead, the standard Windows file system permission management tools and utilities should be used (e.g. the “icacls.exe” utility).

      Feedback

      Please send feedback you might have to nfsfeed@microsoft.com

       



      [1] See [MS-FSSO] Section 8, Appendix A: Product Behavior, Note 7 (http://msdn.microsoft.com/en-us/library/ee380665(v=prot.10)).

      Server for Network File System First Share End-to-End


      Introduction

Server for Network File System (NFS) provides a file-sharing solution for enterprises that have a mixed Windows and UNIX environment. Server for NFS enables users to share and migrate files between computers running the Windows Server 2012 operating system using the SMB protocol and UNIX-based computers using the NFS protocol.

Today we will go through the process of provisioning a Server for NFS share on Windows Server 2012. Note that provisioning on a Cluster Shared Volume (CSV) or on ReFS is not supported in this release; this walkthrough is based on NTFS volumes. The scenario we describe involves:

• Installing the Server for NFS role on the target Windows Server 2012 machine.
• Provisioning a pre-existing directory c:\share on an NTFS volume with the export name “share”.

We will cover this process step by step in two different ways, namely PowerShell cmdlets and the Server Manager UI. The following sections introduce them one by one.

      PowerShell cmdlet Setup

Server for NFS is a server role available in the Windows Server 2012 operating system.

      Step 1: Install the Server for NFS role

From a PowerShell prompt, run the following command to make this server also act as an NFS server:

      Add-WindowsFeature FS-NFS-Service

      Step 2: Provision a directory for NFS Sharing

The authentication method, user identity mapping method, and permissions of a Server for NFS share need to be configured when provisioning a directory for NFS sharing. The following PowerShell command provisions a new share with "auth_sys" authentication, unmapped access, and read-only permissions:

New-NfsShare -Name share -Path c:\share -Authentication sys -EnableUnmappedAccess $True -Permission readonly
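To confirm the result, the share can then be queried with the companion cmdlet (a quick check):

Get-NfsShare -Name share | Format-List *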


      The concepts and settings of user mapping as well as authentication methods are covered in blog post "NFS Identity Mapping in Windows Server 2012" at http://blogs.technet.com/b/filecab/archive/2012/10/09/nfs-identity-mapping-in-windows-server-2012.aspx.

The concepts and settings of Kerberos authentication are covered in detail in the blog post “How to: NFS Kerberos Configuration with Linux Client” at http://blogs.technet.com/b/filecab/archive/2012/10/09/how-to-nfs-kerberos-configuration-with-linux-client.aspx.

The concepts and settings of permissions are covered in the blog post “How to Perform Client Fencing” at http://blogs.technet.com/b/filecab/archive/2012/10/09/how-to-perform-client-fencing.aspx.

      UI based Setup

      Step 1: Install the Services for NFS role

In Server Manager, choose Add Roles and Features from the Manage menu (Figure 1).


      Figure 1

      Figure 2

This action pops up the Add Roles and Features Wizard (Figure 2). Press the Next button to continue.


      Figure 3

Select the Role-based or feature-based installation radio button, then click the Next button to move to the next page.

      Figure 4

Next, we select the destination server where we plan to deploy the NFS server (Figure 4). Select the Select a server from the server pool radio button, and choose the destination server. In our example we choose the server “nfsserver” as the destination server. Click the Next button to continue.

      Figure 5

In this step, we select the Server for NFS check box in the Roles tree view under File And Storage Services -> File Services (Figure 5).

       

      Figure 6

A confirmation pop-up window will appear (Figure 6). Keep its default settings and click the Add Features button.


      Figure 7

After that, we return to the Select server roles step (Figure 5). Press the Next button to switch to the Select features page (Figure 7). On this page, we skip all feature settings and press the Next button.


      Figure 8

This is the last page of setting up the NFS server role. Click the Install button to perform the NFS server role setup. This process may take a while; you can close the setup page and the process will continue to run in the background.

      Step 2: Provision a directory for Server for NFS Share

       


      Figure 9

Go back to the dashboard of Server Manager, and choose File and Storage Services (Figure 9).


      Figure 10

On this page, select the server from Servers, and click Shares (Figure 10).


      Figure 11

On this page, click the link “To create a file share, start the New Share Wizard” to start the New Share Wizard (Figure 11).


      Figure 12

      After the New Share Wizard pops up, select “NFS Share - Quick” and click the Next button.


      Figure 13

On this page, we choose the target folder we plan to share (Figure 13). In our example, we select the path c:\share. Click Next to go to the next page.


      Figure 14

Given a name for the NFS share, the wizard generates the remote path of the share (Figure 14). In our case, the share name is “share”, and the remote (exported) path is “nfsserver:/share”. Click the Next button to continue.


      Figure 15

Now we reach the authentication page (Figure 15). Choose “No server authentication” for the "auth_sys" authentication method, and allow unmapped user access by selecting “Enable unmapped user access” and “Allow unmapped user access by UID/GID”.

      The concepts and settings of unmapped access and authentication methods are covered in blog post "NFS Identity Mapping in Windows Server 2012" at http://blogs.technet.com/b/filecab/archive/2012/10/09/nfs-identity-mapping-in-windows-server-2012.aspx.

The concepts and settings of the Kerberos authentication method are covered in detail in the blog post “How to: NFS Kerberos Configuration with Linux Client” at http://blogs.technet.com/b/filecab/archive/2012/10/09/how-to-nfs-kerberos-configuration-with-linux-client.aspx.

      Click Next to move on to the next page.

      Figure 16

Add a share permission by first clicking the Add button (Figure 16).

       

      Figure 17

We assign read permission to all machines by choosing “Read Only” from the share permissions and clicking the Add button to add this permission (Figure 17). Then click the Next button twice to reach the confirmation page (Figure 18). The concepts and settings of permissions are covered in the blog post “How to Perform Client Fencing” at http://blogs.technet.com/b/filecab/archive/2012/10/09/how-to-perform-client-fencing.aspx.

      Figure 18

Click the Create button and the wizard completes the share creation. After completion, close the wizard.

      Feedback

      Please send feedback you might have to nfsfeed@microsoft.com

      How to: NFS Kerberos Configuration with Linux Client


In this tutorial, we will provision the NFS server provided by the “Server for NFS” role in Windows Server 2012 for use with a Linux-based client, using Kerberos security with RPCSEC_GSS.

      Background

Traditionally, NFS clients and servers use AUTH_SYS security. This essentially allows the clients to send authentication information by specifying the UID/GID of the UNIX user in each NFS request. This method of authentication provides minimal security, as the client can spoof a request by specifying the UID/GID of a different user. It is also vulnerable to tampering with the NFS request by some third party between the client and server on the network.

      RPCSEC_GSS provides a generic mechanism to use multiple security mechanisms with ONCRPC on which NFS requests are built (GSS mechanism is described in RFC 2203). It introduces three levels of security service: None (authentication at the RPC level), Integrity (protects the NFS payload from tampering), and Privacy (encrypts the entire NFS payload which protects the whole content from eavesdropping).

The Server for NFS server role (found within the “File And Storage Services” server role under the path “File And Storage Services/File and iSCSI Services/Server for NFS”) provides the NFS server functionality that ships with Windows Server 2012. Server for NFS supports RPCSEC_GSS with Kerberos authentication, including all three levels of RPCSEC_GSS security service: krb5 (for RPCSEC_GSS None), krb5i (for RPCSEC_GSS Integrity), and krb5p (for RPCSEC_GSS Privacy).

      Explaining how to set up Kerberos security between a Linux client and a Windows server running Server for NFS can best be accomplished by way of a simple example. In this tutorial we'll consider the following infrastructure scenario:

• Windows domain called CONTOSO.COM running Active Directory on a domain controller (DC) named contoso-dc.contoso.com.
• Windows server running Server for NFS with host name: windowsnfsserver.contoso.com
• Linux client machine running Fedora 16 with host name: linuxclient.contoso.com
• Linux user on the Fedora 16 client machine: linuxuser
• Windows user that the Linux user on the Fedora 16 client machine maps to: CONTOSO\linuxclientuser-nfs
• Kerberos encryption: AES256-CTS-HMAC-SHA1-96

      For the purpose of configuration, we assume that the Linux client is running Fedora 16 with kernel version 3.3.1. Windows server is running Windows Server 2012 with server for NFS role installed. DC is running Windows Server 2012 with DNS Manager, Active Directory Administrative Center and “setspn” command line tool installed.

      Configuration Steps

In this section, we will go through three steps to enable NFS with Kerberos authentication:

      1. Basics
      2. Set up Linux machine with Kerberos authentication.
      3. Provision NFS share on Windows Server 2012 with Kerberos authentication.

In step 1, we are going to check DNS and make sure that both the NFS and RPCGSS components are installed on the Linux machine. In step 2, we are going to set up the Linux machine to join the Windows domain. After that, we will configure a service principal name (SPN) for Kerberos and distribute the SPN-generated key to the Linux machine for authentication.

      Step 1: Basics

First, make sure that DNS name resolution is working properly between the DC, the Windows NFS server, and the Linux client. One caveat for the Linux client is that the hostname should be set to its fully qualified domain name (FQDN) in the Windows domain. Run “hostname” on the Linux machine and check whether the host name is correct. (In command boxes, bold text is the command we type in; results are shown in normal style without bold.):

[root@linuxclient]# hostname

      linuxclient.contoso.com

Details of setting the hostname for a Fedora 16 machine can be found in the Fedora 16 documentation at: http://docs.fedoraproject.org/en-US/Fedora/16/html/System_Administrators_Guide/ch-The_sysconfig_Directory.html#s2-sysconfig-network.

Also make sure that the NFS and RPCGSS modules are successfully installed and started on the Linux machine. The following example shows how to use the “yum” package manager to install NFS on the Fedora 16 client machine:

      [root@linuxclient]# yum install nfs-utils

and load the Kerberos 5 module by running:

      [root@linuxclient]# modprobe rpcsec_gss_krb5

and start the rpcgss service by running:

      [root@linuxclient]# rpc.gssd start

       

       

      Step 2: Set up Linux machine with Kerberos authentication

      Step 2.1: Add Linux machine to DNS in DC

In this step, we need to log on to the DC and add an entry in DNS Manager as follows:


      Figure 1

The IP address of the Linux client can be found by running the “ifconfig” command in a Linux terminal. In our case, we stick to the IPv4 address; the IP address of our Linux client machine is “10.123.180.146”.
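If you prefer the command line over DNS Manager, the equivalent forward and reverse records can be added with the dnscmd tool on the DC. The zone names and address below match our example and should be adjusted to your environment; the PTR record assumes the reverse lookup zone already exists:

dnscmd contoso-dc /RecordAdd contoso.com linuxclient A 10.123.180.146

dnscmd contoso-dc /RecordAdd 180.123.10.in-addr.arpa 146 PTR linuxclient.contoso.com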

Reverse DNS mapping can be verified with the command “dig -x 10.123.180.146” from the Linux side, where “10.123.180.146” should be replaced with the actual IP address of your Linux machine. DNS settings may need time to propagate among DNS servers; wait a while until the dig command returns the right answer.

      Step 2.2: Join Linux machine to the domain

Now we're going to configure the Linux client to get Kerberos tickets from the Windows domain it is going to join (in our case “CONTOSO.COM”). This is done by editing the “/etc/krb5.conf” file. There should be an existing file with some placeholders which can be edited. We're going to add two lines under “[libdefaults]” for “default_realm” and “default_tkt_enctypes”. We're also going to add a realm in “[realms]”, filling in the “kdc” and “admin_server” fields. Moreover, we are going to add two lines in the “[domain_realm]” section.

The end result should look something like this (text we added is marked in italic):

       

      [libdefaults]

      default_realm = CONTOSO.COM

      dns_lookup_realm = false

      dns_lookup_kdc = false

      ticket_lifetime = 24h

      renew_lifetime = 7d

      forwardable = true

      default_tkt_enctypes = aes256-cts-hmac-sha1-96

       

      [realms]

      CONTOSO.COM = {

 kdc = contoso-dc.contoso.com

       admin_server = contoso-dc.contoso.com

      }

       

      [domain_realm]

      .contoso.com = CONTOSO.COM

      contoso.com = CONTOSO.COM

       

       

      Step 2.3: Configure Kerberos service principal name

I'll explain a bit about how authentication works from the NFS standpoint. When a Linux client wants to authenticate with a Windows NFS server by Kerberos, it needs some other "user" (called a "service principal name", or SPN, in Kerberos) to authenticate with. In other words, when an NFS share is mounted, the Linux client tries to authenticate itself with a particular SPN structured as “nfs/FQDN@domain_realm”, where “FQDN” is the fully qualified domain name of the NFS server and “domain_realm” is the domain that both the Linux client and the Windows NFS server have already joined.

In our case, the Linux client is going to look for “nfs/windowsnfsserver.contoso.com@CONTOSO.COM”. We're going to create this SPN and link it to the existing “machine” account of our NFS server as an alias for that machine account. We run the “setspn” command from a command prompt on the DC to create the SPN:

setspn -A nfs/windowsnfsserver windowsnfsserver

setspn -A nfs/windowsnfsserver.contoso.com windowsnfsserver

       

You can refer to the following article to learn more about SPNs and the “setspn” command.

      http://msdn.microsoft.com/en-us/library/aa480609.aspx

The user on the Linux client will use the same style (i.e. nfs/FQDN@domain_realm, where “FQDN” is the FQDN of the Linux client itself) as its own principal to authenticate with the DC. In our case, the principal for the Linux client user is “nfs/linuxclient.contoso.com@CONTOSO.COM”. We're going to create a user in AD representing this principal, but “/” is not a valid character for AD account names, so we cannot directly create an account which looks like “nfs/FQDN”. What we are going to do is pick a different account name and link it to that principal. On the DC, we create a new user account in Active Directory Administrative Center (Figure 2) and set up a link between this account and the Kerberos SPN through the “setspn” tool, as we did for the NFS server SPN.


      Figure 2

In our case, both the first name and the full name are set to “linuxclientuser-nfs”. The user UPN logon is “nfs/linuxclient.contoso.com@CONTOSO.COM”. The user SamAccountName is set to contoso\linuxclientuser-nfs. Be sure to choose the correct encryption options, namely “Kerberos AES 256 bit encryption” and “Do not require Kerberos pre-authentication”, to make sure AES encryption works for GSS Kerberos. (Figure 3)


      Figure 3

Now, we're going to set the SPNs on this account by running the following commands in the DC’s command prompt:

setspn -A nfs/linuxclient linuxclientuser-nfs

setspn -A nfs/linuxclient.contoso.com linuxclientuser-nfs
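Optionally, you can verify from the DC that both SPNs landed on the account. Assuming the Active Directory PowerShell module is available on the DC, a quick check looks like this:

# List the SPNs registered on the account we just created
Get-ADUser -Identity linuxclientuser-nfs -Properties ServicePrincipalName | Select-Object -ExpandProperty ServicePrincipalName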

       

The Fedora 16 Linux client needs to use this SPN without actually typing in a password for the account when performing the mount operation. This is accomplished with a "keytab" file.

We're going to export a keytab file for this account. On the DC, run the following command from a command prompt:

ktpass -princ nfs/linuxclient.contoso.com@CONTOSO.COM -mapuser linuxclientuser-nfs -pass [ITSPASSWORD] -crypto All -out nfs.keytab

Replace “[ITSPASSWORD]” with a real password of your choosing. Then copy nfs.keytab to the Linux client machine. On the Linux client machine we're going to merge this file into the system keytab. From the directory where the file was copied, we run "ktutil" to merge keytabs. In this interactive tool run the following commands:

      [root@linuxclient]# ktutil

      rkt nfs.keytab

      wkt /etc/krb5.keytab

      q

Great, now the Linux client should be able to get tickets for this account without typing any passwords. Test this out:

kinit -k nfs/linuxclient.contoso.com

       

Note that the Linux client will try three different SPNs (namely host/linuxclient, root/linuxclient, and nfs/linuxclient) to connect to the NFS server. Fedora 16 will go through the keytab file we generated on the DC and try those SPNs one by one until the first valid SPN is found, so it is enough for us to configure just the “nfs/linuxclient” principal. As a backup plan, you may try to configure the other SPNs if “nfs/linuxclient” does not work.

      Step 3: Provision NFS share on Windows Server 2012 with Kerberos authentication and Test NFS Kerberos v5 from Linux

Now we can create a Windows share with Kerberos v5 authentication and mount that share from the Linux client. We can do this by running the following PowerShell command:

New-NfsShare -Name share -Path C:\share -Authentication krb5,krb5i,krb5p -EnableAnonymousAccess 0 -EnableUnmappedAccess 0 -Permission readwrite

       

More details about how to set up an NFS share can be found in the blog post “Server for Network File System First Share End-to-End” at http://blogs.technet.com/b/filecab/archive/2012/10/08/server-for-network-file-system-first-share-end-to-end.aspx.
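Before moving to the Linux client, it is worth reading the share settings back to confirm that only the Kerberos authentication flavors are enabled. A quick check (Name, Path, and Authentication are among the properties Get-NfsShare reports):

Get-NfsShare -Name share | Format-List Name, Path, Authentication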

Now we are going to mount that share from the Linux machine using the NFS v4.1 protocol. On the Linux client run:

[root@linuxclient]# mount -o sec=krb5,vers=4,minorversion=1 windowsnfsserver:/share /mnt/share

       

With the “sec” option, we can choose a different quality of protection (QOP) from “krb5”, “krb5i”, and “krb5p”. With the “vers” option, we can mount the share through the NFS v2/v3 protocol by replacing “vers=4,minorversion=1” with “vers=3” for NFSv3 or “vers=2” for NFSv2. In our case, “/mnt/share” is the mount point we chose for the NFS share. You may modify it to meet your needs.

After that, a normal Linux user can access the mount point by acquiring a Kerberos ticket for that user. In our case, we run kinit as the linuxuser user on the Linux machine:

      [linuxuser@linuxclient]# kinit nfs/linuxclient.contoso.com

       

Note that we do not need the keytab to access the mounted directory, so we do not specify the “-k” option for kinit. The Linux user running “kinit” must have permission to read the keytab file “krb5.keytab” under “/etc”. All actions performed by linuxuser will then be treated as the domain user linuxclientuser-nfs on the Windows NFS server.

      Notes

      RPCGSS Kerberos with privacy

RPCGSS Kerberos with privacy does not work with the current release of Fedora 16 because of a bug reported here:

      https://bugzilla.redhat.com/show_bug.cgi?id=796992

You can refer to it to find the patch in the Fedora patch database once the bug is fixed.

      NFS Kerberos with DES Encryption

A Windows domain uses AES by default. If you choose to use DES encryption, you need to configure the whole domain with DES enabled. Here are two articles describing how to do that:

      http://support.microsoft.com/kb/977321

      http://support.microsoft.com/kb/961302/en-us

The Windows machine must also set the local security policy to allow all supported Kerberos security mechanisms. Here is an article describing how to configure Windows for Kerberos supported encryption types, as well as what encryption types are available for Kerberos:

      http://blogs.msdn.com/b/openspecification/archive/2011/05/31/windows-configurations-for-kerberos-supported-encryption-type.aspx

After enabling DES on the domain, machines, and accounts, the passwords on those accounts must be reset to generate DES keys. After that, we can follow the same configuration steps in the previous section to mount the NFS share with Kerberos, with one exception: we need to add one additional line to the “[libdefaults]” section of “/etc/krb5.conf” to enable weak crypto such as DES:

allow_weak_crypto = true

      Troubleshooting

      DNS look up failure

The DNS server needs time to propagate Linux client host names, especially in a complicated subnet with multiple layers of domains. We can work around this by specifying the DNS lookup server priority on the Linux client in “/etc/resolv.conf”:

      # Generated by NetworkManager

      domain contoso.com

      search contoso.com

nameserver <your preferred DNS server IP>

      Kerberos does not work properly

The Linux kernel's implementation of rpcsec_gss depends on the user-space daemon rpc.gssd to establish security contexts. If Linux fails to establish a GSS context, this daemon is the first place to troubleshoot.

First, make sure that rpc.gssd is running. Run “rpc.gssd -f -vvv”:

[root@linuxclient]# rpc.gssd -f -vvv

      beginning poll

       

Ideally, the terminal will block, polling for GSS requests. If it exits right after running that command, reboot the Linux client. rpc.gssd itself is also a good source for debugging Kerberos context establishment: it prints out each Kerberos authentication step and its result.

      NFS Access Denial

The most common error when mounting an NFS share from Linux is access denied. Unfortunately, the Linux terminal does not provide additional clues about what caused the failure. Wireshark is a nice tool to decode NFS packets; we can use it to find the error code in the server's reply message for each compound.

      Feedback

      Please send feedback you might have to nfsfeed@microsoft.com


      How to Perform Client Fencing?


In this article, we will talk about how the Server for NFS role in Windows Server 2012 evaluates client fencing, from targeting client permissions to calculating the combined permission result. We will also cover the caching strategy we implemented and the lookup mechanism for netgroups. Lookup data can be located in a local cache, Network Information Service (NIS), a Lightweight Directory Access Protocol (LDAP) store, Active Directory Domain Services (AD DS), Active Directory Lightweight Directory Services (AD LDS), or any third-party LDAP store.

      Client Fencing Setup

A Server for NFS share is a share that is exported by Windows Server 2012. It contains information like the share name and share path. Here is a sample of an NFS share named “share”, retrieved by running the PowerShell cmdlet Get-NfsShare:

      Get-NfsShare share

The result may look like the following (Figure 1):

      Figure 1

      Client fencing is a mechanism which authorizes access rights of a machine (we call it a host or remote host) or a group of machines for this share. Server for NFS supports two types of groups:

• clientgroup: A set of client machines configured as a group locally on the NFS server; they all have similar permissions.
• netgroup: A set of client machines configured as a group on NIS or LDAP, which makes the configuration accessible across servers and platforms within the same subnet. A netgroup is identified by its fully qualified domain name (FQDN).

Client fencing is evaluated by calculating the set of client permissions for this particular share. A particular client permission defines the read/write access rights of a client (a host or a group of hosts). Moreover, it contains information about whether users of the client can act as the Linux root user, i.e. be assigned administrator privilege (controlled by the AllowRootAccess field), and what the language encoding of this client is. The PowerShell cmdlet Get-NfsSharePermission shows the share permission settings of a share:

      Get-NfsSharePermission share

The result may look like the following (Figure 2):

      Figure 2

      In the example above, share named “share” is configured with

      • an individual host permission (machine with IP 10.123.180.162) to allow read/write permission and root access,
      • a clientgroup permission (for client group “ClientGroup”) to allow read/write permission,
      • a netgroup permission (for netgroup “Group1”) to allow only read permission,
• and denies all other machines access by configuring the “All Machines” permission to deny access.

Windows NFS server provides several tools to configure client fencing. In this document, we will demonstrate three of them: the NFS management UI, PowerShell cmdlets, and the NFS command-line tools. This is best shown by way of a simple example. In the following section, we will configure the “read” permission of a share in the following infrastructure scenario:

      • Client (FQDN: nfsclient.contoso.com)
• Windows Server 2012 running Server for NFS (FQDN: nfsserver.contoso.com)
      • Server for NFS share (export path: “NFSSERVER:/share”)

      NFS management UI

From Server Manager on the NFS server, choose the share “share” from the list of shares in the “File and Storage Services” tab, then right-click it and select Properties to open the share's properties. Then choose Share Permissions and click the Add button. A pop-up window will show up as in Figure 3 below:

      Figure 3

We select the Host radio button, type in the host name (or IP address) of the client, and then select the Read Only share permission. Click the Add button to add the permission for this share. Note that configuring a netgroup or clientgroup instead of a host follows the same UI; just select the “Netgroup” or “Client group” radio button instead of Host.

      PowerShell cmdlet

      Server for NFS implemented a collection of PowerShell cmdlets to manage Server for NFS shares.

      • Grant-NfsSharePermission: Grant a particular permission to a Server for NFS share
      • Get-NfsSharePermission: Retrieve the permissions that have been configured for a given Server for NFS share
      • Revoke-NfsSharePermission: Revoke existing permissions that have been granted to a Server for NFS share

The Server for NFS PowerShell cmdlets are loaded when the Server for NFS role is enabled. Type the following command to grant read permission to the client:

Grant-NfsSharePermission -Name share -Permission readonly -ClientName nfsclient.contoso.com -ClientType Host
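The same cmdlet also covers group permissions. As a hedged example, granting read-only access to the netgroup “Group1” (substitute your own netgroup name) and reading the permissions back might look like this:

Grant-NfsSharePermission -Name share -Permission readonly -ClientName Group1 -ClientType netgroup

Get-NfsSharePermission -Name share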

      NFS command line tools

The NFS command-line tools use the existing WMIv2 provider that ships with Windows Server 2012. We can use the nfsshare.exe executable to manage Server for NFS shares. To grant share permission, run the following command from a command prompt:

nfsshare share -o ro=nfsclient.contoso.com

More options are available to list current share permissions as well as revoke previously assigned permissions. Details of this tool can be found at http://msdn.microsoft.com/en-us/library/cc753529.

      Client Fencing Evaluation

      When a host sends a request to a Server for NFS share, Windows Server 2012 will evaluate the client permissions of this host on this share, following steps shown in Figure 4.

      Figure 4

The server will first check whether the individual host permissions of this share include the incoming host. This task parses the Server for NFS share’s individual host permission list and looks for a match. If it succeeds, the matching individual host permission is returned. Otherwise, the server checks the clientgroup permissions configured for this share. Given the IP address of the host, the server looks through the local client group list and finds any client group this host belongs to. Permissions from all matching clientgroups are combined and then returned. Note that only IPv4 addresses are supported as input for clientgroup lookup; IPv6 addresses are treated as non-matching. If neither an individual host permission nor a clientgroup finds a match, we give it another try by searching all netgroups this host belongs to and comparing them with the netgroup list of this share. For performance reasons, we cache netgroup-per-host data to avoid frequently downloading netgroup data from NIS or LDAP; I will talk about our caching strategy in a later section. Given the list of netgroups this host belongs to, we check whether any of them is configured on this share and return the permission of the first matching netgroup. If none of the previous steps finds an eligible permission setting, we return the global permission of this NFS share as the permission of this host.
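To make the order concrete, here is an illustrative PowerShell sketch of the evaluation flow. This is not the actual Server for NFS implementation: the permission object shape (ClientName, Members, Permission properties) and the most-permissive clientgroup merge are assumptions for illustration only.

function Resolve-ClientPermission {
    param($HostIp, $HostPerms, $ClientGroupPerms, $NetgroupsForHost, $NetgroupPerms, $GlobalPerm)

    # 1. An individual host permission wins outright.
    $match = $HostPerms | Where-Object { $_.ClientName -eq $HostIp }
    if ($match) { return $match }

    # 2. Combine permissions from every clientgroup containing the host (IPv4 only).
    $hits = $ClientGroupPerms | Where-Object { $_.Members -contains $HostIp }
    if ($hits) {
        # Assumed most-permissive merge; the real combining rule is internal to the server.
        if (($hits | ForEach-Object { $_.Permission }) -contains 'readwrite') { return 'readwrite' }
        return 'readonly'
    }

    # 3. Return the first netgroup (from the cached netgroup-per-host list) configured on this share.
    foreach ($ng in $NetgroupsForHost) {
        $match = $NetgroupPerms | Where-Object { $_.ClientName -eq $ng }
        if ($match) { return $match }
    }

    # 4. Fall back to the share's global ("All Machines") permission.
    return $GlobalPerm
}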

      Caching Strategy

In the Server for NFS implementation, we cache netgroups per host for the purpose of looking up netgroup permissions for a particular remote host machine. The cache is a first-in-first-out (FIFO) queue of remote hosts, and each host entry maintains a list of the netgroups that host belongs to.

When querying the netgroup permission for a given host, we first look through the netgroup-per-host cache. On a cache miss or cache expiration, the cache is updated by querying and downloading data from NIS or LDAP. We keep the netgroup-per-host cache up to date by setting a creation time stamp and an expiration time for it. This expiration time can be configured through a PowerShell cmdlet. Here is an example that configures the cache timeout to be 30 seconds:

       

Set-NfsServerConfiguration -NetgroupCacheTimeoutSec 30
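To confirm the new value took effect, the configuration can be read back; this assumes the setting is surfaced as the NetgroupCacheTimeoutSec property on the configuration object:

Get-NfsServerConfiguration | Select-Object NetgroupCacheTimeoutSec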

      Format Issue for Netgroup Lookup

Before querying for netgroups, the IP address of the host we are looking for is translated into an FQDN via reverse DNS lookup. If no FQDN is found, the lookup mechanism looks for the corresponding entry with the KEY "*.*" for the wildcard group. We then query NIS or LDAP for netgroups given the FQDN. Depending on the NIS configuration (the domain field in /etc/netgroup), the format of the netgroup-by-host data might be one of the following:

       

      server.contoso                                 domain1 domain2

      server.contoso.*                               domain1 domain2

We try both to see whether either matches.

      Feedback

      Please send feedback you might have to nfsfeed@microsoft.com

      Automatic RMS protection of non-MS Office files using FCI and Rights Protected Folder Explorer


       FCI

File Classification Infrastructure (FCI) provides insight into your data by automating classification processes so that you can manage your data more effectively. The built-in solution for file classification provides expiration, custom tasks, and reporting. The extensible infrastructure enables Microsoft partners to construct rich end-to-end classification solutions built on Windows. For more information on FCI please check the FCI blog post.

      Rights Protected Folder Explorer

Rights Protected Folder Explorer (RPFe) is a Windows-based application that allows you to protect files and folders. A Rights Protected Folder is similar to a file folder in that it contains files and folders. However, a Rights Protected Folder controls access to the files that it contains, no matter where the Rights Protected Folder is located. By using Rights Protected Folder Explorer, you can securely store or send files to authorized users and control which users will be able to access those files while they are in the Rights Protected Folder. For more information please visit the RPFe blog post.

      Protecting highly sensitive data using FCI and RPFe

Today, FCI enables administrators to automatically RMS-protect sensitive information on file servers. We had several requests to enable FCI to RMS-protect additional file types, and we partnered with the RPFe team to provide a solution that enables that scenario.

Using FCI and RPFe, IT admins can Rights Management Services (RMS) protect any file on a file server. Once the files are protected, only authorized users will be able to access those files, even if they are copied to another location. To protect non-Microsoft Office file formats, an FCI File Management Job (FMJ) with a custom action and RPFe can be used. We will now explore how to accomplish the task of protecting sensitive files other than Microsoft Office files. RPFe has a command-line utility that can protect files. An FMJ custom action can be used to invoke the RPFe command-line utility under a desired namespace/share where the admin wants to protect files automatically.

      RPFe Usage:

• Install the RPFe tool on the file server by following the guidelines from here
• Get the RMS template GUID to be used on the command line: go to %LOCALAPPDATA%\Microsoft\MSIPC\Templates on the file server, open the XML file for the template of interest, and get the GUID associated with OBJECT ID (see the helper sketch after this list)
• Command-line usage to protect a file:

RPFExplorer.exe /Create /Rpf:"G:\Share\CustomerInfo.txt.rpf" /TemplateId:{00a956d6-d14c-4a2c-bf86-c1e70b731e7b} /File:"G:\Share\CustomerInfo.txt"

• Parameter explanation:
  • /File is the file that needs to be protected.
  • /Rpf is the new RMS-protected file that will be created.
  • /TemplateID is the RMS template GUID gathered from step 2 above.
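If you would rather not read the XML by hand, a small helper can surface the relevant lines. This sketch simply searches the template files for OBJECT elements, per the step above, so you can read off the GUID:

Select-String -Path "$env:LOCALAPPDATA\Microsoft\MSIPC\Templates\*.xml" -Pattern "OBJECT"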

       

      RPFe Protection

The original file stays as it is; no change is made to it. A new RMS-protected RPFe container is created under the same parent directory, containing a copy of the original file.

      FCI Integration with RPFe

To automate file protection using RPFe and FCI, please follow the steps mentioned below. The FMJ custom action calls a PowerShell script for each file that meets the FMJ condition. The PowerShell script calls the RPFe command-line utility to protect the files.

       

Create a File Management Job with a custom action on the desired share with the following configuration:

• Install the RPFe tool from here
• Copy the script in this blog post to a new file called %SystemRoot%\System32\FCIRPFeFileProtection.ps1
• For the executable path, set the parameter to “%SystemRoot%\System32\WindowsPowerShell\v1.0\powershell.exe”
• For the arguments, set them to: -File “%SystemRoot%\System32\FCIRPFeFileProtection.ps1” TemplateID [Source File Path] [Source File Owner Email] [Admin Email]
• Configure the file extensions for files that need to be protected in the condition tab of the FMJ creation wizard. It is recommended to restrict the FMJ to specific file extensions only
• Additional filters can be added to filter files based on classification properties in the FMJ
• Mark the File Management Job as continuous in the schedule tab of the FMJ creation wizard

       

      File Protection Script

• FCIRPFeFileProtection.ps1 is a simple PowerShell script that takes the Template ID, source file path, file owner email, and admin email as parameters from the File Management Job and calls the RPFe command-line utility to protect files. A protected copy of the original file is created in the same folder as the original file.
• The script copies all classification properties from the source file to the protected file. This ensures that all classification information is passed on from the original file to the protected file.
• Please make sure to configure the FMJ to run on specific extensions. If the FMJ is marked as continuous and configured to run on all file types (say on P:\foo) and a new file P:\foo\file.txt is created, then the FMJ kicks in recursively: first P:\foo\file.txt.rpf is created, which causes the FMJ to act on it, creating P:\foo\file.txt.rpf.rpf, and so forth. It is recommended to restrict the FMJ to specific file extensions only.
• Please note that the script creates a protected copy of a file and the original file remains in the share. Take care to have enough space on the volume to accommodate protected copies of sensitive data. If you intend to delete the original file after it is successfully protected, remove the comment on the line “remove-item $encryptfile” and test it in your environment before deployment.
• The script returns errors back to the FMJ. Any file that encountered an error will be reported in the FMJ error report and log.
• FCIRPFeFileProtection.ps1: Below is a sample PowerShell script that protects files using RPFe

       

      #

      #             Main Routine Begin

      #

# Arguments passed in by the FMJ: TemplateID, source file path, owner email, admin email
$TemplateID = $args[0]

$encryptfile = $args[1]

$newfile = $encryptfile + ".rpf"

       # verify that the new file name does not exist and attempt to find a new name

      $ver = 0

      while (Test-Path $newfile)

      {

         $ver = $ver + 1

         $newfile = $encryptfile + $ver + ".rpf"

   if ($ver -gt 100) {

                      exit  -1 # could not find a good name for the rpf file

               }

      }

       # get the owner of the file, if not found use the supplied administrator email address

$owneremail = $args[2]

# If the file has no owner email, the FMJ passes the literal macro text
# "[Source File Owner Email]", which splits into four arguments ("[Source",
# "File", "Owner", "Email]"), shifting the admin email to $args[6].
if ($owneremail -eq "[Source")

{

   $owneremail = $args[6]

      }

       # run the RPF Explorer to encrypt the file

      $arguments = "/Create /Rpf:"+ "`""+$newfile +"`"" +" /TemplateId:"+ $TemplateID +" /File:"+"`""+$encryptfile +"`"" +" /Owner:"+$owneremail

      $run = start-process –Wait –PassThru –FilePath "C:\Microsoft_Rights_Protected_Folder_Explorer\RPFExplorer.exe" –ArgumentList $arguments

      if ($run.ExitCode –eq 0)

      {

           # transfer properties from the old file to the new file

         $cm = New-Object -comobject FSRM.FSRMClassificationManager        

         $props = $cm.EnumFileProperties($encryptfile, 1)

          try

          {

             foreach ($prop in $props)

             {

                 $cm.SetFileProperty($newfile, $prop.Name, $prop.Value)

             }

          } catch [Exception] {

             remove-item $newfile

             exit -1

          }

       

      # remove-item $encryptfile

      # The original file can be removed after successfully creating a protected copy.

      # Before adding the above remove-item line, please test in your environment and verify that there is no data loss

       }

      exit $run.ExitCode

       #

      #             Main routine end

      #
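As a convenience, the File Management Job itself can also be created from PowerShell using the FileServerResourceManager cmdlets that ship with Windows Server 2012. The following is a hedged sketch only: the share path, template GUID, and admin email are placeholders, and you should verify the parameter sets with Get-Help New-FsrmFmjAction and Get-Help New-FsrmFileManagementJob in your environment before relying on it.

# Custom action that invokes the protection script for each matching file
$action = New-FsrmFmjAction -Type Custom -SecurityLevel LocalSystem `
    -Command "$env:SystemRoot\System32\WindowsPowerShell\v1.0\powershell.exe" `
    -CommandParameters '-File "%SystemRoot%\System32\FCIRPFeFileProtection.ps1" {00a956d6-d14c-4a2c-bf86-c1e70b731e7b} [Source File Path] [Source File Owner Email] admin@contoso.com'

# Continuous FMJ scoped to the share; remember to also restrict it to specific extensions
New-FsrmFileManagementJob -Name "RPFe protection" -Namespace @("G:\Share") -Action $action -Continuous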

       RPFe files on Non-Windows machines

RPF files are not recognized on non-Windows devices because there is no AD RMS client available on non-Windows platforms, so non-Windows users won't be able to consume RPF files.

       

       



       

       

       

       

       

      Introducing DFS Namespaces Windows PowerShell Cmdlets


      Overview

      In this blog post, let me introduce you to the new DFS Namespaces (DFSN) Windows PowerShell cmdlets that we have added in Windows Server 2012.

      Windows PowerShell is designed for automation and complex scripting, in part due to its powerful pipelining feature. Its object-based design streamlines user experience into focusing on the tasks you want to accomplish, and allows sophisticated output and input data processing. If you need further convincing on the value in using it for all your system administration needs, check out the extensive Windows PowerShell TechNet material.

The DFSN PS cmdlets cover most of the DFSN server-side management functionality that was previously available through the dfsutil command. As a side note, DFSN client-side management functionality, e.g. flushing the local referral cache, is not yet supported through the new cmdlets; this functionality of course continues to be available through the “dfsutil client ..” command-line tool. The really nice thing about the DFSN PS cmdlets is that while they run only on Windows Server 2012 or Windows 8 computers, you can use them to manage DFS Namespaces hosted even on previous Windows Server versions: Windows Server 2008 and Windows Server 2008 R2.

      I suspect most of you reading this blog post have used DFS Namespaces for a while, so the namespace concepts and terms in the following discussion should be very familiar. In case you need to refresh your terminology, Overview of DFS Namespaces is a good one to refer to.

At the top level, the new cmdlets fall into one of the following categories; here is the quick tour:

      1. Namespace-scoped: Each DFS namespace presents one virtual folder view. This set of cmdlets operates on one or more such DFS namespace(s).
      2. Namespace root target-scoped: Each DFS namespace can have one or more root targets - think of a root target as an SMB share presenting the namespace folder structure. This set of cmdlets acts on a root target.
      3. Namespace server-scoped: Each DFS namespace server can host one or more namespace root targets. This set of cmdlets acts on a namespace server at the aggregate level.
      4. Namespace folder-scoped: Each DFS namespace consists typically of a number of namespace folders organized in a virtual folder hierarchy. This set of cmdlets acts on one or more such namespace folders.
      5. Namespace folder target-scoped: Each “bottom-most” DFS namespace folder or a leaf node in the folder hierarchy is associated with one or more folder targets where the real data is stored (such folders with associated folder targets were referred to as “links” in previous versions). This set of cmdlets acts on one or more such namespace folder targets.

      Let us explore each of these categories of cmdlets in the same order.

      Namespace-scoped cmdlets

      This set of cmdlets provides Get/Set/New/Remove operations (called verbs in PS parlance) on a “DfsnRoot” object – which represents a DFS namespace.

      Cmdlet

      Description

      Get-DfsnRoot

The Get-DfsnRoot cmdlet retrieves the configuration settings for the specified namespaces, or for all known namespaces.

      New-DfsnRoot

      The New-DfsnRoot cmdlet creates a new DFS namespace with the specified configuration settings.

      Set-DfsnRoot

      The Set-DfsnRoot cmdlet modifies the configuration settings for the specified existing DFS namespace.

      Remove-DfsnRoot

      The Remove-DfsnRoot cmdlet deletes an existing DFS namespace.

      Here are a few examples:

      • Get the namespace information for a standalone namespace \\Contoso_fs\Public

PS C:\> Get-DfsnRoot -Path \\Contoso_fs\Public | Format-List

      Path : \\Contoso_fs\Public

      Description : Standalone test namespace

      Type : Standalone

      State : Online

      Flags : Site Costing

      TimeToLiveSec : 300

      • Create a new Windows Server 2008 mode namespace

PS C:\> New-DfsnRoot -Path \\corp.Contoso.com\Sales -TargetPath \\contoso_fs\Sales -Type Domainv2 | Format-List

      Path : \\corp.Contoso.com\Sales

      Description : Domain-based test namespace

      Type : Domain V2

      State : Online

      Flags :

      TimeToLiveSec : 300

Note: TargetPath is the path to an SMB share to be used as the root target for this namespace. It is just as easy to create the SMB share using Windows PowerShell; run the following on file server \\contoso_fs:

New-Item C:\Sales_root_folder -Type directory

New-SmbShare -Name Sales -Path C:\Sales_root_folder

• Enable root scalability and set the referral TTL for the namespace

PS C:\> Set-DfsnRoot -Path \\corp.Contoso.com\Sales -EnableRootScalability $true -TimeToLiveSec 400 | Format-List

      Path : \\corp.Contoso.com\Sales

      Description : Domain-based test namespace

      Type : Domain V2

      State : Online

      Flags : Root Scalability

      TimeToLiveSec : 400

      • Remove a domain-based namespace

      PS C:\> Remove-DfsnRoot -Path \\corp.Contoso.com\Sales -Force

      Namespace Root Target-scoped

      These cmdlets support the same Get/Set/New/Remove operations, but on root targets. And remember that there can be multiple active root targets for a domain-based DFS namespace (which is why domain-based namespaces are generally the recommended option).

      Cmdlet

      Description

      Get-DfsnRootTarget

      The Get-DfsnRootTarget cmdlet by default retrieves all the configured root targets for the specified namespace root, including the configuration settings of each root target.

      New-DfsnRootTarget

      The New-DfsnRootTarget cmdlet adds a new root target with the specified configuration settings to an existing DFS namespace.

      Set-DfsnRootTarget

      The Set-DfsnRootTarget cmdlet sets configuration settings to specified values for a namespace root target of an existing DFS namespace.

      Remove-DfsnRootTarget

      The Remove-DfsnRootTarget cmdlet deletes an existing namespace root target of a DFS namespace.

      Here are a few examples:

      • Retrieve the namespace root target information for a domain-based namespace \\corp.Contoso.com\Sales, it has two root targets in this example.

PS C:\> Get-DfsnRootTarget -Path \\corp.Contoso.com\Sales | Format-List

      Path : \\corp.Contoso.com\Sales

      TargetPath : \\contoso_fs\Sales

      State : Online

      ReferralPriorityClass : sitecost-normal

      ReferralPriorityRank : 0

      Path : \\corp.Contoso.com\Sales

      TargetPath : \\contoso_fs_2\Sales

      State : Online

      ReferralPriorityClass : sitecost-normal

      ReferralPriorityRank : 0

      • Add a new root target \\contoso_fs_3\Sales to an existing domain-based namespace, \\corp.Contoso.com\Sales

PS C:\> New-DfsnRootTarget -Path \\corp.Contoso.com\Sales -TargetPath \\contoso_fs_3\Sales

      Path : \\corp.Contoso.com\Sales

      TargetPath : \\contoso_fs_3\Sales

      State : Online

      ReferralPriorityClass : sitecost-normal

      ReferralPriorityRank : 0

• Set the referral priority class for the root target \\contoso_fs_2\Sales to global-low

PS C:\> Set-DfsnRootTarget -Path \\corp.Contoso.com\Sales -TargetPath \\contoso_fs_2\Sales -ReferralPriorityClass globallow

      Path : \\corp.Contoso.com\Sales

      TargetPath : \\contoso_fs_2\Sales

      State : Online

      ReferralPriorityClass : global-low

      ReferralPriorityRank : 0

      • Remove a domain-based namespace root target \\contoso_fs_2\Sales for a domain namespace \\corp.Contoso.com\Sales

PS C:\> Remove-DfsnRootTarget -Path \\corp.Contoso.com\Sales -TargetPath \\contoso_fs_2\Sales

      Namespace server-scoped

These two cmdlets operate on the namespace server overall, supporting Get/Set operations on the “DfsnServerConfiguration” object.

      Cmdlet

      Description

      Get-DfsnServerConfiguration

      The Get-DfsnServerConfiguration cmdlet retrieves the configuration settings of the specified DFS namespace server.

      Set-DfsnServerConfiguration

      The Set-DfsnServerConfiguration cmdlet modifies configuration settings for the specified server hosting DFS namespace(s).

      Here are a few examples:

      • Retrieve the namespace server configuration

PS C:\> Get-DfsnServerConfiguration -ComputerName contoso_fs | Format-List

      ComputerName : contoso_fs

      LdapTimeoutSec : 30

      PreferLogonDC :

      EnableSiteCostedReferrals :

      EnableInsiteReferrals :

      SyncIntervalSec : 3600

      UseFqdn : False

      • Set the Sync interval for the namespace server contoso_fs to 7200 seconds

PS C:\> Set-DfsnServerConfiguration -ComputerName contoso_fs -SyncIntervalSec 7200 | Format-List

      ComputerName : contoso_fs

      LdapTimeoutSec : 30

      PreferLogonDC :

      EnableSiteCostedReferrals :

      EnableInsiteReferrals :

      SyncIntervalSec : 7200

      UseFqdn : False

      Namespace Folder-scoped

This set of cmdlets operates on a DFS namespace folder path. In addition to Get/Set/New/Remove operations on a “DfsnFolder” object, renaming (Move) is also supported. Further, retrieving (Get), granting (Grant), revoking (Revoke), and removing (Remove) enumerate access on namespace folders is also supported through this set of cmdlets.

      Cmdlet

      Description

      New-DfsnFolder

      The New-DfsnFolder cmdlet creates a new folder in an existing DFS namespace with the specified configuration settings.

      Get-DfsnFolder

      The Get-DfsnFolder cmdlet retrieves configuration settings for the specified, existing DFS namespace folder.

      Set-DfsnFolder

      The Set-DfsnFolder cmdlet modifies settings for the specified existing DFS namespace folder with folder targets.

      Move-DfsnFolder

      The Move-DfsnFolder cmdlet moves an existing DFS namespace folder to an alternate specified location in the same DFS namespace.

      Grant-DfsnAccess

      The Grant-DfsnAccess cmdlet grants access rights to the specified user/group account for the specified DFS namespace folder with folder targets.

      Get-DfsnAccess

      The Get-DfsnAccess cmdlet retrieves the currently configured access rights for the specified DFS namespace folder with folder targets.

      Revoke-DfsnAccess

      The Revoke-DfsnAccess cmdlet revokes the right to access a DFS namespace folder with folder targets or enumerate its contents from the specified user or group account.

      Remove-DfsnAccess

The Remove-DfsnAccess cmdlet removes the specified user/group account from the access control list (ACL) of the DFS namespace folder with folder targets.

      Remove-DfsnFolder

      The Remove-DfsnFolder cmdlet deletes an existing DFS namespace folder with a folder target.

      Here are some examples for this set of cmdlets:

      • Create a new namespace folder data1 under a domain-based namespace \\corp.Contoso.com\Sales pointing to a folder target of \\contoso_fs\df1 and using client failback mode

      PS C:\> New-DfsnFolder -Path \\corp.Contoso.com\Sales\data1 -TargetPath \\contoso_fs\df1 -Description "My Data set 1" -EnableTargetFailback $true | Format-List

      Path : \\corp.Contoso.com\Sales\data1

      Description : My Data set 1

      State : Online

      Flags : Target Failback

      TimeToLiveSec : 300

      • Get the properties of a namespace folder data1

      PS C:\> Get-DfsnFolder -Path \\corp.Contoso.com\Sales\data1 | Format-List

Path : \\corp.Contoso.com\Sales\data1

      Description : My Data set 1

      State : Online

      Flags : Target Failback

      TimeToLiveSec : 300

      • Set the EnableInsiteReferrals property of a namespace folder data1 (this example of combining target failback with in-site-only referrals makes practical sense only if there are multiple folder targets for data1 that are in the same site as the client)

      PS C:\> Set-DfsnFolder -Path \\corp.Contoso.com\Sales\data1 -EnableInsiteReferrals $true | Format-List

Path : \\corp.Contoso.com\Sales\data1

      Description : My Data set 1

      State : Online

      Flags : {Target Failback, Insite Referrals}

      TimeToLiveSec : 300

      • Rename a namespace folder data1 to dataset1

      PS C:\> Move-DfsnFolder -Path \\corp.Contoso.com\Sales\data1 -NewPath \\corp.Contoso.com\Sales\dataset1 -Force

      • Grant enumerate access to User22 for the folder dataset1

      PS C:\> Grant-DfsnAccess -Path \\corp.Contoso.com\Sales\dataset1 -AccountName Contoso\User22 | Format-List

      Path : \\corp.Contoso.com\Sales\dataset1

      AccountName : Contoso\User22

      AccessType : enumerate

      • Get ACLs for namespace folder dataset1 (let’s say User44 was also granted access to the same folder)

      PS C:\> Get-DfsnAccess -Path \\corp.Contoso.com\Sales\dataset1 | Format-List

      Path : \\corp.Contoso.com\Sales\dataset1

      AccountName : Contoso\User22

      AccessType : enumerate

      Path : \\corp.Contoso.com\Sales\dataset1

      AccountName : Contoso\User44

      AccessType : enumerate

      • Revoke access for User22 for namespace folder dataset1

      PS C:\> Revoke-DfsnAccess -Path \\corp.Contoso.com\Sales\dataset1 -AccountName Contoso\User22 | Format-List

      Path : \\corp.Contoso.com\Sales\dataset1

      AccountName : Contoso\User22

      AccessType : none

      Path : \\corp.Contoso.com\Sales\dataset1

      AccountName : Contoso\User44

      AccessType : enumerate

      • Remove a user User22 from the access control list for namespace folder dataset1

      PS C:\> Remove-DfsnAccess -Path \\corp.Contoso.com\Sales\dataset1 -AccountName Contoso\User22

      A Get-DfsnAccess on the same path would now show the following:

      Path : \\corp.Contoso.com\Sales\dataset1

      AccountName : Contoso\User44

      AccessType : enumerate

      • Remove a namespace folder dataset1

      PS C:\> Remove-DfsnFolder -Path \\corp.Contoso.com\Sales\dataset1 -Force

      Namespace Folder Target-scoped

      This set of cmdlets operates on one or more folder target(s) of a namespace folder. Specifically, the same four operations Get/Set/New/Remove are supported on the “DfsnFolderTarget” object.

      Cmdlet

      Description

      New-DfsnFolderTarget

      The New-DfsnFolderTarget cmdlet adds a new folder target with the specified configuration settings to an existing DFS namespace folder.

      Get-DfsnFolderTarget

      The Get-DfsnFolderTarget cmdlet retrieves configuration settings of folder target(s) of an existing DFS namespace folder.

      Set-DfsnFolderTarget

      The Set-DfsnFolderTarget cmdlet modifies settings for the folder target of an existing DFS namespace folder.

      Remove-DfsnFolderTarget

      The Remove-DfsnFolderTarget cmdlet deletes a folder target of an existing DFS namespace folder.

      Here are some examples for this set of cmdlets:

      • Add a new namespace folder target \\contoso_fs2\df1 for the namespace folder \\corp.Contoso.com\Sales\dataset1

      PS C:\> New-DfsnFolderTarget -Path \\corp.Contoso.com\Sales\dataset1 -TargetPath \\contoso_fs2\df1 | fl

      Path : \\corp.Contoso.com\Sales\dataset1

      TargetPath : \\contoso_fs2\df1

      State : Online

      ReferralPriorityClass : sitecost-normal

      ReferralPriorityRank : 0

      • Retrieve all the folder targets for the namespace folder \\corp.Contoso.com\Sales\dataset1

      PS C:\> Get-DfsnFolderTarget -Path \\corp.Contoso.com\Sales\dataset1 | fl

      Path : \\corp.Contoso.com\Sales\dataset1

      TargetPath : \\contoso_fs\df1

      State : Online

      ReferralPriorityClass : sitecost-normal

      ReferralPriorityRank : 0

      Path : \\corp.Contoso.com\Sales\dataset1

      TargetPath : \\contoso_fs2\df1

      State : Online

      ReferralPriorityClass : sitecost-normal

      ReferralPriorityRank : 0

      • Set the folder target state for \\contoso_fs2\df1 to offline

      PS C:\> Set-DfsnFolderTarget -Path \\corp.Contoso.com\Sales\dataset1 -TargetPath \\contoso_fs2\df1 -State Offline | fl

      Path : \\corp.Contoso.com\Sales\dataset1

      TargetPath : \\contoso_fs2\df1

      State : Offline

      ReferralPriorityClass : sitecost-normal

      ReferralPriorityRank : 0

      • Remove the folder target \\contoso_fs2\df1 for the namespace folder \\corp.Contoso.com\Sales\dataset1

      PS C:\> Remove-DfsnFolderTarget -Path \\corp.Contoso.com\Sales\dataset1 -TargetPath \\contoso_fs2\df1 -Force

      Conclusion

      I hope this gave you a decent overview of the new DFSN cmdlets in Windows Server 2012. And hope you will start using them soon!

      Just be sure to download the "Windows Server 2012 and Windows 8 client/server readiness cumulative update" before you start working with the DFSN PS cmdlets, as the update includes a couple of fixes related to DFSN PS cmdlets. I am told that this update should be eventually available as a General Distribution Release (GDR) on Windows Update, but why wait? You can download it today and start playing with the cmdlets!

      Server for NFS PowerShell Cmdlets


       

This post covers the Server for NFS PowerShell cmdlets in Windows Server 2012, with a brief description of each.

You can get the Server for NFS PowerShell cmdlets by installing the “Server for NFS” role service from File and Storage Services, or by installing “Services for Network File System Management Tools” from the Remote Server Administration Tools.

You can list all of the Server for NFS PowerShell cmdlets by running the following command:

Get-Command -Module NFS
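Each cmdlet also comes with built-in help. For example, to list just the share cmdlets and then view the usage examples for one of them:

Get-Command -Module NFS -Noun NfsShare

Get-Help Grant-NfsSharePermission -Examples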

      The following table lists Server for NFS cmdlets grouped by functionality.

       

      Group

      Cmdlets

      Description

      Share

      Get-NfsShare

      Lists all the shares on the server along with their properties.

      New-NfsShare

      Creates a new share on the server.

      The share can be a standard share or a clustered share.

      Set-NfsShare

      Modifies the share configuration of standard as well as clustered shares.

      Remove-NfsShare

      Deletes NFS shares from the server. The share can be a standard or clustered share.

      Share Permission

      Get-NfsSharePermission

      Retrieves the permissions on a share.

      Grant-NfsSharePermission

      Adds or modifies share permissions.

       

Permissions can be granted to individual hosts, globally (all machines), or to groups such as clientgroups or netgroups.

       

      Allows granting read only, read write or no access to clients.

      Revoke-NfsSharePermission

      Removes permissions for a given client on a share.

      Server Configuration

      Get-NfsServerConfiguration

      Retrieves the Server for NFS configuration.

      Set-NfsServerConfiguration

      Modifies Server for NFS configuration.

      Client Configuration

      Get-NfsClientConfiguration

      Retrieves Client for NFS configuration.

      Set-NfsClientConfiguration

      Modifies Client for NFS configuration. Multiple client configuration properties can be modified at the same time.

      Netgroup Store

      Set-NfsNetgroupStore

      Modifies the netgroup source configuration on the server. The netgroup source can be Active Directory, RFC2307 compliant LDAP server or NIS server.

      Get-NfsNetgroupStore

      Retrieves the netgroup source configuration on the server. The server can be configured to use Active Directory, RFC2307 compliant LDAP or NIS server netgroup stores as its netgroup source.

      Identity Mapping Store

      Get-NfsMappingStore

      Retrieves the identity mapping source on the server or client. The identity mapping source can be configured to use local files such as passwd and group files, Active Directory, RFC2307 compliant LDAP server or a User Name Mapping server as its identity mapping source.

      Set-NfsMappingStore

      Modifies the mapping store on NFS server or client.

      Install-NfsMappingStore

      Installs and configures an Active Directory Lightweight Directory Service server as mapping store.

       

      The cmdlet installs the AD LDS role, creates an instance for the mapping store and also adds the schema required for the UID and GID attributes for user/group objects.

       

      Test-NfsMappingStore

      Verifies if the mapping store on the server has been configured correctly. It verifies that the mapping store is reachable and also checks if necessary schema is installed on the server and if the domain functional level is Windows Server 2003 R2 and above in case of domain based mapping store.

      Identity Mapping

      Get-NfsMappedIdentity

Lists all the mappings between a user's UNIX and Windows accounts from the identity mapping source.

The cmdlet can retrieve mappings from various mapping sources. The mapping source can be Active Directory, an RFC2307 compliant LDAP server, or mapping files (passwd/group files). If the mapping source is not specified, the cmdlet uses the server’s mapping source configuration to retrieve the information.

      New-NfsMappedIdentity

Creates a new mapping from a Windows user (or group) account to the corresponding UNIX identifier.

       

      The mapping store can be Active Directory or RFC2307 compliant LDAP server.

       

      User or group account is created if they don’t exist.

      Set-NfsMappedIdentity

      Modifies or sets a mapping between a Windows user/group account to UNIX identifiers.

      The mapping store can be Active Directory or RFC2307 compliant LDAP server.

      Remove-NfsMappedIdentity

      Removes a mapping from a user or group windows account.

      Resolve-NfsMappedIdentity

Checks that the server can resolve a mapping from a given user or group account name to a UNIX identifier and vice versa.

      The server uses its identity mapping source configuration to retrieve the mapping. 

      Test-NfsMappedIdentity

      Verifies existing mapped identities and checks if they are configured correctly.

The cmdlet checks for duplicate UID/GID assignments and also validates the group membership of user accounts according to their GID assignment.

      Client

      Get-NfsMountedClient

      Enumerates the clients connected to Server for NFS using NFS v4.1.

      Revoke-NfsMountedClient

Revokes a client NFS v4.1 connection to Server for NFS.

      Clientgroup

      Get-NfsClientgroup

      Lists all the clientgroups on the Server for NFS.

      New-NfsClientgroup

      Creates a new clientgroup on the server. Members can also be added to the new clientgroup at the time of creation.

      Set-NfsClientgroup

      Adds or removes members from a clientgroup. Multiple members can be added or removed in a single command.

      Rename-NfsClientgroup

      Renames a clientgroup.

      Remove-NfsClientgroup

      Deletes a clientgroup from the server.

      Netgroup

      Get-NfsNetgroup

      Enumerates netgroups configured in Active Directory, RFC2307 compliant LDAP server or NIS server.

      Remove-NfsNetgroup

      Deletes a netgroup from Active Directory or LDAP server.

      New-NfsNetgroup

      Creates a new netgroup in Active Directory or LDAP server. Members can also be added to netgroup at the time of creation.

      Set-NfsNetgroup

Adds or removes members from a netgroup. The netgroup store can be Active Directory or an RFC2307 compliant LDAP server.

      Lock

      Get-NfsClientLock

Lists the locks opened by a client on the server. The cmdlet lists both NLM and NFS v4.1 byte-range locks.

      Revoke-NfsClientLock

      Revokes locks on a given set of files or locks held by a given client computer.

      Open File

      Revoke-NfsOpenFile

      Revokes open state and handles for files opened by clients using NFS V4.1 to Server for NFS.

      Get-NfsOpenFile

Enumerates files opened using NFS v4.1 on Server for NFS.

      Session

      Disconnect-NfsSession

Disconnects an NFS v4.1 session on Server for NFS.

      Get-NfsSession

Lists the currently open NFS v4.1 sessions on Server for NFS.

      Statistics

      Reset-NfsStatistics

      Resets the statistics on Server for NFS.

      Get-NfsStatistics

Enumerates NFS and MOUNT statistics on the server.

       

      Feedback

      Please send feedback you might have to nfsfeed@microsoft.com

      Backup and Restore of Server for NFS Share Settings


      Introduction

Windows Server 2012 ships with a rich set of PowerShell cmdlets to perform most of the Server for NFS share management operations. Using these cmdlets as building blocks, administrators can easily build backup and restore scripts for NFS share settings and permissions that best suit their needs. This post demonstrates how the Export-CliXml and Import-CliXml cmdlets can be used to back up and restore Server for NFS share settings.

      Server for NFS share management cmdlets

Let us take a quick look at the Server for NFS share management cmdlets in Windows Server 2012. If you are not already familiar with these cmdlets, please look at their help content for more details on using them.

      Share cmdlets

       

• Get-NfsShare – Enumerates shares on the server
• New-NfsShare – Creates a new share
• Remove-NfsShare – Deletes one or more shares
• Set-NfsShare – Modifies share settings

      Share permission cmdlets

       

• Get-NfsSharePermission – Enumerates NFS share permissions
• Grant-NfsSharePermission – Adds or modifies share permissions
• Revoke-NfsSharePermission – Removes permission for a client

      Exporting Shares Settings

The Export-CliXml cmdlet can be used to export objects to XML files. Similarly, the Import-CliXml cmdlet can be used to import the content of an XML file (generated using Export-CliXml) back into the corresponding objects in Windows PowerShell. If you are not familiar with these cmdlets, please refer to the PowerShell help.

To export all the NFS shares on the server, invoke Get-NfsShare and pipe the results to the Export-CliXml cmdlet:

      Get-NfsShare | Export-CliXml -Path c:\shares.xml


Running the above command saves only the share settings into the file; it does not save share permissions. Exporting share permissions is covered in the next section, as permissions are handled by another set of cmdlets.

      The following share settings are exported to the file 

      Name

      NetworkName

      Path

      IsClustered

      IsOnline

      AnonymousAccess

      AnonymousGid

      AnonymousUid

      Authentication

      UnmappedUserAccess

       

To export a single share, use the following command. You can also filter the shares you want to save by using wildcards for the share name, path, and network name. See the Get-NfsShare help for more details:

      Get-NfsShare -name shareA | Export-CliXml -Path c:\shares.xml
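Since wildcards are supported, a subset of shares can be exported in one go. For example, to save every share whose name begins with “share” (a sketch following the same pattern as above):

Get-NfsShare -Name share* | Export-CliXml -Path C:\SelectedShares.xml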

       

      Exporting Share Permissions

To save the permissions of all the shares, first use Get-NfsShare to enumerate all the shares on the server. Use a PowerShell pipeline to pass the results to Get-NfsSharePermission to get the permissions for each share. The output of these two commands can then be saved to an export file using the Export-CliXml cmdlet:

      Get-NfsShare | Get-NfsSharePermission | Export-CliXml -Path c:\SharePermissions.xml


Similarly, to export the permissions of a single share, specify the share name in Get-NfsSharePermission:

      Get-NfsSharePermission shareA | Export-CliXml -Path c:\SharePermissions.xml

       

      Importing Shares Settings

If you have an export file generated using Export-CliXml that contains the share configuration, follow these steps to import it on a Server for NFS running Windows Server 2012.

       We have the choice of using either New-NfsShare or Set-NfsShare when performing the import operation. Use Set-NfsShare if the share already exists on the server and you would like to overwrite it with settings from the exported file.

       To create new shares on the server using the export file, use Import-CliXml to read the export file and create objects that can be given as input to New-NfsShare. The following example creates new shares on the server.

      Import-CliXml c:\shares.xml | New-NfsShare


      The screen shot shows creation of new shares on the server after importing the shares from the export file.

If the shares already exist on the server, their settings can be restored to those in the export file by piping the output of Import-CliXml to the Set-NfsShare cmdlet:

      Import-CliXml c:\shares.xml | Set-NfsShare

      Here is an example to show how this works. 

The server has a share named "ShareA". The "AnonymousGid" and "AnonymousUid" properties of this share are both -2. The export file "shares.xml" is imported and the share is modified using the Set-NfsShare cmdlet, which changes the "AnonymousGid" and "AnonymousUid" properties of the share to 100 and 200 respectively.

       

      Importing Share Permissions

      Before we talk about importing share permissions let’s briefly look at the cmdlets that will be used to perform this operation. 

A client can be granted readwrite, readonly, or no-access permission on a share. The client referred to here can be a host machine or a group such as a netgroup or clientgroup. The Grant-NfsSharePermission cmdlet can be used to add permission for a client if it doesn’t already exist. If permission for a client is already present on the share, it can also be modified using the same Grant-NfsSharePermission cmdlet.

To remove a client from the list of permissions on a share, use Revoke-NfsSharePermission.

      Now let’s get back to importing share permissions from the file. If you have an exported file generated using Export-CliXml that contains the permissions for the shares, use this command to import those permissions.

      Import-CliXml C:\sharespermission.xml | foreach{ $_ | Grant-NfsSharePermission}

The import command used above has the following implications:

      1. Permissions are added to the share if the permissions do not exist
      2. If the permission being imported exists both on the share and in the export file, then the client permission on the share will be overwritten
3. If the permission being imported exists only on the share but not in the export file, then the client permission on the share will be retained

Note: If you don’t want to retain the existing share permissions on the share, remove them before importing from the file, as in the sketch below.
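For example, here is a minimal sketch that clears the existing permissions on shareA before importing. It assumes that Revoke-NfsSharePermission accepts the ClientName and ClientType values reported by Get-NfsSharePermission:

# Remove the current permissions on shareA, then restore them from the export file
Get-NfsSharePermission shareA | foreach { Revoke-NfsSharePermission -Name shareA -ClientName $_.ClientName -ClientType $_.ClientType }

Import-CliXml C:\sharespermission.xml | foreach { $_ | Grant-NfsSharePermission }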

       Example:

In this example, shareA has the following share permissions:

      • Host machine "nfs-node1" has read write access and root access disabled
      • Global permission also known as "All Machines" has read only access and root access is enabled.

       The exported file has  

      • Host machine "nfs-fileserver" has read only permission
      • "All Machines" has no access and root access is disabled.

       After the import operation, the share permissions for ShareA would be 

      • The permission for host “nfs-fileserver” is added to the share
      • The "All Machines" permission has changed from read only and root access enabled to no access and root access disabled.
      • The permission for “nfs-node1” is retained and is not modified. 

       

       

       

      Feedback

      Please send feedback you might have to nfsfeed@microsoft.com

       
