
Dell PowerEdge VRTX: 4-Node Cluster-in-a-Box That Can Be Deployed in 45 Minutes


Hi Folks –

Dell’s latest server, the PowerEdge VRTX Shared Infrastructure Platform, is an incredible cluster-in-a-box that delivers highly available services and storage in one beautiful package. There are some great innovations in this chassis, for sure; the shared PCI bus and virtualized PERC8 storage controllers are really groundbreaking stuff. However, what really has me impressed is the price. On the Dell website today, they have this chassis and two blades available for under $10,000, which is about as low a price as you can find for a Windows Server 2012 cluster with hardware RAID!

 

[Image: Dell PowerEdge VRTX]

Perfect for Office Environments

Dell bills the PowerEdge VRTX as a solution that “redefines office IT.” And I’m inclined to agree: its innovative, converged design is based on a modular chassis that supports 4 compute nodes and up to 48TB of storage in a 5U rack-mount enclosure—or you can turn it sideways and it’s a tower that fits under your desk. It’s great for small businesses, which often face space constraints, hardware sprawl, low tolerance for heat and noise, and a lack of datacenter-class power or cooling. The PowerEdge VRTX is built to address all these issues, making it easy to deploy compute and storage resources wherever you need them.

A Four-Node Cluster-in-a-Box That Can Be Deployed in 45 Minutes

The speed with which the PowerEdge VRTX can be deployed is just as impressive as its physical design. Last year, I blogged about how the OEM Appliance OOBE (out-of-the-box experience) originally developed for Windows Storage Server was being included in Windows Server 2012 (Standard and Datacenter Editions) as well as both editions of Windows Storage Server 2012. At that time, it supported the deployment of standalone servers or 2-node failover clusters. This spring, the Windows team released an update that adds support for 4-node clusters. During its development, we worked closely with Dell using original prototypes of the hardware.

Dell is including that update with the PowerEdge VRTX to deliver a four-node cluster-in-a-box that can be deployed in 45 minutes. After powering on the system, the user can simply follow the sequence of tasks provided by Dell, which includes configuring the network, joining a domain, provisioning storage, and creating the cluster. Here is a screenshot that shows the Initial Configuration Tasks window:

[Screenshot: Initial Configuration Tasks window]

And here’s a great shot of the Dell booth at TechEd, where they highlighted the PowerEdge VRTX and how quickly it can be deployed.

[Photo: Dell booth at TechEd]

Virtualized Storage Controller

Dell’s innovative PCI-e virtualization technology and PERC8 storage controller enable simultaneous access to storage from each server node. This reduces the overhead in accessing shared storage and also reduces the total storage needed by allowing each server node to share the same physical storage resources.

Such innovations make the PowerEdge VRTX a great Hyper-V host. You can easily scale-up RAM, add more CPUs, and install additional PCI-e add-in cards (including several 10GbE options from Intel and Broadcom) to run numerous VMs at once, handle huge SQL instances, or support thousands of IIS workloads—all while migrating running VMs from one node to another at lightning speed. You can easily assign the shared PCI-e slots to any of the nodes using the system’s Chassis Management Controller (CMC).

Dell recently released a reference architecture to help customers deploy virtualized desktop infrastructure (VDI) workloads on the PowerEdge VRTX using Hyper-V. The reference architecture includes sizing guidance for two default configurations: two M620 blades and 15 disks, which is designed to support 250 virtual Windows 8 desktops; and four M620 blades and 25 disks, which is designed to support 500 virtual Windows 8 desktops. You can configure and price these solutions using Dell’s Solutions Configurator.

System Specifications

Detailed system specifications for the PowerEdge VRTX can be found on the Dell website or the downloadable data sheet. Here are a few highlights:

[Table: PowerEdge VRTX specification highlights]


Recap

Put simply, the PowerEdge VRTX has the potential to be a game-changer when it comes to office IT. I’m hard-pressed to think of another system that can match all the benefits that the PowerEdge VRTX has to offer, including:

  • Optimized for office environments. The PowerEdge VRTX is a compact, shared infrastructure platform with office-level acoustics and 110V power support. It provides front-access KVM, USB, LCD display, and optional DVD-RW, and can be deployed as a standalone tower or a 5U rack enclosure.
  • Converged servers, storage and networking. The PowerEdge VRTX combines servers, storage, and networking to provide impressive performance and capacity. A single chassis supports up to four dual-socket PowerEdge M520 or M620 server blades, up to 8 PCI devices, and up to 48 TB storage capacity. If you start with two or three blades, it is easy to add more later to scale the system.
  • Flexible shared storage. All four server nodes have access to low-latency, shared internal storage, making the PowerEdge VRTX ideal for virtualization and clustering, highly economical, and easier to manage than traditional SANs. A single chassis can support up to 12 3.5” HDDs (48TB max) to scale for capacity or up to 25 2.5” HDDs (30TB max) to scale for performance.
  • Integrated networking and flexible I/O. The PowerEdge VRTX includes an embedded gigabit Ethernet switch, eliminating the need to purchase a separate networking device. An optional pass-through Ethernet module with eight GbE ports supports up to 8Gb aggregate bandwidth. PCIe resources are shared across the compute nodes within the chassis.
  • Simple, efficient systems management. A rich, unified system management console reduces administration time and effort, enabling you to deploy, monitor, update, and maintain the system through a single pane of glass that covers servers, storage and networking. Dell OpenManage 1.2 with Chassis Management Controller (CMC) and GeoView enables you to monitor all PowerEdge VRTX systems, anywhere on your network.
  • Seamless management integration. PowerEdge VRTX systems management integrates with major third-party management tools, allowing you to use what you already know and own. This makes it easy to deploy VRTX into infrastructures already managed by Dell OpenManage or third-party management solutions, such as Microsoft System Center and VMware vCenter.

If the above has you interested in Dell’s PowerEdge VRTX, the Shared Infrastructure page on the Dell website is a good place to start. I think you’ll be impressed… I know that I am.

Cheers,
Scott M Johnson
Senior Program Manager
Windows Server OEM Appliance OOBE


Managing iSCSI Target Server through Storage Cmdlets


Context

Windows Server 2012 R2 ships with a rich set of standards-compliant storage management functionality. This functionality was originally introduced in Windows Server 2012; for the related concepts, see Jeff’s excellent introductory blog post.

iSCSI Target Server on its part shipped its SMI-S provider as part of the OS distribution for the first time in Windows Server 2012 R2. For more details on its design and features, see my previous blog post iSCSI Target Server in Windows Server 2012 R2.

I will briefly summarize here how the iSCSI Target Server SMI-S provider fits into the storage management ecosystem; please refer to Jeff’s blog post for the related deep-dive discussion. Storage cmdlets are logically part of the Storage Management API (SM-API) layer, and the ‘Windows Standards-based Storage Management Service’ plumbs the Storage cmdlet management interactions through to the new iSCSI Target Server SMI-S provider. This blog post is all about how you can use the new SMI-S provider hands-on, using the SM-API Storage cmdlets. Note that iSCSI Target Server can alternatively be managed through its native iSCSI-specific cmdlets and related WMI provider APIs. While the native approach allows you to comprehensively manage all iSCSI-specific configuration aspects, the Storage cmdlet approach helps you normalize on a standard set of cmdlets across different technologies (e.g., Storage Spaces, third-party storage arrays).

Before we jump into the discussion of Storage cmdlet-based management, you should keep two critical caveats in mind:

  • Jeff’s caution about having multiple management clients potentially tripping up each other; to quote –

“You should think carefully about where you want to install this service – in a datacenter you would centralize the management of your resources as much as practicable, and you don’t want to have too many points of device management all competing to control the storage devices. This can result in conflicting changes and possibly even data loss or corruption if too many users can manage the same arrays.”

  • You must decide how you want to manage a Windows-based iSCSI Target Server starting from the time the feature is installed on Windows Server. To begin with, you are free to choose from either WMI/PowerShell, or SM-API/SMI-S. But once you start using one management approach, you must stick with that approach until the iSCSI Target Server is decommissioned. Switching between the management approaches could leave the iSCSI Target Server in an inconsistent and unmanageable state, and potentially could even cause data loss.

For most users, the compelling reason for managing an iSCSI Target Server through SMI-S is usually one of the following:

  1. They have existing scripts that already are written to the Windows Storage cmdlets and the Windows SMI-S cmdlets, and want to use the same scripts to manage Windows Server-based iSCSI Target Server, or,
  2. They use 3rd party storage management products that consume the Storage Management API (SM-API) and they plan to manage Windows Server-based iSCSI Target Server using that same 3rd party software, or,
  3. (Perhaps most likely) They use SCVMM to manage iSCSI Target Server-based storage through SMI-S. This can be accomplished using SCVMM-specific cmdlets and UI, and is covered in detail elsewhere, so this blog post focuses only on the non-SCVMM management approach.

Relating Terminology

Let us do a quick review of SM-API/SMI-S concepts so you can intuitively relate them to native Windows terminology as we move into hands-on discussion:

SM-API or SMI-S concept → iSCSI Target Server implementation term, with more detail:

  • SMI-S Provider → WMI provider. The manageability end point for the iSCSI Target Server; the iSCSI Target Server SMI-S provider is itself built on the WMI architecture under the covers.

  • Storage Pool → Hosting volume where the VHD files are stored. Storage pools in SMI-S subdivide the total available capacity in the system into groups as desired by the administrator. In the iSCSI Target Server design, each virtual disk is persisted on a file system hosted on a Windows volume.

  • Storage Volume → iSCSI Virtual Disk (SCSI Logical Unit). An SMI-S storage volume is an allocation of storage capacity exposed by the storage system; a storage volume is provisioned out of a storage pool. The Windows Server Storage Service implementation calls this a ‘Virtual Disk’; the iSCSI Target Server design calls it an iSCSI virtual disk (see New-IscsiVirtualDisk).

  • Masking operation → Removal of a mapping. Masking a storage volume removes access to that SCSI LU from an initiator. In the iSCSI Target Server design, it is not possible to selectively mask a single LU from an individual initiator, although a single LU can be removed from the SCSI target (Remove-IscsiVirtualDiskTargetMapping). The access privilege can then be removed from an initiator at the target scope.

  • Unmasking operation → Adding a mapping. Unmasking a storage volume grants an initiator access to that SCSI LU. In the iSCSI Target Server design, it is not possible to selectively unmask a single LU for an individual initiator, although a single LU can be added to a SCSI target (Add-IscsiVirtualDiskTargetMapping). The access privilege can then be granted to an initiator at the target scope.

  • SCSI Protocol Controller (SPC) → SCSI Target. An SPC refers to the initiator view of the target. In the Windows Server Storage Service implementation, this is logically equivalent to a masking set, which iSCSI Target Server realizes as a SCSI target (see New-IscsiServerTarget).

  • Snapshots → Snapshots. The terminology is the same on this one, but there are a couple of critical differences to keep in mind between the volsnap-based iSCSI virtual disk snapshots you can create with Checkpoint-IscsiVirtualDisk and the diff VHD-based snapshots of the original VHD you can create with New-VirtualDiskSnapshot: the former is a read-only snapshot, whereas the latter is a writable snapshot. Also be aware that you cannot manage a snapshot taken in one management approach (say, WMI) via the tools of the other approach (say, SMI-S).

  • Storage Subsystem → iSCSI Target Server. This is a straightforward mapping for standalone iSCSI Target Servers, where the iSCSI SMI-S provider implementation is simply an embedded SMI-S provider for that target server. In the case of a clustered iSCSI Target Server, however, the SMI-S provider at the client access point reports not only the storage subsystems (iSCSI Target Server resource groups) owned by that cluster node, but also any additional iSCSI Target Server resource groups owned by the rest of the failover cluster nodes, reporting each as a storage subsystem. Put differently, the SMI-S provider then acts as an embedded provider for that cluster node and as a proxy SMI-S provider for the rest of the cluster.

Register SMI-S Provider on a Management Client

To register an SMI-S provider, you need to know the provider’s URI – it is the machine name for a standalone target and the cluster resource group name in the case of a clustered iSCSI Target Server – and credentials for a user account that is in that target server’s local Administrators security group. In the following example, the SMI-S provider can be accessed on the machine “fsf-7809-09” and the user account with administrative privileges is “contoso\user1”. The Get-Credential cmdlet prompts for the password at run time (see the Get-Credential documentation for other, more scripting-friendly, albeit less secure, options to accomplish the same).

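For example (a hedged sketch; Register-SmisProvider is the standard registration cmdlet, and the -Credential parameter is assumed here based on the Get-Credential usage described above):

$cred = Get-Credential "contoso\user1"    # prompts for the password at run time
Register-SmisProvider -ConnectionUri https://fsf-7809-09 -Credential $cred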

Discover the Storage Objects

After registering the provider, you can update the storage provider cache to get an inventory of all the storage objects manageable through this SMI-S provider:

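For example (a sketch using the standard SM-API cmdlet):

Update-StorageProviderCache -DiscoveryLevel Full    # enumerate all manageable objects; can take a while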

You can then list the storage subsystem and related details. Note that although the following screenshots show items related to Storage Spaces, those are unrelated to the iSCSI Target Server SMI-S provider. iSCSI Target Server SMI-S provider items are highlighted in green.

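For example (a sketch; property names are from the standard Storage module, and actual output varies per deployment):

Get-StorageSubSystem | Select-Object FriendlyName, HealthStatus, OperationalStatus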

You can inspect the available storage pools and filter by friendly name, as shown in the example. Notice that the hosting volumes, which you already know iSCSI Target Server reports as storage pools, carry friendly names that include the respective drive letters on the target server. Also notice the Primordial storage pool, which effectively represents the entire capacity available on the iSCSI Target Server. Keep in mind, however, that you can create SCSI Logical Units only out of what SMI-S calls “concrete storage pools”, i.e., only pools that have the IsPrimordial attribute set to false.

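For example (a sketch; the pool friendly name is illustrative of the drive-letter naming described above):

Get-StoragePool | Select-Object FriendlyName, IsPrimordial, Size, AllocatedSize
Get-StoragePool -FriendlyName "iSCSITarget: FSF-7809-09: C:" | Format-List *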

Create Storage Objects

The first operation we show carves a new logical disk out of an existing concrete storage pool. You can use one of two cmdlets to create a new virtual disk: New-VirtualDisk and New-StorageSubsystemVirtualDisk. Technically, the iSCSI Target Server SMI-S provider works fine with either cmdlet and we will show examples of both, although you probably want to use New-VirtualDisk so you can intentionally select the storage pool to provision the storage volume from. New-StorageSubsystemVirtualDisk, in contrast, auto-selects the storage pool to provision the capacity from.

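A hedged sketch of creating a virtual disk from a named concrete pool (the pool name, disk name, and size are illustrative):

New-VirtualDisk -StoragePoolFriendlyName "iSCSITarget: FSF-7809-09: C:" -FriendlyName "vd01" -Size 10GB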

Or you can provision the largest possible virtual disk by using the -UseMaximumSize parameter as shown:

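For example (a sketch; names are illustrative):

New-VirtualDisk -StoragePoolFriendlyName "iSCSITarget: FSF-7809-09: C:" -FriendlyName "vd02" -UseMaximumSize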

If you prefer to use the New-StorageSubsystemVirtualDisk cmdlet, you need to specify the storage subsystem parameter, and in the example below, you can see it auto-selected a storage pool in the selected subsystem - the “iSCSITarget: FSF-7809-09: C:” pool.

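A hedged sketch (the -StorageSubSystemFriendlyName parameter is assumed; verify with Get-Command New-StorageSubsystemVirtualDisk -Syntax on your build):

New-StorageSubsystemVirtualDisk -StorageSubSystemFriendlyName "iSCSITarget: FSF-7809-09" -FriendlyName "vd03" -Size 10GB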

With the New-VirtualDiskSnapshot cmdlet, you can take a snapshot of a virtual disk.

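A hedged sketch (pipeline input and the -FriendlyName parameter for the snapshot name are assumptions; verify the syntax on your build):

Get-VirtualDisk -FriendlyName "vd01" | New-VirtualDiskSnapshot -FriendlyName "vd01-snap"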

To create a masking set for a storage subsystem, use New-MaskingSet. You must include at least one initiator in the new masking set, and you can also map one or more virtual disks to it. An iSCSI initiator is identified by its IQN, and virtual disks by their names. The script below creates a new masking set, adds one initiator and two virtual disks, and then queries the new masking set to confirm the details.

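A hedged sketch along those lines (the IQN, disk names, and friendly names are illustrative; verify parameter names with Get-Command New-MaskingSet -Syntax):

$ss = Get-StorageSubSystem -FriendlyName "iSCSITarget: FSF-7809-09"
New-MaskingSet -StorageSubSystemUniqueId $ss.UniqueId -FriendlyName "MaskingSet01" -InitiatorAddresses "iqn.1991-05.com.microsoft:app01.contoso.com" -VirtualDiskNames "vd01","vd02" -DeviceAccesses ReadWrite
Get-MaskingSet -FriendlyName "MaskingSet01" | Format-List *    # confirm the details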

Modify and Remove the Storage Objects

With the Resize-VirtualDisk cmdlet, you can expand an existing virtual disk, as shown in the following example. Note, however, that you will still need to extend the partition and the volume to make the additional capacity usable.

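For example (a sketch; the new size is illustrative):

Get-VirtualDisk -FriendlyName "vd01" | Resize-VirtualDisk -Size 20GB
# Afterwards, extend the partition and volume on the initiator to use the new capacity.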

You can also modify the masking set that you’ve just created by adding more virtual disks or initiators to it, as the following examples show. Note, however, that you do not want to share a virtual disk with a file system across multiple initiators unless the initiators (hosts) are clustered; otherwise, the setup will inevitably cause data corruption sooner or later!

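Hedged sketches of both modifications (names are illustrative; the -InitiatorIds parameter of Add-InitiatorIdToMaskingSet is assumed):

$ms = Get-MaskingSet -FriendlyName "MaskingSet01"
Add-VirtualDiskToMaskingSet -MaskingSetUniqueId $ms.UniqueId -VirtualDiskNames "vd03" -DeviceAccesses ReadWrite
Add-InitiatorIdToMaskingSet -MaskingSetUniqueId $ms.UniqueId -InitiatorIds "iqn.1991-05.com.microsoft:app02.contoso.com"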

You can of course also remove the masking set and virtual disks that you just created, as the following examples illustrate. Note the order of operations: you must first remove a virtual disk from its masking sets (or remove the masking sets), and only then delete the virtual disk.

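A hedged sketch of the teardown in that order (names are illustrative):

$ms = Get-MaskingSet -FriendlyName "MaskingSet01"
Remove-VirtualDiskFromMaskingSet -MaskingSetUniqueId $ms.UniqueId -VirtualDiskNames "vd01","vd02","vd03"
Get-MaskingSet -FriendlyName "MaskingSet01" | Remove-MaskingSet
Remove-VirtualDisk -FriendlyName "vd03"    # prompts for confirmation before deleting data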

Finally, when you no longer need to manage the iSCSI Target Server from this management client, you can unregister the SMI-S provider as shown in the following example.

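A likely equivalent (assuming Unregister-SmisProvider mirrors the registration cmdlet):

Unregister-SmisProvider -ConnectionUri https://fsf-7809-09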

I want to acknowledge my colleague Juan Tian, who helped me with all the preceding Windows PowerShell examples.

Finally, I sincerely hope that you now have the tools you need to work with an iSCSI Target Server SMI-S provider. Give it a try and let me know how it’s working for you!

iSCSI Target Server in Windows Server 2012 R2 for VMM Rapid Provisioning


Context

iSCSI Target Server shipped its SMI-S provider as part of the OS distribution for the first time in Windows Server 2012 R2. For more details on its design and features, see my previous blog post iSCSI Target Server in Windows Server 2012 R2.

System Center 2012 SP1 Virtual Machine Manager (VMM) and later versions manage the storage presented from an iSCSI Target Server for provisioning block storage to Hyper-V hosts. VMM configuration guidance for managing the iSCSI Target Server running on Windows Server 2012 is available in this TechNet page. This guidance is still accurate for Windows Server 2012 R2, with the following two exceptions.

  1. The iSCSI Target Server SMI-S provider is now included as part of the OS distribution, so you no longer need to install it from the VMM media. In fact, the SMI-S provider included in the Windows Server 2012 R2 distribution is the only compatible, supported provider. Further, when you install the iSCSI Target Server feature, the right SMI-S provider is transparently installed.
  2. The SAN-based Rapid Provisioning scenario of VMM requires one additional step to work with iSCSI Target Server.

The rest of this blog post is all about #2.

VMM SAN-based Rapid Provisioning

VMM SAN-based rapid provisioning, as the name suggests, helps an administrator rapidly provision new Hyper-V virtual machines. The key to this fast provisioning is copying the VHD files for the new virtual machine in the most efficient possible manner. In this case, VMM relies on iSCSI Target Server snapshot functionality to accomplish this. Specifically, iSCSI Target Server SMI-S provider exposes this snapshot functionality for usage by SM-API storage management framework, which VMM then uses to create iSCSI Virtual Disk snapshots. As a brief aside, check out my previous blog post for examples on how the same iSCSI SMI-S snapshot functionality can be used by a storage administrator directly via SM-API Storage cmdlets, outside of VMM.

Let’s focus back on VMM though, especially on the snapshot-related VMM rapid provisioning workflow and what each of its steps means to the iSCSI Target Server:

  1. Administrator creates, customizes, and generalizes (syspreps) the desired VM OS image on a storage volume, hosted on iSCSI Target Server storage
    • iSCSI Target Server perspective: It simply exposes a VHDX-based Virtual Disk as a SCSI disk to the connecting initiator. All the creation, customization and sysprep actions are simply I/Os on that SCSI Logical Unit (LU).
  2. Administrator mounts that SCSI LU – let’s call it Disk-G, for Golden – on the VMM Library Server hosting the storage volume. The administrator also makes sure to mask the LU from any other initiators.
    • iSCSI Target Server perspective: Disk-G is persisted as a VHDX format file on the hosting volume, but the initiator (Library Server) does not know or care about this server-side implementation detail
  3. Administrator creates a VM template and associates the generalized SAN copy-capable OS image VHD files to this template. This process thus makes the template a SAN copy-capable VM template.
    • iSCSI Target Server perspective: This action is transparent to the iSCSI Target Server; it does not participate unless there are specific related I/Os to Disk-G
  4. From this point on, VMM can rapidly provision each new VM by creating a snapshot of Disk-G (say Disk-S1, Disk-S2 etc.) and assigning it to the appropriate Hyper-V host that will host the new VM guest being instantiated.
    • iSCSI Target Server perspective: For each disk snapshot taken via SMI-S, iSCSI Target Server creates a Diff VHDX file to store its content, so effectively:
      • Disk-G → parent VHDX file
      • Disk-S1 → diff VHDX (Disk-G is parent)
      • Disk-S2 → diff VHDX (Disk-G is parent)

For a more detailed discussion of SAN-based VMM rapid provisioning concepts, see this TechNet Library article.

The entire scenario of course works flawlessly in both Windows Server 2012 and Windows Server 2012 R2. However, in Windows Server 2012 R2, it turns out the storage administrator needs to take one additional step between Steps #3 and #4 – let’s call it “Step 3.5” – in the preceding list. Let’s discuss what exactly changed in Windows Server 2012 R2 and what the additional step is.

iSCSI Target Server SMI-S Snapshots in Windows Server 2012 R2

On each successful SMI-S snapshot request, iSCSI Target Server creates a diff VHDX-based iSCSI virtual disk. In Windows Server 2012 R2, iSCSI Target Server does this through native Hyper-V APIs. In contrast, iSCSI Target Server had its own implementation for creating diff VHD files back in Windows Server 2012 – see the discussion of the redesigned persistence layer in one of my earlier blog posts for more detail. The new Hyper-V APIs enforce that the parent VHDX file must not be open in read/write mode while the new diff VHDX is being created; this ensures that the parent VHDX can no longer be written to once the diff VHDX is created. Thus, while creating the Disk-S1/S2 iSCSI Target Server SMI-S snapshots in the example discussion, Disk-G cannot stay mounted for read/write by the Library Server. Disk-G must be unmounted and re-mounted as a read-only disk first – otherwise, creation of snapshots Disk-S1 and Disk-S2 will fail.

Now you might be wondering why this wasn’t an issue in Windows Server 2012-based iSCSI Target Server. iSCSI Target Server’s private implementation of Diff VHD creation in Windows Server 2012 did not enforce the read-only requirement on the parent VHD file, but the VMM Library server (initiator) always ensures that no more writes are performed on Disk-G once the sysprep process is complete. So the overall solution worked just fine in Windows Server 2012. With Windows Server 2012 R2 though, in addition to the same initiator behavior, iSCSI Target Server is effectively adding an additional layer of safety on the target (server) side to ensure writes are simply not possible at all on Disk-G. This is additional goodness.

I have briefly alluded to the nature of the additional process step required for rapid provisioning, but here’s the complete list of actions within that additional step:

“Step 3.5”:

  • Save the volume mount point (and the drive letter if applicable) and offline the iSCSI LU on the Library Server.
  • Unmap the LU from its current masking set (Remove-VirtualDiskFromMaskingSet). This ensures that the LU under the previous read/write access permissions can no longer be accessed by any initiator.
  • Re-add the same SCSI LU (Add-VirtualDiskToMaskingSet) back to the same masking set, albeit this time as read-only through the “-DeviceAccesses ReadOnly” PS parameter. This sets the disk access to Read-Only.
    • Note: Only one Library Server should have the volume on that SCSI LU mounted. Even if the Library Server is configured as a highly available failover cluster, only one of the cluster nodes should have the disk mounted at any one time.
  • Online the SCSI LU on the Library Server and restore its previous mount point (and drive letter, if applicable).

Here is the good news. My colleague Juan Tian has written a sample Windows PowerShell script that takes a single VMM template name parameter and performs all the “Step 3.5” actions in one sweep. The script should work without any changes if you run it with VMM administration credentials. Feel free to check it out at the bottom of this blog post, customize it if necessary for your deployment, and be sure to run it as “Step 3.5” in the VMM rapid provisioning workflow summarized above.

Finally, let me wrap up this blog post with a quick version compatibility reference:

VMM Version    | VMM Runs on | Manages            | Compatible?
VMM 2012 SP1   | WS2012      | iSCSI on WS2012    | Yes
VMM 2012 SP1   | WS2012      | iSCSI on WS2012 R2 | Yes
VMM 2012 R2    | WS2012 R2   | iSCSI on WS2012 R2 | Yes
VMM 2012 R2    | WS2012      | iSCSI on WS2012 R2 | Yes
VMM 2012 SP1   | WS2012 R2   | <Any>              | No

I hope this blog post provided all the details you need to move to production with your new Windows Server 2012 R2-based iSCSI Target Server and VMM. Give it a try and let me know how it’s working for you!

 

>>>>>>>>> SetLibraryServerLUToReadOnly.ps1 Windows PowerShell script >>>>>>>>>

# Description:
# Sets a VMM Library Server's access to a SCSI Logical Unit (LU) to read-only.
# The Library Server and the disk (SCSI LU) are identified from the VM template name parameter.
# The script offlines the disk, removes it from its masking set, re-adds it as read-only,
# and finally re-onlines the newly read-only disk on the Library Server.
# Must be run with VMM administration credentials.
#
param([string] $VMTemplate = "")

if (!$VMTemplate)
{
    $VMTemplate = Read-Host "Enter the name of your Template: "
}
Write-Host "Get Template $VMTemplate"

$libShare = Get-SCLibraryShare -ID ((Get-SCVMTemplate -Name $VMTemplate | Get-SCVirtualHardDisk).LibraryShareId)
$share = $libShare.Path
if ($share.Count -lt 1)
{
    Write-Host "Cannot find library share!"
    exit 1
}
Write-Host "Get library share $share"

$path = (Get-SCVMTemplate -Name $VMTemplate | Get-SCVirtualHardDisk).Directory
if ($path.Count -lt 1)
{
    Write-Host "Cannot find SCVirtualHardDisk!"
    exit 1
}
Write-Host "Get virtual disk directory $path"

$path2 = $path.Replace($share, "")
$key = "*" + $path2 + "*"
Write-Host "Get Key $key"

$lib = ($libShare.LibraryServer).FQDN
if ($lib.Count -lt 1)
{
    Write-Host "Cannot find library server!"
    exit 1
}
Write-Host "Get library server $lib"

# Find the partition whose access path matches the virtual disk directory
$partition = Invoke-Command -ComputerName $lib -ScriptBlock { Get-Partition } | Where-Object { $_.AccessPaths -like $key }
if (!$partition)
{
    Write-Host "Cannot find disk partition!"
    exit 1
}

$disk = $partition | Invoke-Command -ComputerName $lib -ScriptBlock { Get-Disk }
if (!$disk)
{
    Write-Host "Cannot find disk!"
    exit 1
}

# Offline the disk on the Library Server
Write-Host "Offline disk ..."
$disk | Invoke-Command -ComputerName $lib -ScriptBlock { Set-Disk -IsOffline $true }
Write-Host "Offline disk completed!"

Write-Host "Looking for disk.uniqueid - $($disk.UniqueId)"
$vdisk = Get-VirtualDisk | Where-Object { $_.UniqueId -match $disk.UniqueId }
if (!$vdisk)
{
    Write-Host "Cannot find virtual disk!"
    exit 1
}

$ms = $vdisk | Get-MaskingSet
if (!$ms)
{
    Write-Host "Cannot find masking set!"
    exit 1
}

# Remove the virtual disk from the masking set
Write-Host "Call Remove-VirtualDiskFromMaskingSet ..."
Remove-VirtualDiskFromMaskingSet -MaskingSetUniqueId $ms.UniqueId -VirtualDiskNames $vdisk.Name
Write-Host "Call Remove-VirtualDiskFromMaskingSet completed!"

# Re-add the virtual disk to the masking set, this time as read-only
Write-Host "Call Add-VirtualDiskToMaskingSet ..."
Add-VirtualDiskToMaskingSet -MaskingSetUniqueId $ms.UniqueId -VirtualDiskNames $vdisk.Name -DeviceAccesses ReadOnly
Write-Host "Call Add-VirtualDiskToMaskingSet completed!"

# Online the disk again on the Library Server
Write-Host "Online disk ..."
$disk | Invoke-Command -ComputerName $lib -ScriptBlock { Set-Disk -IsOffline $false }
Write-Host "Online disk completed!"

A new user attribute for Work Folders server Url


Overview

To continue the blog post series on Work Folders, I’d like to talk about how to manage a new attribute, stored as part of the user object in Active Directory Domain Services (AD DS), that holds the sync server URL. If you have seen the demo of the client setup in this video [00:08:20], it shows that a user can simply use their email address to configure Work Folders. The magic here is using the user’s email address to construct a URL through which the Work Folders client can reach the sync server.

For example, if the user email is Joe@contoso.com, the Work Folders client will build the URL as https://workfolders.contoso.com and use it to establish communication with the Work Folders server. The server then queries AD DS for the new user attribute to figure out the server location for that user. This process is called auto-discovery.
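A minimal illustrative sketch of that construction (not the actual client code):

# Illustrative only: how the client derives the auto-discovery URL from the email address
$email  = "Joe@contoso.com"
$domain = $email.Split("@")[1]
$url    = "https://workfolders.$domain"    # https://workfolders.contoso.com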

This makes it so that the only data users need to know to set up Work Folders is their email address. Admins simply configure the user attribute. When a user needs to be moved to another server, the admin can just update the user attribute and remove the user’s access to the old server; at that point, the user is automatically redirected to the new server. Auto-discovery can make client setup really simple when multiple sync servers are deployed.

Although this attribute makes discovery really simple, it is not a must-have attribute. There are a couple of cases in which you won’t need to configure this attribute or extend the schema to use it:

  • If you have a single server deployment, the user will get a valid sync share when querying the sync server.
  • If you want to manage users per sync server, i.e., send a server URL to each user so the user connects to the server directly without going through auto-discovery.

I’m not going to get too deep with the discovery process in this post – instead I’ll focus on the management of this new attribute: msDS-SyncServerUrl.

Schema extension

The new attribute is included in AD DS with Windows Server 2012 R2, but you don’t need to upgrade the domain controller to get it. If you are running a previously released version of Windows Server, you can simply run adprep.exe on any Windows Server 2012 R2 member server to extend the AD schema with this attribute. For more information about adprep, see http://technet.microsoft.com/en-us/library/dd464018(v=WS.10).aspx

Delegation

By default, only members of the Domain Admins or the Enterprise Admins group can modify the user attribute. For Work Folders management, the file server admins should have permissions to modify the user attribute.

Let’s take a look at how you could accomplish this.

Using UI

1. In Active Directory Administrative Center, create a file server admin security group (for example, FSAdmins).

2. Launch the Delegation of Control Wizard by right-clicking the domain root node.


3. Add FSadmins to the wizard:


4. Select the option to create a custom task to delegate:


5. Select the option to delegate control to only user objects:


6. Specify msDS-SyncServerUrl as the property to be managed by FSAdmins:


7. Complete the wizard.

Using cmdline

DsAcls.exe is a tool you can use to modify the permissions on user objects for a given attribute. To perform the same delegation, you can run the following command:

DsAcls dc=contoso,dc=com /I:S /G "Contoso\FSAdmin:RPWP;msDS-SyncServerUrl;user"

The command above grants the Contoso\FSAdmin group Read Property and Write Property rights on the user attribute msDS-SyncServerUrl. Replace FSAdmin with the name of the security group you created to manage this attribute.

A few notes

1. Since the ACL will be applied to existing user objects, the operation may take a while if a large number of user objects is present in the domain.

2. If you have multiple domains in the forest, you need to repeat the steps to delegate the permission for each domain.

3. After the delegation, all members of FSAdmins will be able to modify the msDS-SyncServerUrl attribute of new or existing user objects.

Modify user attribute

Now that the FSAdmins have permissions to modify msDS-SyncServerUrl, they can change the value of this attribute in a number of ways:

Using the UI

1. Open ADSI Edit from the Server Manager -> Tools menu.

2. Connect to the Default naming context by right-clicking the ADSI Edit node, and then selecting Connect to…


3. Select the user, right-click the user object, and then click Properties:


4. Navigate to the msDS-SyncServerUrl property, and click Edit:


5. Enter the Url value for this user, click Add, and then OK.

Using Windows PowerShell

You can use Set-ADObject cmdlet to set the user property:

Get-ADUser <username> | Set-ADObject -Replace @{"msDS-SyncServerUrl" = "<Url String>"}

For example, I can use the following cmdlet to set the attribute for user “Sally”:

Get-ADUser Sally | Set-ADObject -Replace @{"msDS-SyncServerUrl"="https://sync1.contoso.com"}

You can also verify the setting by running the following cmdlet:

Get-ADUser Sally | Get-ADObject -Properties "msDS-SyncServerUrl"

Using an Ldf file

If you are familiar with using ldf files to manage AD objects, you can also create the ldf file to do the same as above:

dn: CN=Sally,CN=Users,DC=Contoso,DC=COM
changetype: modify
add: msDS-SyncServerUrl
msDS-SyncServerUrl: https://sync1.contoso.com
-

Troubleshooting

After the Work Folders server authenticates the user, the server queries the user attribute using the “local system” account. If the server fails to query AD DS, it logs an event on the server. Below are some troubleshooting tips:

1. Check the network: see if the server has network access to a domain controller (DC).

2. Check the status of AD DS: make sure the AD DS is healthy and that a DC is online.

3. Check whether the file server has read permissions to the attribute.

Checking for server permission

By default, file servers have read permissions to the user attributes in the AD DS. In some deployments, however, IT admins may have explicitly blocked it, and the query will fail. The steps below will show you how to verify whether the server has read permission to this user attribute:

1. Open ADSIEdit

2. Right-click “ADSI Edit”, and select “Connect to…”

3. Select “Schema” naming context:


4. Find the ms-DS-SyncServerUrl attribute, and open the properties page:


5. Go to the “Security” page, and click “Advanced”:


6. Go to the “Effective Access” page, and click on “Select a user”, make sure Computers is checked below:


7. Enter the sync server name in the object picker, then click the “View effective access” button:

8. Make sure the sync server’s machine account can read the user properties:


On clusters, check the machine account for the cluster nodes - not the cluster VCO, as the access is done using the local system account of the physical machines.

Related links

The following are related resources about Work Folders:

Work Folders Overview

Introduction of Work Folders on Windows Server 2012 R2

Work Folders Test lab deployment

Certificate management for Work Folders

Windows Server 2012 R2 – Resolving Port Conflict with IIS Websites and Work Folders


Hi all, my name is Bill and I work in the Windows Server File Services group.  One of my main responsibilities is enhancing the Server Manager user interface.  We have just delivered Work Folders as part of the Windows Server 2012 R2 release.  I have been following the forum, and there have been several questions about Work Folders and port conflicts with other IIS websites.  For that reason, I’m posting this guidance.

Covered in this Article:

  • Diagnosing Port Conflict between Work Folders and Windows Server Essentials or other web applications.
  • Changing Port Configuration in Work Folders
  • In-Place Upgrade from Windows Server Essentials to Windows Server Standard
  • Guidance for using Hyper-V to run Work Folders and Windows Server Essentials concurrently with their default configurations

Sections:

  1. PROBLEM STATEMENT
  2. OVERVIEW
  3. DIAGNOSING WORK FOLDERS AND WINDOWS ESSENTIALS PORT CONFLICTS
  4. CHANGING WORK FOLDERS CONFIGURATION
  5. NO CONFIGURATION CHANGES FOR BOTH FEATURES
  6. SUMMARY 

PROBLEM STATEMENT

Using any web application alongside Work Folders may create port conflicts between the web application and Work Folders.  By default, Work Folders uses ports 443 (HTTPS) and 80 (HTTP), and most web applications use the same well-known ports.  In the specific case of Windows Server Essentials and Work Folders, both features use the same default ports.  The first feature to initialize the ports will own them exclusively.  This creates a port conflict for one of the features, depending on startup order and how the features were configured.

OVERVIEW

Work Folders is available in Windows Server 2012 R2 Essentials as part of the File and Storage Services role. Work Folders uses the IIS Hostable Web Core feature and all management is performed via the Work Folders canvas in Server Manager as well as via Windows PowerShell cmdlets.  Windows Server Essentials is managed via its dashboard and the IIS Management UX.  Both products assume exclusive access of the SSL port (443) and HTTP port (80).  This is the default configuration for both products.

The administrator can change either feature’s configuration when both products are enabled.  Resolving the port conflict this way allows both products to be installed on Windows Server 2012 R2 Essentials.  If the administrator does not want to change the default ports, they have the option of enabling either the Windows Server Essentials feature or Work Folders, at their discretion based on business need.

If the administrator would like to change the ports on either feature, they need to open the firewall on the server for the specific ports they defined for the feature.  This can be accomplished by navigating to Control Panel and modifying the Windows Firewall configuration.  Further work is necessary, in collaboration with a network administrator, to configure the routers as well.  This document will not cover network configuration.

See: http://msdn.microsoft.com/en-us/library/bb909657(v=vs.90).aspx 

DIAGNOSING WORK FOLDERS AND WINDOWS SERVER ESSENTIALS PORT CONFLICTS

In the event that both features are enabled on the same server with the default port configuration, the behavior may be subtle and only one feature will work.  In the case of Windows Server 2012 R2 Essentials, Windows Server Essentials is enabled out of the box.  This means the ports will already have been configured, and IIS will own them.  When you enable Work Folders, the installation will succeed, but Server Manager may not be able to manage the Work Folders feature on the Windows Server Essentials server.  If the administrator navigates to the SERVICES primary tile, they will see the following:

[Screenshot: SERVICES tile showing the Sync Share Service stopped]

The Sync Share Service will not start if both ports defined in its configuration are being used by another process.  This is a clear indication that the default ports are not available to Work Folders.  If, on the off chance, one of the ports is available, the Sync Share Service will become operational, and there will be no indication that there is an error.

Please note that if port 443 is used by another process, the Work Folders service will start and appear operational, but SSL traffic will not be directed to Work Folders.  SSL (443) is the default secure port used by Work Folders.  The administrator would have to look at the port definitions in the file c:\windows\system32\SyncShareSvc.config and compare them with the configuration of the websites defined in the IIS UX.  Once they check the port information in IIS, they can assess the conflict.

Using Event Viewer to view SyncShareSvc errors

In the case both ports are not available the following error can be found in the system event log.

Using Event Viewer (eventvwr.msc), navigate to Windows Logs > System.  The error should come from the Service Control Manager, in the form: “The Sync Share Service terminated with the following service-specific error:  Cannot create a file when a file already exists.”  This is the generic message when both ports are unavailable.

  

Using IIS PowerShell cmdlets “Get-WebBinding” to list port bindings

Get-WebBinding is a handy command for showing IIS website port bindings on your server.  In this particular case we want to see all the IIS website bindings active on your server.

Get-WebBinding

The command above will give you the following output:

Example 1 - both ports in use by IIS website:

The Work Folders SyncShareSvc will not start because both default ports are being used by IIS.

 

Example 2 – one port used by IIS website – SSL PORT:

As mentioned in the previous section, if Work Folders has access to one port, the SyncShareSvc service will come up.  Work Folders uses port 443 as the default.  In example 2, the Work Folders service would start and look operational, but the output of Get-WebBinding would show the administrator that Work Folders will not function as defined in the default configuration.

If neither port is in use by another web application, the list above would be empty. 

CHANGING WORK FOLDERS CONFIGURATION

In Server Manager, locate the SERVICES tile and find SyncShareSvc.  Verify that it is stopped.  If it is not stopped, select SyncShareSvc and stop it.

Navigate to the directory on the server where the Work Folders feature is enabled:

>cd c:\windows\system32

Edit the file with your favorite editor (file name = SyncShareSvc.config)

Locate the port configuration section in the file and change the port designation.

 

For this example, you want to change the SSL port from 443 to 12345.  Change the port number and close the file.  Because the sync service runs as LOCAL SERVICE rather than under the SYSTEM account, it does not have the privilege to listen on ports other than the defaults.  Because of this, the administrator has to run one more command.  In an elevated command window, type the following:

Netsh http add urlacl url=https://*:12345/ user="NT Authority\LOCAL SERVICE"

 

Navigate to SERVICES tile in Server Manager and start the service SyncShareSvc.

Since the Work Folders client defaults to HTTPS=443 or HTTP=80, additional configuration is needed to override the default ports.  The administrator will need to change the URL used to connect to the Windows server hosting the client’s sync share.  Normally, all that is necessary is the URL of the server; since the port has changed, the URL needs an additional element: a colon followed by the port number (“:<port>”).  This number must match the configuration in the SyncShareSvc.config file on the server.  See the example of the PC client configuration below:
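With the SSL port changed to 12345 as above, the user would enter a URL of the form (the host name is illustrative; use your own sync server’s URL):

https://workfolders.contoso.com:12345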

 

  

NOTE: When the administrator changes the default ports for Work Folders, auto-discovery cannot be used.  The new URL can be communicated using Group Policy or a standard email communication containing the URL and new port definition.

 

IIS References for Configuration Changes

For Windows Server Essentials port configuration see the Windows Server Essentials documentation using the IIS management UX.

http://www.iis.net/configreference/system.applicationhost/sites/site/bindings/binding

  

NO CONFIGURATION CHANGES FOR BOTH FEATURES

The administrator has another option for running both Windows Server Essentials and Work Folders on the same server.  There are posts online that recommend an in-place license upgrade from Windows Server Essentials to Windows Server Standard.  This has a twofold benefit: it allows for greater usage of Windows Server Essentials, and the license covers two Hyper-V virtual machines.  The administrator would then disable Windows Server Essentials on the main host and use the two Hyper-V VMs, one for each feature: Windows Server Essentials in one VM and Work Folders in the other.  Both can use their default configurations and work concurrently on a single host.

You can upgrade in place from Windows Server 2012 R2 Essentials to Windows Server 2012 R2 Standard.  Standard is the only in-place upgrade target; you cannot use the command below to upgrade to Windows Server Storage, Windows Server Datacenter, etc.  The command for upgrading from Windows Server 2012 R2 Essentials to Windows Server 2012 R2 Standard is:

 dism /online /set-edition:ServerStandard /accepteula /productkey:<Product Key>


SUMMARY

There are several ways to configure Work Folders in an environment that already has established web applications.  You have the ability to change the ports of either application: in the case of an IIS application, you can use the existing IIS UX; in the case of Work Folders, you can follow this guide.  The administrator also has the option of running Work Folders in a separate VM, which has the benefit of leaving the current configuration as is and installing Work Folders with its default settings.

 

Monitoring Windows Server 2012 R2 Work Folders Deployments.


Overview

Work Folders is a new feature introduced in Windows Server 2012 R2. It enables information workers to sync their work files between their devices. This functionality is powered by the Work Folders service, which can be enabled as a Windows Server 2012 R2 File Services role service.
Work Folders relies on a set of core infrastructure services such as storage, file systems and networks, and on enterprise infrastructure solutions such as Active Directory Domain Services (AD DS), Active Directory Federation Services, Web Application Proxy and more. Any problem with those infrastructure components, or a misconfiguration of Work Folders, might eventually lead to a degraded state or complete unavailability of the Work Folders service. Not having the Work Folders service running properly might impact users’ ability to sync files between their devices.
Just like other enterprise solutions, the Work Folders service comes with a set of logging and monitoring tools that allow IT pros to identify, understand and solve issues or errors that the system is facing.
This blog post covers several monitoring and logging solutions that facilitate early identification of Work Folders service issues and help you understand the root cause of a problem.

 

Monitoring Work Folders Operational health with Server Manager and Event Viewer

Server Manager

Server Manager in most cases will be the best starting point to understand the health status of the Work Folders service. The File and Storage Services tiles display server level information such as running services status and related events. There is also a specific canvas for Work Folders which provides sync share and Work Folders-specific information.

To see the Work Folders service status and related events, open Server Manager and navigate to the servers’ canvas through “Files and Storage Services” -> “Servers”. The tiles in this canvas (as shown in image 1) display services health and events related to Work Folders and other File and Storage Services.
The Work Folders service name is “Windows Sync Share” and any events related to Work Folders will show as a “Microsoft-Windows-SyncShare” Source (Work Folders events will be explained later in this document)

 


Image 1 - In this example, we can see that the Sync Share service is running properly, but there was a Work Folders error trying to access the file system as shown in the events tile. (This specific issue can be caused by lack of access permissions or physical disk access issues)

 

To view information about specific sync shares in Server Manager, go to “Files and Storage Services” -> “Work Folders”. The Work Folders canvas displays related information such as the file system location of each sync share and the users that are mapped to it. It also provides important information about the volumes and file systems that the sync share resides on. This canvas is a good place to spot storage and file system issues, such as low disk space, that might impact the Work Folders directories.

 

 
Image 2 – The Work Folders canvas shows sync share information. In this example, the “HRWorkFolders” sync share, which resides on G:\hrworkfolders, is selected. Once selected, the other tiles on this canvas show additional information for the selected sync share. This includes the list of users that are mapped to that sync share (managed by security groups), the volume information for the sync share, and the quota settings.

 

The sync shares master tile supports selecting multiple sync shares (by holding the CTRL key and selecting more sync shares, or by using CTRL-A to select all shares in the sync shares tile). When multiple sync shares are selected, the related tiles transform to a multiple-objects view, as shown in image 3 below. This multi-object view is useful for getting a broader view of the sync shares, the amount of remaining space on their respective volumes, and any quota thresholds that might be met.

 

 
Image 3 – The primary sync share tile allows multiple sync shares selection. Related tiles react accordingly by showing multiple rows of volume and quota information as well. We can see in this example that the “HRWorkFolders” quota is low on free space and should be extended.

 

 

The tiles described above are useful for displaying the system’s status, but if many volumes and drives are used for Work Folders, rows that report low quota or low disk space might not stand out. One way to easily spot volumes or quotas that are almost full is to sort the free space and capacity columns (by clicking the column title) so that the ones with the least remaining space appear on top. Another way is to use the tiles’ built-in filter boxes. Image 4 below shows a filter that lists only sync share hosting volumes with less than 10GB of available space. These filters can also be saved for future use.

 
Image 4 – Volumes tile on the Work Folders canvas set with a filter to only show volumes with less than 10GB of free space.

 

It is also possible, from the Work Folders canvas, to drill down even further and get status information for a specific user across their different Work Folders devices. This can be done by selecting the appropriate user in the users’ tile and selecting the Properties context menu item (as seen in images 5 and 6 below). This view provides more information on the user’s devices and can be used to identify issues with specific users’ devices.


Image 5 – Work Folders user context menu

 

The Properties dialog presents information about the user’s Work Folders location, the devices that run Work Folders, their last sync date and more.

 


Image 6 – Work Folders status for Sally. This dialog displays sync information for Sally’s different devices.

 

 

Event Viewer

The Work Folders service writes operational information, warning and error events to the Microsoft-Windows-SyncShare/Operational channel. This channel contains informational events, such as the creation of a user sync share folder, and warnings about system health. It also logs errors that describe critical issues that need to be addressed, such as the service not being able to access the file system.

There is also a Microsoft-Windows-SyncShare/Reporting channel that logs successful user sync actions. In this reporting channel, each logged event represents a successful sync action by a device, the size of the sync set, the number of files in the sync set and the device information such as OS version and type. These events can be used to understand the overall health of the system and collected for understanding Work Folders usage trends.

Listing and collecting the reporting logs can be done either through System Reports in Operations Manager, or alternatively by running PowerShell scripts that collect the data and export it to a CSV file, which can then be analyzed in Microsoft Excel. (See the example in the PowerShell section below.)


There are two main tools that can be used to read these events.

In Server Manager, go to “Files and Storage Services” -> “Servers” and browse the events tiles (see image 7). Note that this tile displays Work Folders events along with other File and Storage Services events, and it lists only operational channel events (reporting channel events are not shown).

 
Image 7 – Work Folders events shown in the Files and Storage Services > Servers canvas

Another way to view the logs is by using Event Viewer. Event Viewer can be opened from different locations, either by typing “eventvwr” in a command or PowerShell console, or by using the Tools menu in Server Manager (shown in the upper-right corner).
Once Event Viewer is open, use the tree in the left pane to navigate to “Applications and Services Logs” -> “Microsoft” -> “Windows” -> “SyncShare” (see image 8). Underneath the SyncShare node, you’ll find the operational and reporting channels. Clicking on each one brings up its list of events (see image 9).

 


Image 8 – Work Folders Sync Share events location in event viewer

 

 
Image 9 – Work Folders user events showing in the Event Viewer pane

 

Monitoring Work Folders with PowerShell

The Work Folders service on Windows Server 2012 R2 comes with a supporting PowerShell module and cmdlets. (For the full list of Work Folders cmdlets, run gcm -Module SyncShare in a PowerShell console.)

Just like in the examples shown above, where Server Manager was used to monitor and extract the information, the Work Folders cmdlets provide a way to retrieve sync share and user information. They can be used by administrators for interactive monitoring sessions or for automation within PowerShell scripts.

Here are a few PowerShell examples that provide Work Folders sync share and user status information.

Get-SyncShare - The Get-SyncShare cmdlet provides information on sync shares, including the file system location, the list of security groups and more.
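For example (a sketch; the property list is assumed from the SyncShare module’s output):

Get-SyncShare | Format-List Name, Path, StagingFolder, User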


From these objects, Staging folder and Path can be extracted and checked for availability and overall health.

 

 

Get-SyncUserStatus - Similar to the user’s Properties window described above in the Server Manager section, this cmdlet provides Work Folders user information, including the user name, the devices the user is using, last successful connections and more. Running this cmdlet requires providing the specific user name and sync share.


Here is an example for listing the devices and status that Sally is using with Work Folders:
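A minimal sketch, assuming a sync share named HRShare and the Contoso domain (both hypothetical):

PS C:\> Get-SyncUserStatus -User Contoso\Sally -SyncShare HRShare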


In the results, useful information is shown about the user’s devices, their OS configuration, and the last successful sync time.

 

Get-Service - The status of the Sync Share service (named SyncShareSvc) can be read by using PowerShell’s generic Get-Service cmdlet.
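For example:

PS C:\> Get-Service -Name SyncShareSvc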


In the above example we can see that the service is in the “Running” state; “Stopped” means that the service is not running.

Events – PowerShell also provides an easy way of listing Work Folders events, from either the operational or the reporting channel. Here are a few examples:

1) Listing errors from the operational channel (in this example, the issues were reported on a system where one of the disks hosting the Work Folders directory was intentionally yanked out):
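One way to do this with Get-WinEvent (Level 2 corresponds to Error events; the formatting is just a suggestion):

Get-WinEvent -FilterHashtable @{ LogName = 'Microsoft-Windows-SyncShare/Operational'; Level = 2 } |
    Format-Table TimeCreated, Id, Message -AutoSize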

2) Listing successful events from the Work Folders Reporting channel:
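A sketch that lists the reporting events and, as promised earlier, exports them to a CSV file for analysis in Excel (the output path is just an example):

# List the successful sync events from the reporting channel and export them
Get-WinEvent -LogName Microsoft-Windows-SyncShare/Reporting |
    Select-Object TimeCreated, Id, Message |
    Export-Csv -Path C:\Reports\WorkFoldersReporting.csv -NoTypeInformation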

 

Other Work Folders Monitoring Tools and Solutions

While this post focuses on the Work Folders Server Manager tiles and PowerShell cmdlets, there are more useful tools that can be used to monitor a Work Folders deployment.

Work Folders Best Practice Analyzer

Windows Server 2012 R2 comes with a built-in set of Work Folders BPA rules. Though the intent of BPA rules is to alert on configuration issues, they can also be used to routinely monitor for and identify issues that might impact the Work Folders service.

More details on Work Folders BPA rules can be found here.

Work Folders System Center Operations Manager File Services Management Pack

A new File and Storage Services management pack for Windows Server 2012 R2 should come out shortly after Windows Server 2012 R2 general availability. This pack will also include Work Folders service monitoring capabilities that can be used with System Center Operations Manager.

More information on System Center Operations Manager is available here.

Performance monitoring

Work Folders doesn’t introduce any new performance counters; however, since the Work Folders service is hosted by a web service, setting up performance monitoring on the web service instances can provide valuable information on client Work Folders data transfers, queues, and more. Furthermore, performance monitors can also be set on the network, CPU, and other system components that are essential to the Work Folders service.

More information on performance monitors can be found here.


Work Folders Supporting Systems monitoring (AD, ADFS, Web Application Proxy and SSL Certificates)

As mentioned above, Work Folders relies on a set of enterprise solutions to work properly. These include, but are not limited to, Active Directory, Active Directory Federation Services, Web Application Proxy, and SSL certificates (including their expiration dates). A problem in any one of these services might impact the Work Folders service. To sustain a long-running Work Folders service, it is recommended that each of the supporting components also be monitored.

More information on certificate management and monitoring certificate expirations can be found here.

 

Other Work Folders Related Links

 

 
 

Performance Considerations for Work Folders Deployments


Hi all,

Here is another great Work Folders blog post that shares information about Work Folders performance in large-scale deployments. The content below was compiled and written by Sundar Srinivasan, who is one of the software engineers on the Work Folders product team.


Overview

One of the exciting new features introduced in Windows Server 2012 R2 is Work Folders. Please refer to this TechNet article and this blog post for broader details about the Work Folders feature and its deployment. During the development of this feature, I worked on testing the performance of Work Folders in a typical enterprise-scale deployment. In this blog post, we are going to look at the performance and scalability aspects of Work Folders.

There are three sync scenarios that we modeled for this experiment. Once a user configures Work Folders on her device, she tends to move all her work-related files into Work Folders, triggering a sync that populates the data in the sync share set up on her organization’s Windows Server 2012 R2 file server with the Work Folders feature enabled. This scenario is termed “first-time sync”.

The second scenario is the user adding new devices. As the user configures Work Folders on her personal devices, such as laptops, a Surface Pro, or a Surface RT, a sync is triggered from each of these devices to bring her work-related files down to them.

Beyond this point, the user changes her Work Folders data on a daily basis, on one or more devices, such as by editing Word documents and creating new PowerPoint files. Any such change triggers what we call an “ongoing sync” to the server. For the purpose of measuring the scalability of the file server enabled for synchronizing Work Folders data, which we will refer to as the sync server, we are very interested in studying how many concurrent ongoing sync sessions the server can handle without affecting the experience of an individual user.

The first section of this post explains the topology that we used for simulating 5,000 users and a heuristic model for deriving the number of concurrent sync sessions the sync server serves on average at any given time. Then we will look at the results of our experiments and which resources become bottlenecks. This post will also give pointers to some Windows performance counters that can help in analyzing the performance of the sync server in production, and offer some guidance on configurations.

Hardware Topology

We used a mid-level 2-node failover cluster server. The storage for the server is through external JBODs of SAS drives connected through an LSI controller. Our definition of the server hardware looks like this:

 

We set up the machine with the data of 5,000 users on the sync server, distributed across 10 sync shares. To simulate multiple devices with Work Folders configured, we used a small set of entry-level servers, each of which simulates multiple sync sessions, as if they originate from multiple devices. The server has a 10Gbps connection to the hub, while all the clients have 1Gbps connections.

The hardware topology that we used in this test is as shown in the below diagram:

 

The list of clients on the left-hand side of the diagram represents the entry-level servers that simulate multiple sync sessions each, as if they originate from multiple devices. We would like to measure the experience of the users when the server and the infrastructure are loaded with multiple sync sessions, so we use two desktop-class devices and measure the time taken for the changes on device 1 to reach device 2. The time taken for the data to be synced to the second device should not differ significantly from the time taken when the server is free.

The dataset that we used for each user is about 2 GB in size, with 2,500 files and 500 folders. Based on our background knowledge of the Offline Files feature, the distribution of file sizes in a typical dataset looks like this:

 

Although 99% of the sync users fall under this category, we also included some users to test our dataset scale limits. About 1% of the users have a 100 GB dataset, with some users having 250,000 files and 500,000 folders, some having 10,000 files in a single folder, and some with files as large as 10 GB.

We tested with 5,000 users in this setup across ten sync shares. In order to test the scale limit on the number of users supported per sync share, we created a single sync share with 1,000 users and distributed the other 4,000 users across nine sync shares: five sync shares with 400 users each and four sync shares with 500 users each.

Modeling the sync sessions on server

We developed a heuristic model to calculate the number of simultaneous sync sessions on the sync server by looking at the distribution of user profiles. In Work Folders, any file or folder change made locally triggers a sync. But if no change is made on a particular device with Work Folders configured, the device still polls the server once every 10 minutes to determine whether any new file, folder, or new version of an existing file is available on the server for download. So we just need to model the user activity during this 10-minute polling interval.

In this model, we assume that each user has 3 devices with Work Folders configured, of which on average 1.5 devices per user are active, and we assume that on average 20% of users are offline. The other users are classified into 5 profiles based on their usage of Work Folders, from inactive users whose devices are online but passively receiving changes, to hyperactive users who make bulk file copies into Work Folders:

 

Our model is based on educated assumptions about the distribution of the users, drawing on our background knowledge of the user data synchronization scenario. We also derive the number of simultaneous sessions on the server from our empirical knowledge of the duration of a sync session and a polling session on the server. We applied this model to test whether the mid-level server described above can support 5,000 users without noticeably degrading the sync experience of individual users. The results of our experiment are described in the section below.

Results & Analysis

In this section, we will discuss what the results look like and compare them with the results when the server is inactive. We will also show how to identify a network bottleneck during synchronization.

To measure the sync time, our reference user falls into the class of a very active user, making 10 independent file changes and creating two new files in a new folder. The sync-time results for the reference user are shown below:

 

The chart above shows the time taken for local changes to be uploaded to the server and for remote changes to be downloaded to a different device. To put this in the context of user experience, a common concern in any sync system is a simple question posed by the user: “Will my files be available on all my devices immediately?” From the Work Folders point of view, the answer depends on several factors, such as the frequency of synchronization, the amount of data transferred to effect that sync, and the speed of the network connection from the device to the server. So the end-user experience of Work Folders is defined in terms of the perceived delay after which a user can expect the data to be available across multiple devices. As the number of concurrent sync sessions to the server increases, we expect that the user should not experience a significantly longer delay in changes from one device synchronizing to her other devices.

The following graph shows the time in seconds the reference user has to wait for their changes (10 independent file changes and two new files created in a new folder) to be available across multiple devices.

 

As we notice from the graph above, there is a less than 5% impact on the user experience.

However, the previous chart shows that the upload and download times increase from 2 seconds to 12 seconds when the sync server is loaded with several sync and polling requests. We proceeded to study which resources become a bottleneck on the server and cause this increase in sync time from 2 seconds to 12 seconds.

The following chart gives the CPU utilization on the server during this process.

 

The CPU utilization remained within 36% with a median value of 28%, so we do not think CPU was a bottleneck at this stage. Memory used by the process was 12 GB, but that was primarily due to the database residing in memory for faster access. The server has 64 GB of RAM, so memory was not a bottleneck either. In other testing, we saw that if the operating system was indeed under memory pressure, the memory used by the database for caching would decrease.

The utilization of the network and disks is shown in the table below.

 

From the table above, it is evident that the network becomes a bottleneck as the number of concurrent sync sessions and polling sessions increases. As already explained in the hardware topology, the client machines have a 1Gbps NIC, whereas the sync server has a 10Gbps NIC hooked up to a 10Gbps port on the same network hub. As the network usage increases, the clients experience delays in communication. However, we noticed that none of the HTTP packets were rejected.

From this experiment, we found that the performance of sync operations at high scale is IO-bound. It is possible to achieve this level of performance even on an entry-level server with 4 GB of memory and a quad-core single-socket CPU, as long as the network has enough bandwidth and the storage subsystem has enough throughput.

Runtime Performance Monitoring and Best Practices

Although Work Folders does not introduce any new performance counters, the existing performance counters for IIS, network, and disk in Windows Server 2012 R2 can be used to understand which resource is becoming overused or bottlenecked. We have listed some important performance counters in the table below.

 

Work Folders on the server is highly network- and IO-intensive. On the sync server, Work Folders runs as a separate request queue named SyncSharePool, and the table above contains the counters specific to the SyncSharePool that would be useful. If the rejection rate goes above 0% or if the number of rejected requests shoots up, then the network is clearly bottlenecked. Apart from these counters, there are other generic IIS counters and network counters that can give information about the network utilization.

The \HTTP Service Url Groups(*)\CurrentConnections counter gives the number of existing connections to the server, which includes the combined count of both sync sessions and polling sessions. If the network utilization reaches 100%, you will notice the output queue length increasing as packets get queued up. A packets-dropped count greater than zero indicates that the network is getting choked.
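As an illustration, you could sample the counter called out above with Get-Counter (the sampling interval and count are arbitrary):

# Sample the combined sync/polling connection count every 5 seconds, 12 times
Get-Counter -Counter '\HTTP Service Url Groups(*)\CurrentConnections' -SampleInterval 5 -MaxSamples 12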

In general, if there are multiple sync shares on a single server, it is advisable to configure the sync shares on different volumes on different virtual disks if possible. Doing so translates the incoming requests to different sync shares into file IOs against multiple virtual disks. We have also listed some counters specific to logical disks that can be used to determine whether the incoming requests are targeted towards them. If the average disk queue length goes above 10 at any time, disk IO latency is going to increase.

Conclusion

When deploying Work Folders across the enterprise, it is a good idea to plan the rollout in phases. This limits the number of new users trying to enroll and upload their data all at once, which would otherwise cause heavy network traffic.

During this test, we wanted to explore whether the mid-level server could support 5,000 users. Based on the heuristic model, we classified the users under different profiles and simulated sync sessions accordingly. We monitored the server’s vital statistics to ensure its continued stability, and we measured the sync time from desktop-class hardware to understand the user experience while the server supported 5,000 users. With this experiment, we found that a mid-level server with the exact or similar configuration mentioned above should be able to support 5,000 users without affecting the user experience. The network utilization on the server averaged 60%, and sync slowed down by about 10 seconds due to the load. The total time taken for a typical user’s change to appear across all her devices is not affected by more than 5%, even with a server busy with the sync data of 5,000 users.

Our study of performance counters shows that the performance of sync operations at high scale is IO-bound. It is possible to achieve this level of performance even on an entry-level server with 4 GB of memory and a quad-core single-socket CPU, as long as the network has enough bandwidth and the storage subsystem has enough IO throughput.

 

We hope you find this information helpful when you plan and deploy Work Folders in your organization.

Sundar Srinivasan

Work Folders on Clusters


In the previous blog post, I provided a step-by-step guide to setting up Work Folders in a lab environment using the Preview release of Windows Server 2012 R2. With the GA release, Work Folders leverages Windows failover clustering to increase availability. For general details about failover clusters, see Failover Clustering.

In this blog post, I’ll show you how to configure the Work Folders on failover clusters using the latest Windows Server 2012 R2 release.

Overview

Work Folders is a new technology in the File and Storage Services role. Work Folders enables users to sync their work data across their devices using a backend file server so that their data is available to them wherever they are. In the Windows Server 2012 R2 release, Work Folders is supported on traditional failover file server clusters.

Pre-requisite

In this blog post, I’ll provide a step-by-step guide to setting up Work Folders in a highly available failover configuration, and discuss the differences between managing a clustered Work Folders server and a standalone Work Folders server. This post assumes that you have a good understanding of Work Folders and know how to configure Work Folders on a standalone server. If not, refer to the Work Folders Overview as well as Designing a Work Folders Implementation and Deploying Work Folders.

To follow this step by step guide, you will need to have the following computers ready:

  • Domain controller: a server with the Active Directory Domain Services role enabled and configured with a domain (for example: Contoso.org)
  • A two-node server cluster joined to the domain (Contoso.org), with each node running Windows Server 2012 R2 (cluster name, for example: HAWorkFolders)
  • One client computer running Windows 8.1 or Windows RT 8.1

The computers can be physical or VMs. This blog post will not cover the steps to build the two-node cluster; to set up a failover cluster, see Create a Failover Cluster.

Express lane

This section provides a checklist for setting up sync shares on clusters; detailed procedures are covered in the later sections.

  1. Enable the Work Folders role on both cluster nodes
  2. Create a file server role in the cluster
  3. Configure certificates
  4. Create sync share
  5. Client setup

Enable Work Folders role service

The Work Folders role service needs to be enabled on all the cluster nodes. In this blog post, since I have a two-node cluster, I will enable the role service on both nodes by running the following Windows PowerShell cmdlet on each node:

PS C:\> Add-WindowsFeature FS-SyncShareService

To check whether the computer has Work Folders enabled, run:

PS C:\> Get-WindowsFeature FS-SyncShareService

Note: If Work Folders is installed, FS-SyncShareService will be marked with an X.

Creating a clustered file server

Highly available services are created by using the High Availability Wizard in Failover Cluster Manager. Since Work Folders is part of the File and Storage Services server role and you have already enabled Work Folders on each node, you can simply create the file server in the cluster and configure sync shares on the highly available file server. The clustered file server will also be referred to as a clustered file server instance, which is often called a cluster name account or virtual computer object (VCO).

Note: Work Folders isn’t supported on Scale-Out File Servers.

To create a clustered file server:

1. Open Failover Cluster Manager

2. Right-click Roles, and then click Configure Roles…

clip_image001

3. Click File Server.

clip_image003

4. Choose the “File Server for general use” option:

clip_image005

5. Enter a name for the clustered file server instance (VCO). This will be the sync server name used by Work Folders client computers and devices during sync.

clip_image007

6. Select the disk on which users’ data will be stored.

clip_image009

7. Click through the confirmation to proceed with the role creation.

clip_image011

8. After you’re finished, the sync server appears in Failover Cluster Manager.

clip_image013

Configure certificates

For general certificate management, see this post. This section focuses on certificate configuration in a cluster using a self-signed certificate. Note: the certificate must be configured on each of the nodes.

1. Create a self-signed certificate on one of the cluster nodes by using Windows PowerShell:

PS C:\> New-SelfSignedCertificate -DnsName "SyncServer.Contoso.org","WorkFolders.Contoso.org" -CertStoreLocation Cert:\LocalMachine\My

Note: the DNS name should be the name you used for the clustered file server instance (VCO) for the sync server. If you have multiple VCOs, the certificate must include all of the VCO names.

2. Export the certificate to a file

To export the certificate with a password, you can use either certmgr.msc or Windows PowerShell. I’ll show you the cmdlet version referenced on this page:

$cert = Get-ChildItem -Path Cert:\LocalMachine\My\<thumbprint>

$type = [System.Security.Cryptography.X509Certificates.X509ContentType]::pfx

$pass = Read-Host "Password:" -AsSecureString

$bytes = $cert.Export($type, $pass)

[System.IO.File]::WriteAllBytes("ServerCert.pfx", $bytes)

3. Copy, then import the certificate on the other node

$pass = Read-Host "Password:" -AsSecureString

Import-PfxCertificate -FilePath servercert.pfx -CertStoreLocation Cert:\LocalMachine\My -Password $pass

4. Configure SSL binding on both cluster nodes

netsh http add sslcert ipport=0.0.0.0:443 certhash=<Cert thumbprint> appid={CE66697B-3AA0-49D1-BDBD-A25C8359FD5D} certstorename=MY

Note: you can get the certificate thumbprint by running:

PS C:\> Get-ChildItem -Path Cert:\LocalMachine\My

5. Copy, then import the certificate on the client computer. Note: this step is only necessary for a self-signed certificate, which client computers don’t trust by default. Skip this step if you are using a trusted certificate.

$pass = Read-Host "Password:" -AsSecureString

Import-PfxCertificate -FilePath servercert.pfx -CertStoreLocation Cert:\LocalMachine\Root -Password $pass

Create a sync share

In a cluster, the sync share is created on the node that owns the disk resource. In other words:

1. You cannot create sync shares on disks that are not shared across all the nodes.

2. You cannot create sync shares on the non-owner node of the clustered file server instance (VCO).

For example, to set up a sync share on disk E, which is owned by SyncNode1, I run the following cmdlet on SyncNode1:

New-SyncShare HAShare -Path E:\hashare -User Contoso\fin

Note: For more information about creating sync shares, see Deploying Work Folders.

Client setup

  • Make sure that the client can trust the certificate used on the server. If you are using the self-signed certificate, you need to import it on the client. (see step 5 in Configure certificates)
  • When the client connects to the sync servers, it connects to the clustered file server instance (e.g. https://SyncServer.Contoso.org). The client should not connect directly to any physical nodes.
  • The Work Folders setup experience for users is the same as if Work Folders was hosted on a single server, as long as the administrator has set up Work Folders properly with a publicly accessible URL pointing to the VCO name.
  • Sync happens in the background, and there is no difference between standalone and clustered Work Folders servers for ongoing sync, although users can’t sync while the server is in the process of failing over - sync continues after the failover has completed.

Managing clustered sync servers

Most of the management experiences are the same between standalone and clustered sync servers. In this section, I’d like to list out a few differences:

  • All the nodes in the cluster need to have “Work Folders” enabled.
  • When there is more than one clustered sync server hosted by a single cluster, you can manage sync shares with the -Scope parameter in the cmdlets. For example, Get-SyncShare -Scope <sync server VCO> lists only the sync shares configured on a specific clustered sync server (VCO); see the sketch after this list. You need to run the cmdlet on the cluster node that is hosting the VCO.
  • Using Set-SyncServerSetting on a standalone server applies the settings to that single sync server; using the cmdlet on a clustered sync server applies the settings to all the clustered sync servers (VCOs) in the cluster.
  • When managing sync shares (such as new, set, remove), you must run the cmdlet or wizard on the node that owns the disk resources that are hosting the sync share data.
  • The SSL certificate is managed on each node in the cluster. The certificate must be installed and configured with SSL binding on each node; certificate renewal is also per node.
  • The certificate CN must contain the sync server (VCO) name, not the cluster node names.
  • When the client connects to the server, it connects using the sync server (VCO) name.
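Here is a minimal sketch of that scoped management, assuming a clustered sync server (VCO) named SyncServer.Contoso.org (a hypothetical name) and run on the node that currently owns the VCO:

# List only the sync shares configured on this clustered sync server (VCO)
Get-SyncShare -Scope SyncServer.Contoso.org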

Conclusion

Work Folders supports a failover clustering configuration in Windows Server 2012 R2. Most of the management experience is similar to standalone servers, and management should be scoped to the clustered sync server (VCO). Certificate management, however, is per node.

I hope this blog post helps you understand how to set up a highly available Work Folders server and gives you insight into the differences in management between a standalone and a highly available configuration. If you have questions not covered here, please raise them in the comments so that I can address them in upcoming posts.


Using DFS Replication Clone Feature to prepare 100TB of data in 3 days (A Test Perspective)


I’m Tsan Zheng, a Senior Test Lead on the DFS team. If you’ve used DFSR (DFS Replication), you’re probably aware that until recently, the largest amount of data we had tested replication with was 10 TB. A few years ago, that was a lot of data, but now, not so much.

In this post, I’m going to talk about how we verified preparing 100 TB of data for replication in 3 days. With Windows Server 2012 R2, we introduced the ability to export a clone of the DFSR database, which dramatically reduces the amount of time needed to get preseeded data ready for replication. Now it takes roughly 3 days to get 100 TB of data ready for replication with Windows Server 2012 R2. On Windows Server 2012, we think this would’ve taken more than 300 days, based on our testing of 100 GB of data, which took 8 hours to prep on Windows Server 2012 (we decided not to wait around for 300 days). In this blog post, we’ll show you how we tested the replication of 100 TB of data on Windows Server 2012 R2.

First of all, let’s look at what 100 TB of data could mean: it could be around 340,000 8-megapixel pictures (that’s 10 years of pictures if you take 100 pictures every day), or 3,400 Blu-ray-quality full-length movies, or billions of office documents, or 5,000 decent-sized Exchange mailbox files, or 2,000 decent-sized virtual machine files. That’s a lot of data, even in 2013. If you’re using 2 TB hard drives, you need at least 120 of them just to set up two servers to handle this amount of data. We should clarify here that the absolute performance of cloning a DFSR dataset is largely dependent on the number of files and directories, not the actual size of the files (if we use verification level 0 or 1, which don’t involve verifying full file hashes).

In designing the test, we not only needed to make sure we set things up correctly, but we also needed to make sure that replication happens as expected after the initial preparation of the dataset - you don’t want data corruption when replication is being set up! Preparing the data for replication also must go fast if we’re going to prep 100 TB of data in a reasonable amount of time.

Now let’s look at our test setup. As mentioned earlier, you need some storage. We deployed two virtual machines, each with 8 GB of RAM and data volumes on a Storage Spaces simple space (in a production environment you’d probably want to use a mirror space for resiliency). The data volumes were served by a single-node Scale-Out File Server, which provided continuous availability. The Hyper-V host (Fujitsu PRIMERGY CX250, 2.5 GHz, 6 cores, 128 GB RAM) and file server (HP Mach1 server - 24 GB, Xeon 2.27 GHz, 8 cores) were connected over a dual-10GbE network to ensure near-local IO performance. We used 120 drives (2 TB each) in 2 Raid Inc. JBODs for the file server.

In order to get several performance data points from a DFSR perspective (as DFSR uses one database per volume), we used the following volume sizes, totaling 100 TB on both ends. We used a synthetic file generator to create ~92 TB of unique data; the remaining 8 TB was human-generated data harvested from internal file sets. It’s difficult to have that much real data... not counting VHDX files and peeking into personal archives, of course! We used the robocopy commands provided by DFSR cloning to pre-seed the second member.

| Volume | Size   | Number of files | Number of folders | Number of Replicated Folders |
|--------|--------|-----------------|-------------------|------------------------------|
| F      | 64 TB  | 68,296,288      | 2,686,455         | 1                            |
| G      | 18 TB  | 21,467,280      | 70,400            | 18                           |
| H      | 10 TB  | 14,510,974      | 39,122            | 10                           |
| I      | 7 TB   | 1,141,246       | 31,134            | 7                            |
| J      | 1 TB   | 1,877,651       | 7,448             | 1                            |
| TOTAL  | 100 TB | 107,293,439     | 2,834,559         |                              |

 

In a nutshell, the following diagram shows the test topology used.

image

Now that the storage and file sets are ready, let’s look at the verification we did during the Export -> Pre-seed -> Import sequence.

  • No errors in the DFSR event log (from Event Viewer).
  • No skipped or invalid records in the DFSR debug log (by checking for “[ERROR]”).
  • Replication works fine after cloning, verified by probing each replicated folder with canary files to check convergence.
  • No mismatched records after cloning, verified by checking the DFSR debug log and DFSR event log.
  • Time taken for cloning was measured using the Windows PowerShell cmdlet Measure-Command (see the sketch after this list):
    • Measure-Command { Export-DfsrClone…}
    • Measure-Command { Import-DfsrClone…}
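As a sketch of what the full commands might look like (the volume letter, export path, and validation level here are placeholders; see the cmdlet documentation for complete parameters):

# Time the database export from volume F: with basic validation
Measure-Command { Export-DfsrClone -Volume F: -Path F:\DfsrClone -Validation Basic }

# Time the corresponding import on the second member
Measure-Command { Import-DfsrClone -Volume F: -Path F:\DfsrClone }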

The following table and graphs summarize the results that one of our testers, Jialin Le, collected on a build that was very close to the RTM build of Windows Server 2012 R2. Given the nature of DFSR clone verification levels, it’s not recommended to use validation level 2, which involves full file hashes and is too time-consuming for a large dataset like this one!

Note that the performance of level 0 and level 1 validation depends largely on the count of files and directories rather than absolute file size. This explains why the 64 TB volume takes proportionally more time to export than the 18 TB volume: the former has proportionally more folders.

image

| Validation Level | Volume Size  | Time used to Export (minutes) | Time used to Import (minutes) |
|------------------|--------------|-------------------------------|-------------------------------|
| 0 – None         | 64 TB        | 394                           | 2129                          |
|                  | 18 TB        | 111                           | 1229                          |
|                  | 10 TB        | 73                            | 368                           |
|                  | 7 TB         | 70                            | 253                           |
|                  | 1 TB         | 11                            | 17                            |
|                  | Sum (100 TB) | 659 (0.4 days)                | 3996 (2.8 days)               |
| 1 – Basic        | 64 TB        | 1043                          | 2701                          |
|                  | 18 TB        | 211                           | 1840                          |
|                  | 10 TB        | 168                           | 577                           |
|                  | 7 TB         | 203                           | 442                           |
|                  | 1 TB         | 17                            | 37                            |
|                  | Sum (100 TB) | 1642 (1.1 days)               | 5597 (3.8 days)               |

From the chart above, you can see that getting DFSR ready to replicate a large dataset (totaling 100 TB) is getting much more practical!

I hope you have enjoyed learning more about how we test DFSR features here at Microsoft. For more information on cloning, see our TechNet walkthrough article.

- Tsan Zheng

Understanding Virtual Disks in iSCSI Target Server


Context

In Windows Server 2012 R2, iSCSI Target Server made significant improvements in its persistence layer by providing larger, resilient, dynamically growing SCSI Logical Units (LUs) based on the VHDX format. For more details on what is new in this release, see my previous blog post, iSCSI Target Server in Windows Server 2012 R2. In this blog post, I will focus on what has changed in the persistence layer and what that means to you as an administrator migrating to Windows Server 2012 R2. This also answers related questions that I frequently hear.

 

First, Some Background

iSCSI Target Server stores each SCSI Logical Unit (LU) as a file in a Windows native file system - either NTFS or ReFS. The iSCSI stack uses the VHD (virtual hard disk) format for these files, which is why we call them ‘iSCSI Virtual Disks’. The VHD format provides good portability across target servers, and across the Hyper-V and iSCSI stacks. In Windows Server 2012, the iSCSI stack was intimately aware of the VHD file format, as it directly created and manipulated the VHD files. Although this worked out well in some scenarios, the situation was not ideal for the iSCSI stack, because we really wanted to take immediate advantage of significant improvements in the Hyper-V stack’s VHD capabilities - especially the VHDX format, which was newly introduced in Windows Server 2012.

Therefore, in Windows Server 2012 R2, the iSCSI stack switched to using the native Hyper-V VHD APIs. This update instantly gave the iSCSI stack the ability to support iSCSI Virtual Disks based on VHDX format and dynamic VHDX/VHD formats. In this release, the iSCSI stack also switched to VHDX as the default file format for Virtual Disks, so VHDX is the only format that you can use to create new iSCSI Virtual Disks. The iSCSI stack also takes advantage of the new online shrink/expand functionality in the Hyper-V stack, although you should note that the online shrink is new only for VHDX in Windows Server 2012 R2. Finally, we have made a number of corresponding improvements in the File and Storage Services UI in Server Manager to support these core functional changes.

The following tables summarize all of the Virtual Disk compatibility information for Windows Server 2012 and Windows Server 2012 R2. Although the tables may look daunting at first glance, they are fairly easy to understand with this background information.

 

iSCSI Virtual Disks & Compatibility in Windows Server 2012

 

 

| Disk type    | Create? | Import? | Expand? | Shrink? |
|--------------|---------|---------|---------|---------|
| Fixed VHD    | Yes     | Yes     | Yes     | No      |
| Fixed VHDX   | No      | No      | No      | No      |
| Diff VHD     | Yes     | Yes     | Yes     | No      |
| Diff VHDX    | No      | No      | No      | No      |
| Dynamic VHD  | No      | No      | No      | No      |
| Dynamic VHDX | No      | No      | No      | No      |

 

iSCSI Virtual Disks & Compatibility in Windows Server 2012 R2

 

| Disk type    | Create? | Import? | Expand? | Shrink? |
|--------------|---------|---------|---------|---------|
| Fixed VHD    | No      | Yes     | Offline | No      |
| Fixed VHDX   | Yes     | Yes     | Yes     | Yes     |
| Diff VHD     | No      | Yes     | Offline | No      |
| Diff VHDX    | Yes     | Yes     | Yes     | Yes     |
| Dynamic VHD  | No      | Yes     | Offline | No      |
| Dynamic VHDX | Yes     | Yes     | Yes     | Yes     |

 

Thin Provisioning

This is a question that I get frequently: “Does iSCSI Target Server support thin provisioning?” Yes, we do support thin provisioning, but not in the precise T10 sense, because we do not support the UNMAP SCSI command. The dynamic VHDX-based iSCSI Virtual Disks now supported in Windows Server 2012 R2 are in reality thin-provisioned, which is why Dynamic is the default disk type that the iSCSI Target cmdlets provision.
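As a quick sketch (the paths and sizes are placeholders), creating a new iSCSI Virtual Disk in Windows Server 2012 R2 produces a dynamic, thin-provisioned VHDX by default:

# Creates a dynamic (thin-provisioned) VHDX-based iSCSI Virtual Disk by default
New-IscsiVirtualDisk -Path C:\iSCSIVirtualDisks\LUN1.vhdx -SizeBytes 100GB

# The -UseFixed switch requests a fixed VHDX instead
New-IscsiVirtualDisk -Path C:\iSCSIVirtualDisks\LUN2.vhdx -SizeBytes 100GB -UseFixed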

 

Wrap-up

So here is the short summary:

  • Important new functionality only available in Windows Server 2012 R2 includes: iSCSI Virtual Disks for all types of VHDX, all types of dynamic disks, and support for online shrink. On the flip side, creating a new VHD-formatted iSCSI Virtual Disk is no longer supported in Windows Server 2012 R2, although importing one works just fine.
  • Expand/Shrink functionality for iSCSI Virtual Disks may be supported in offline mode (“Offline” in tables) or supported in online mode (“Yes” in tables), or not supported at all (“No” in tables). When an operation is supported in online mode, there is no need to take the Virtual Disk offline to perform the operation. Even when that is the case:
    • With Expand operation, make sure to also increase the partition size (or create a new partition) on the initiator to utilize the additional capacity.
    • With the Shrink operation, you can only shrink the unallocated part of the capacity; that is, make sure you are not trying to shrink the Virtual Disk to a size smaller than the end of the last partition - if you do, it will fail. Check out the Resize-IscsiVirtualDisk cmdlet for complete guidance (see the sketch after this list).
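Here is a minimal sketch of an online expand (the path and new size are placeholders):

# Expand an existing VHDX-based iSCSI Virtual Disk to 200 GB
Resize-IscsiVirtualDisk -Path C:\iSCSIVirtualDisks\LUN1.vhdx -SizeBytes 200GB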

I hope this blog post serves as a useful reference for you. If you have feedback, drop me a comment on this blog post. Let me also know how the new VHDX-based iSCSI Virtual Disks are working for you so far on Windows Server 2012 R2!

To scale out or not to scale out, that is the question


Hi folks, Ned here again. If Shakespeare had run Windows, Hamlet would be a play about configuring failover clusters for storage. Today I discuss the Scale-Out File Server configuration and why general use file server clustered shares may still have a place in your environment. These two types of Windows file servers have different goals and capabilities that come with trade-offs; it’s critical to understand them when designing a solution for you or your customer. We’ve not always done a great job explaining the differences between these two file servers, and this post aims to clear things up.

It's not enough to speak, but to speak true. So let’s get crackin’.

The taming of the SOFS

We released Scale-Out File Server (SOFS) in Windows Server 2012. SOFS adds highly available, active-active, file data access for application data workloads to Windows Server clusters through SMB, Continuous Availability (CA), and Cluster Shared Volumes (CSV). CA file shares ensure - amongst other things - that when you connect through SMB 3, the server synchronously writes through to the disk for data integrity in the event of a node failure. In other words, it makes sure that your files are consistent and safe even if the power goes out.

You get the option when you configure the File Server role in a Windows Server failover cluster:

image

SOFS has some other key benefits:

  • Simultaneous active shares– This is the actual “scale-out” part of SOFS. Every cluster node makes the CA shares accessible simultaneously, rather than one node hosting a share actively and the others available to host it after a failover. The nodes then redirect the user to the underlying data on the storage-owning node with the help of Cluster Shared Volumes (CSV). Even better, no matter which node’s share you connect to, if that node later crashes, you automatically connect to another node right away through SMB Transparent Failover. This requires shares configured with the CA attribute, which are set by default on clustered shares during creation.
  • More bandwidth– Since all the servers host all the shares, all of the nodes’ aggregate network throughput becomes available - although naturally, the underlying storage is still potentially limiting. Adding more nodes adds more bandwidth. SMB Multichannel and SMB Direct (aka Remote Direct Memory Access) take this further by ensuring that the client-server conversations efficiently utilize the available network throughput and minimize the CPU usage on the nodes.
  • Simpler management - SOFS streamlines share management now that the active-passive shares and physical disk resource issues are under the scale-out umbrella; with active-active nodes, you don’t need to manage balancing shares onto each node so that all servers have work to do. Server Manager and the SmbShare Windows PowerShell module also unify the command-line experience and expose useful functionality in a straightforward fashion.

image

Claus Joergensen has a good blog post on these capabilities, and if you want to set up a test environment, Jose Barreto is the man with the step-by-step plans.

All of this has a single customer in mind: application data accessed via SMB, like Hyper-V virtual machine disks and SQL database files. With your hypervisor running on one cluster and your storage on another cluster, you can manage – and scale - each aspect of the stack as separate high-performance entities, without the need for expensive SAN fabrics. Truly awesome stuff that sounds like a great fit for any high availability scenarios.

image

For those who want to use SOFS for regular user shares, though: proceed with caution.

Enter Information Worker, holding a laptop

Inside Microsoft, we define “Information Worker” as the standard business user scenario. In other words, a person sitting at their physical client or virtual desktop session, and connecting to file servers to access unstructured data. This means SMB shares filled with home folders, roaming user profiles, redirected folders, departmental data, and common shared data; decades of documents, spreadsheets, and PDFs.

image
Ooh, there’s leftover cake in the break room!

Performance

The typical file operations from IW users are very different from application data workloads like Hyper-V or SQL. IW workloads are metadata-heavy (operations like opening files, closing files, creating new files, or renaming existing files). IW operations also involve a great many files, with plenty of copies and deletes and, of course, tons of editing. Even though individual users aren’t doing much, file servers have many users. These operations may involve masses of opens, writes, and closes, often on files without pre-allocated space. This can mean frequent VDL (valid data length) extension, which means many trips to the disk and back, all over SMB.

Right away, you can see that going through a CA-enabled share, which provides the data integrity guarantee, might have a performance impact compared to previous releases of Windows Server, which did not have CA shares and thus did not provide this guarantee. Continuous Availability requires that data write through to the disk to ensure integrity in the event of a node failure in SOFS, so everything is synchronous, and any buffering only helps on subsequent reads, not writes. A user who needs to copy many big files to a file server - such as by adding them to a redirected My Documents folder - can see significantly slower performance on CA shares. A user who spent a week working from home and returns with a brimming Offline Files cache will see slower uploads to CA shares.

Nothing is broken here – this is just a consequence of how IW workloads operate. A big VHDX or SQL database file also sees slower creation time through a CA share, but it’s largely a cost paid once, because the files have plenty of pre-allocated space to use up, and subsequent IO prices are comparatively much lower. We also optimize SMB for them, such as with SMB Direct’s handling of 8K IOs.

To demonstrate this, I performed a few small-scale tests in my gross test environment. Don’t worry too much about the raw numbers; just focus on the relative performance differences.

Environment:

  • Single-node Windows Server 2012 R2 RTM cluster with CSV on Storage Spaces with 1-column mirrors and two scale-out SMB shares (one with CA enabled and one without)
  • One Windows Server 2012 R2 RTM client and one Windows 8.1 RTM client
  • A single DC in its own forest
  • All of the above virtualized in Hyper-v
  • Each test repeated many times to ensure a reasonably reliable average.
  • I used Windows PowerShell’s Measure-Command cmdlet for timing in the first three test types and event logging for the redirected folders test.

Note: to set this up, see Jose’s demo here. My only big change was to use one node instead of three, so I had more resources in my very gross test environment.

Methodologies:

  • Internal MS test tool that generates a synthetic 1GB file with random contents.
  • Robocopy of a 1GB file with random contents (using no optional parameters).
  • Windows PowerShell Copy-Item cmdlet copy of a real-life sample user data set comprising 1,238 files in 96 folders for 2GB total (using -Force -Recurse).
  • Sync of a redirected My Documents shell folder comprising 4,609 Files in 143 Folders for 5GB total, calculating the time between Folder Redirection operational event log events 1006 and 1001.

Results:

| Test method                               | CA, avg sec | Non-CA, avg sec | Non-CA to CA IW perf comparison |
|-------------------------------------------|-------------|-----------------|---------------------------------|
| MS internal synthetic file creation (1GB) | 59          | 40              | 1.475 X faster                  |
| Robocopy.exe (1GB)                        | 58          | 42              | 1.38 X faster                   |
| Copy-Item cmdlet (2GB)                    | 107         | 73              | 1.465 X faster                  |
| Folder Redirection full sync (5GB)        | 689         | 545             | 1.26 X faster                   |

Important: again, this could be faster in absolute terms on your systems with similar data, as my test system is very gross. It could also be slower if your server is quite busy, has crusty drivers installed, is on a choked-out network, etc.

The good news

MS Office 2013’s big three – Word, Excel, and PowerPoint – performed well with both CA and non-CA shares and don’t have notable performance differences in my tests even when editing and saving individual files that were hundreds of MB in size. This is because later versions of Office operate very asynchronously, using local temporary files rather than forcing the user to wait on remote servers. On a remote 210MB PPTX, the save times on an edited file were nearly identical, so I didn’t bother posting any results.

The not-so-good news

Office’s good performance is less likely in other user applications; MS Office has been at this game for 22 years. One internal test application I used to generate files had non-CA performance similar to the synthetic file creation test above. However, when the same tool ran against a CA share, it was 8.6 times slower, because it continuously asked the server to allocate more space for the file and paid the synchronous write-through cost each time. There’s no way to know which apps are the more “write-through inefficient” ones until you find out in testing.

Important: even general-purpose file server clusters have CA set on their shares by default when created via the cluster admin tool, Server Manager, or New-SmbShare. You should consider removing that setting if you value performance over data write-through integrity on clustered shares. On non-clustered file servers, you cannot enable CA.
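As a sketch (the share name UserData is hypothetical), you can inspect and change the setting with the SmbShare cmdlets:

# Check which shares have Continuous Availability enabled
Get-SmbShare | Format-Table Name, ContinuouslyAvailable

# Disable CA on a share where IW performance matters more than write-through integrity
Set-SmbShare -Name UserData -ContinuouslyAvailable $false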

This is conceivably useful even with SOFS and application data workloads: for instance, you could create two shares pointing to the same folder - one with CA for Hyper-V to mount VHDXs remotely, and one without CA for copying VHDXs to that folder when configuring new VMs, such as through SCVMM.

Final important note: make sure you install (at a minimum) KB2883200 on your Windows Server 2012 R2 servers and Windows 8.1 clients; it makes copying to shares a little faster. Better yet, stay up to date on your file server by using this list of currently available hotfixes for the File Services technologies in Windows Server 2012 and Windows Server 2012 R2.

Capabilities

The performance issues are actually manageable; many users probably won’t notice any write-through impact, depending on their work patterns. The real issue here is that Scale-Out requires CSV, and this paints your environment into a corner, because many IW-oriented file server features do not support that file system.

At first, you configure files on a scale-out cluster share and it works fine. Nevertheless, a year later, when you decide you need more file server capabilities like Work Folders, Dynamic Access Control, File Classification Infrastructure, and FSRM file quotas and screens – you are blocked.

Let’s go to the big board.

| Technology Area | Feature                                             | General Use File Server Cluster | Scale-Out File Server |
|-----------------|-----------------------------------------------------|---------------------------------|-----------------------|
| SMB             | SMB Continuous Availability                         | Yes                             | Yes                   |
|                 | SMB Multichannel                                    | Yes                             | Yes                   |
|                 | SMB Direct                                          | Yes                             | Yes                   |
|                 | SMB Encryption                                      | Yes                             | Yes                   |
|                 | SMB Transparent Failover                            | Yes1                            | Yes                   |
| File System     | NTFS                                                | Yes                             | NA                    |
|                 | Resilient File System (ReFS)                        | Yes                             | NA                    |
|                 | Cluster Shared Volume File System (CSV)             | NA                              | Yes                   |
| File Management | BranchCache                                         | Yes                             | No4                   |
|                 | Data Deduplication (Windows Server 2012)            | Yes                             | No4                   |
|                 | Data Deduplication (Windows Server 2012 R2)         | Yes                             | Yes                   |
|                 | DFS Namespace (DFSN) root server                    | Yes                             | No4                   |
|                 | DFS Namespace (DFSN) folder target server           | Yes                             | Yes                   |
|                 | DFS Replication (DFSR)                              | Yes                             | No4                   |
|                 | File Server Resource Manager (screens and quotas)   | Yes                             | No4                   |
|                 | File Classification Infrastructure                  | Yes                             | No4                   |
|                 | Dynamic Access Control (claim-based access, CAP)    | Yes                             | No4                   |
|                 | Folder Redirection                                  | Yes                             | Yes2                  |
|                 | Offline Files (client-side caching)                 | Yes                             | Yes2                  |
|                 | Roaming User Profiles                               | Yes                             | Yes2                  |
|                 | Home Directories                                    | Yes                             | Yes2                  |
|                 | Work Folders                                        | Yes                             | No4                   |
| NFS             | NFS Server                                          | Yes                             | No4                   |
| Applications    | Hyper-V                                             | Yes3                            | Yes                   |
|                 | Microsoft SQL Server                                | Yes3                            | Yes                   |

1 Only works if CA is enabled on shares

2 Not recommended on Scale-Out File Servers.

3 Not recommended on general use file servers.

4 Requires NTFS

Ultimately, this means that if you, your boss, or your customer decides “after that recent audit, we need to use DAC+FCI for more manageable security and we definitely need to screen out MP3 files and Grumpy Cat meme pics”, you will be forced to recreate the entire configuration using NTFS and general use file server clusters. This does not sound pleasant, especially when you now have to shift around terabytes of data.

image

Moreover, let’s not forget about down-level clients like Windows 7; any CA shares require SMB 3.0 or later and older clients connecting to them cannot use SOFS features. While a Windows 7 or Vista client can connect to a CA share, you need Windows 8 or later to use the CA feature.

As for XP? It cannot connect to a CA share at all. This doesn’t matter though, because you already got rid of XP. Right?

The wheel is come full circle

Finally, though, is the big question: if you accept the performance overhead, what does continuous availability provided by SOFS buy you with IW workloads?

The answer: little.

Many end-user applications don’t need the guarantees of continuous availability that SQL and Hyper-V demand in their workload. Your IW applications like Office and Windows Explorer are often quite resistant to the short-term server outages during traditional cluster failover. MS Office especially – it has lived for years in a world of unreliable networking; it uses temp files, works offline, and retries constantly without telling the user if there are intermittent problems contacting a file on a share.

The bottom line is that Word and all its friends will be just fine using traditional general use shares on clusters. Make sure that before you go down the scale-out route in a particular cluster design, it’s the right approach for the task.

image

If you caught all the pseudo-Shakespeare references in this article, post the count in the comments and win a fabulous No-Prize!

Until next time,

- Ned “Exit, pursued by a bear” Pyle

Work Folders interoperability with other file server technologies


A couple of weeks ago, I delivered a presentation on Work Folders deployments that included a slide on how Work Folders interoperates with other file server technologies. It occurred to me that it’s worth writing a blog post about it.

File classification infrastructure (FCI)

File Classification Infrastructure was released in Windows Server 2008 R2 as part of File Server Resource Manager (FSRM). It enables organizations to classify their files (assign properties to files) and then use Windows mechanisms as well as partner solutions to apply actions based on the file classification. The classification runs on the server. When a file is created or modified on a device and syncs to the server, the classification engine detects the new file or file change and classifies the file according to the rules. This allows the admin to continue managing the files when they are accessed through Work Folders.

RMS encryption

RMS encryption is typically deployed with FCI on file servers to protect sensitive data. Files are classified according to the classification rules, and based on the classification result, a file gets RMS-encrypted if it meets the appropriate policy. The encryption is considered a server change, so when the Work Folders client polls for changes, the encrypted file is downloaded to the clients.

The end user doesn’t need to do anything special to encrypt the data - RMS gives the admin control over securing the data. The next time the user opens the file, they will notice the file is now RMS-protected. To open this RMS-protected file, the user might have to provide credentials if the device isn’t connected to the corporate network.

Quotas and File screen

Similarly, administrators can configure quotas and file screens for the Work Folders sync shares, which are enforced on the server. For example, if the user puts a file in Work Folders that isn’t allowed on the server, the file fails to upload due to the file screen policy, and the user sees a file upload error in the Work Folders Control Panel item. Similarly, if the user exceeds the quota on the server, additional uploads fail until they reduce the size of their Work Folders to below the quota limit. Users see the allowed quota size in the Work Folders Control Panel app on Windows.
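As a rough sketch using the FSRM cmdlets (the path, size, and file group name below are example values; “Audio and Video Files” is one of the default FSRM file groups):

# Set a 5 GB quota on a sync share folder
New-FsrmQuota -Path D:\SyncShares\HRShare -Size 5GB

# Block audio and video files from being synced into the share
New-FsrmFileScreen -Path D:\SyncShares\HRShare -IncludeGroup "Audio and Video Files"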

DFS Replication

DFS Replication (DFSR) is commonly used on file servers to replicate files to different servers and locations. When planning for Work Folders data replication, it is important to know:

  1. The data to be replicated
  2. The replication direction.

There are 3 types of data on the file server associated with Work Folders:

  • The actual user data, which can be replicated for backup purposes (Note that user should sync with only a single Work Folders server)
  • The Work Folders sync database, which tracks the file versions for sync purposes. The database path is located on the same volume as the sync share, under VolumeDrive:\SyncShareState\<SyncShareName>\Metadata
  • Staging folder, which is under VolumeDrive:\SyncShareState\.

The database tracks the file version on the server, and it is used for comparisons when the client device checks with the server. The database is tied to a dataset on that server, and should not be copied to other servers. In the case where the primary file server is down, and the admin wants to enable sync using the replica server, they can do so by re-creating the matching sync shares. The sync service on the new server will create a new database for the given dataset.

The staging folder serves as a temporary location for uploaded files. The files are uploaded to the staging folder first, and once a file completes the upload, it will be moved to the actual user folder on the server. This folder doesn’t need to be replicated.

In short, the SyncShareState folder should NOT be replicated.

As for replication direction, we only support one-way replication (read-only replication) from the sync server to other DFSR servers. Because the sync engine can’t reliably track file changes that arrive through replication from outside of it, we want to avoid those situations.

Failover Clustering

You can set up Work Folders on a traditional failover cluster. There is no difference from the user experience perspective, though there are a few differences in how you administer the server. For more information, see this clustering management blog post.

SMB

I think of SMB and Work Folders as two different access protocols to a dataset on the file server. They are almost orthogonal to each other. I say “almost”, because there is an interoperability aspect between the two.

File changes made through Work Folders are tracked by the sync service, so all changes that originate on a client device are known to Work Folders. However, if a file changes by other means - for example, through SMB access - Work Folders doesn’t intrinsically recognize the change. To catch these changes, there is a process that enumerates the files periodically to detect changes.

File enumeration does have a performance impact on the server. By default, enumeration takes place every 5 minutes. This means that when the client asks the server for changes, if the server sees that the user’s files haven’t gone through enumeration in the last 5 minutes, it starts the file enumeration process on that user’s dataset. The enumeration setting can be changed using the following cmdlet:

Set-SyncServerSetting -MinimumChangeDetectionMins <minutes>

This setting impacts how quickly changes made outside of Work Folders get synced to all the client devices, so the admin needs to evaluate the tradeoff and set it appropriately.
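For example, to lower the change detection interval to 2 minutes (the value is purely illustrative):

Set-SyncServerSetting -MinimumChangeDetectionMins 2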

Note 1: Changes made outside of Work Folders can be made by users or by applications. For example, RMS encryption happens outside of Work Folders, so those changes get picked up during the server-side change detection process.

Note 2: SMB is just one example of a protocol allowing remote access; NFS and WebDAV behave similarly with Work Folders.

Dynamic Access Control (DAC)

There are a couple of areas where DAC interoperates with Work Folders: discovery and data access.

Discovery happens during the device setup of Work Folders. The client queries the server for sync locations. On the server, each sync share is associated with one or more security groups; the server uses the user's security group membership to find the correct sync share for a given user. DAC is not used during discovery.

Once Work Folders is configured on a device, it enters the ongoing sync phase. For each sync session, after the user authenticates with the server, the server accesses the data in that user's context, and all NTFS file/folder permissions are honored, including DAC.

Looking at this from a different angle, DAC is more relevant for team sharing, whereas Work Folders targets individual users' folders (e.g. home folders), so the two features are unlikely to operate on the same dataset.

Ending notes

The areas I covered above are the ones I chose to highlight, and they by no means constitute the full list of interoperability considerations for Work Folders; think of this as a starting point. Feel free to mention other areas you are interested in, in the comments.

Deploying Windows Server 2012 R2 Work Folders in a Virtual Machine in Windows Azure


Hi Folks,

Here is another great blog post by Siddhartha Singh, one of the senior test leads on the Work Folders team. He has documented the steps required to deploy Work Folders on an Azure VM (IaaS). While the process is similar to an on-premises deployment, there are some aspects that need special attention, which Sid describes below.

========================================================================================================================= 

Overview

This blog post discusses how to set up Work Folders on a virtual machine (VM) in Windows Azure. Work Folders is a new technology in Windows Server 2012 R2 that provides a consistent way for users to access their work files from their PCs and devices. This functionality is powered by the Work Folders service, which is part of the File and Storage Services role.

Using Windows Azure virtual machines allows you to provision infrastructure with a pay-as-you-go model and benefit from enterprise-grade support and availability. With Windows Azure VMs, you can start with a limited deployment of Work Folders in your enterprise, and then easily scale out to more users as required.

You will find detailed information about Windows Azure virtual machines here:

http://www.windowsazure.com/en-us/services/virtual-machines/

 

Create a virtual machine for Work Folders

You can create a virtual machine from the Windows Azure Portal (http://www.windowsazure.com/en-us/manage/windows/tutorials/virtual-machine-from-gallery/) by selecting Windows Server 2012 R2.

Establish connectivity to your on-premises network

Once you have a Windows Azure virtual machine running Windows Server 2012 R2 set up for Work Folders, you need to connect the server to your organization’s Active Directory domain so that users can authenticate.

To connect the Work Folders server to your on-premises network, use the Windows Azure Virtual Network and configure a site-to-site VPN. This makes the Work Folders VM in Windows Azure part of your network and enables it to communicate directly and securely with your on-premises network. This will allow all clients that are in your on-premises network to sync with the Work Folders VM in Windows Azure.

 More information on Windows Azure Virtual Network can be found here:

http://msdn.microsoft.com/en-us/library/windowsazure/jj156007.aspx

 

Here is a diagram of a sample topology; I will use this topology as a reference for the rest of the blog:

In the topology above, the Work Folders server is configured on a virtual machine in Windows Azure, and is connected to the corporate network using Windows Azure Virtual Network’s Site-to-Site VPN, which allows it to be joined to a corporate domain. User devices in the corporate network are able to sync with the Work Folders server just as they would with a server on-premises.

 

Note that in this topology, there is no access to the Work Folders server from outside of the corporate network. For access from the Internet, you can deploy a web reverse proxy, either on-premises, or as a Windows Azure VM. For more details, please refer to the “Designing a Work Folders Implementation” guide at:

http://technet.microsoft.com/library/dn479242.aspx

 

 

Configure storage for the Work Folders virtual machine

To store user data and sync metadata, you have to attach a disk to the virtual machine that is running Work Folders.

To learn how to do this, follow the instructions here:

http://www.windowsazure.com/en-us/manage/windows/how-to-guides/attach-a-disk/
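If you prefer to script the disk attach, here is a rough sketch using the Windows Azure PowerShell cmdlets of that era (the cloud service and VM names are assumptions; size the disk to your needs):

    # Attach a new, empty 100 GB data disk to the Work Folders VM (hypothetical names)
    Get-AzureVM -ServiceName "ContosoWF" -Name "WorkFoldersVM" |
        Add-AzureDataDisk -CreateNew -DiskSizeInGB 100 -DiskLabel "WFData" -LUN 0 |
        Update-AzureVM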

 

Configuring the Work Folders virtual machine

Once the virtual machine is ready, and you have configured cross-premises connectivity, you can join the server to an on-premises domain, and configure a test deployment of Work Folders by following the instructions here:

Deploying Work Folders

Work Folders Test Lab Deployment (blog post)

 

Summary 

Setting up Work Folders servers on Windows Azure virtual machines helps you quickly deploy a solution for your users, without having to purchase and manage additional on-premises hardware, while benefiting from the availability and support SLA that Windows Azure provides. With Windows Azure VMs, you can also grow the size of the deployment over time, as you on-board additional users in your enterprise.

Thanks
Sid

Deploying Work Folders with AD FS and Web Application Proxy (WAP)


 

Overview

Work Folders is a new component introduced in Windows Server® 2012 R2. It allows Information Workers to sync their work files between their devices. This functionality is powered by the Work Folders service that can be enabled as a Windows Server 2012 R2 File Services sub role.

Web Application Proxy is a new Remote Access role service in Windows Server® 2012 R2. Web Application Proxy provides reverse proxy functionality for web applications inside your corporate network to allow users on any device to access them from outside the corporate network. Web Application Proxy pre-authenticates access to web applications using Active Directory Federation Services (AD FS), and also functions as an AD FS proxy.

Setting up and configuring systems can be one of the most time-consuming and tedious parts of the job. This blog post aims to make the deployment of Work Folders, AD FS and Web Application Proxy in Windows Server 2012 R2 as easy as possible.

This blog post will provide walk-through instructions on how to set up Work Folders with AD FS and Web Application Proxy (WAP) for production and test environments. The goals of the blog are:

  • Provide a walk-through of deploying and setting up an environment that reflects what most customers would want in a real-world deployment
  • Provide detailed instructions for configuring the environment via the UI
  • Provide scripts that let the entire environment be configured in less than 15 minutes.

This blog will be longer than average, as one of its goals is to provide a complete, documented end-to-end overview of deploying Work Folders with AD FS and WAP.

A set of PowerShell scripts is also provided that automates all of the steps in this blog for a test environment, including creating the self-signed certs needed for AD FS and Work Folders. Once you have received your certificates (from the CA of your choice), the environment can be changed over for production use. Of course, sometimes you just want to set up an environment as quickly as possible without waiting to get certificates from a CA or having to set up a certificate server.

Download scripts here

The scripts will create the VMs for non-domain-joined and domain-joined machines using an unattend XML file that is dynamically generated from values in the configuration file.

 

The scripts make the deployment of the entire environment take less than 12 minutes. Of course, your mileage may vary by a couple of minutes depending on the hardware that you are using. The test environment consists of AD FS, Work Folders, WAP and two test clients (one domain-joined and one non-domain-joined). Work Folders will be configured with a default sync share for all users and set up to use AD FS; the sync share will have SMB access enabled and will require encryption. AD FS will be set up with the relying party information needed to talk to Work Folders. WAP will be set up to use the AD FS endpoints so that clients on the Internet/external network can use AD FS. The scripts are also written to use remote PowerShell for the deployment, which means that the entire environment can be set up non-interactively from the host machine; you do not need to log into each VM.

If you are interested in just getting to the PowerShell scripts and setting up your environment that way, the section on the scripts is located near the end of the blog.

 

Prerequisites

In this blog post, I'll provide a step-by-step guide to setting up Work Folders with AD FS and WAP. When finished, you will have a completely functioning environment configured with self-signed certificates.

To follow this step by step guide, you will need to have the following computers ready:

  • Domain controller: a server with the Active Directory Domain Services role enabled, configured with a domain (for example: Contoso.com) and running Windows Server 2012 R2. A domain controller running at least Windows Server 2012 R2 is only needed to support Workplace Join, which we will also be setting up.
  • Two servers joined to the domain (Contoso.com), each running Windows Server 2012 R2 (one machine will be for AD FS and another for Work Folders)
  • One server that is not domain-joined. This server will run WAP and will need one network card for the domain network (Contoso.com) and another network card for the external network (Fabrikam.com)
  • One domain-joined client computer running Windows 8.1
  • One non-domain-joined client computer running Windows 8.1 on Fabrikam's virtual network

Download Windows Server 2012 R2 eval versions here:

http://technet.microsoft.com/en-US/evalcenter/dn205286.aspx

Download Windows 8.1 Enterprise eval versions here:

http://technet.microsoft.com/en-US/evalcenter/hh699156

The computers can be physical or VMs. If you do not have those computers already, you can run provisionEnvironment.ps1 from the attached zip and it will create all of the required VMs, with the exception of the domain controller. Before running the script, be sure to edit the file vms.txt to update the network information appropriately. Instructions for editing the scripts are at the end of this blog.

Once done, you’ll have a topology that looks like the below:

image

This blog will focus on setting up AD FS, Work Folders and WAP. In our scenario, WAP will not be joined to the domain. WAP can be deployed either domain-joined or not; there are some deployments in which you might wish to have WAP joined to a domain, such as when using Windows authentication. If you plan to use Integrated Windows authentication, the Web Application Proxy server must be joined to an AD DS domain and you must also configure Kerberos Constrained Delegation (KCD).

Setup

We will set up the environment in the following order:

· AD FS

· Work Folders

· Web Application Proxy

· The domain-joined workstation and non-domain joined workstation

We are going to assume you have the following setup before you dive in.

  • An Active Directory forest (preferably with DNS enabled, but not required) running Windows Server 2012 R2. Windows Server 2012 R2 is only required if you wish to use device registration for Workplace Join; otherwise you can use Windows Server 2012
  • A VM/Server for AD FS Server running Windows Server 2012 R2
    • Domain Joined
    • Valid network configuration
  • A VM/Server for Work Folders running Windows Server 2012 R2
    • Domain Joined
    • Valid network configuration
  • A VM/Server for Web Application Proxy running Windows Server 2012 R2
    • Not domain-joined (in this walkthrough)
    • Valid network configuration

The scripts provisionEnvironment and setupEnvironment will do the following:

· create all VMs needed for the lab

· domain join the machines that need to be domain joined

· install and fully configure the respective server roles on AD FS, Work Folders and WAP

· create and install the self-signed certificates on all appropriate machines

The end result is that you will have a completely deployed environment configured with self-signed certificates. The two client machines (one domain-joined on the Contoso corpnet and one non-domain-joined on an external network) will be ready to start using Work Folders either on-premises or through WAP. All that remains is for the user to initiate the Work Folders setup process.

Setting up AD FS

Pre-Install Work

Before beginning to install AD FS, there are two things that you should do ahead of time that will save you valuable time and make the setup process quicker.

The first item that must be set up is an Active Directory domain administrator account for the AD FS service to run under. For the test environment in this blog we will be using the default contoso\administrator account; using the default admin account is not recommended for production. Depending on your company policies, requesting and receiving a domain admin account may take some time, so be sure to get this done up front.

The second item that you will need to successfully set up AD FS is an SSL SAN certificate for server authentication. For production you will want to use a publicly trusted certificate.

There are many commercial CAs from which you can purchase the certificate. You can find the CAs trusted by Microsoft on this page: http://support.microsoft.com/kb/931125. Another alternative is to get a certificate from your company's enterprise CA.

To purchase a certificate, visit the CA's website and follow the instructions there; each CA has a different purchase procedure.

For our test environment we will be using a self-signed certificate. It is important to note that AD FS does not support CNG certificates, which means that you cannot create the self-signed certificate using the PowerShell cmdlet New-SelfSignedCertificate. The attached zip file contains a PowerShell script called makecert.ps1 which will create a self-signed certificate that works with AD FS and prompt for the SAN names needed to create the certificate.

Creating AD FS Self-signed certificate
    1. Copy the file makeCert.ps1 to the AD FS machine.
    2. Open an Admin PowerShell window.
    3. Set the execution policy to Unrestricted:
       PS C:\temp\scripts> Set-ExecutionPolicy -ExecutionPolicy Unrestricted
    4. Change to the directory where the script was copied.
    5. Execute the makeCert script:
       PS C:\temp\scripts> .\makecert.ps1
    6. When prompted to change the subject certificate, enter the new value for the subject. In our case it will be blueadfs.contoso.com.
    7. When prompted to enter SANs, enter Y and then enter the SAN names one at a time, i.e. blueadfs.contoso.com <return> enterpriseregistration.contoso.com. When all of the SAN names have been entered, press return on an empty line.
    8. When prompted to install the certificates to the Trusted Root Certification Authority, press Y.

image

The AD FS certificate needs to be a SAN certificate with the following values:

<AD FS service name>.<domain>

enterpriseregistration.<domain>

<machine name>.<domain>

e.g.

blueadfs.contoso.com

enterpriseregistration.contoso.com

2012R2-ADFS.contoso.com

The enterpriseregistration SAN is needed for Workplace Join.

Setting up Server IP address

Change your server IP to a static IP. For this blog I'm using 192.168.0.160 / subnet mask: 255.255.0.0 / default gateway: 192.168.0.1 / preferred DNS: 192.168.0.150 (the IP address of your domain controller).

image

Install AD FS
Log into the VM or machine that you plan to install AD FS and open the Add Roles and Features Wizard.

To install AD FS, select the Active Directory Federation Services role under Server Roles and click Next.

clip_image002

If you plan on using your AD FS server as part of a hybrid deployment and will be using DirSync, also select the .NET Framework 3.5 features. To install .NET 3.5, you'll need to mount the Windows Server ISO as a DVD drive in your VM.

On the confirmation page you will see a note informing you that the Web Application Proxy role cannot be installed on the same computer as AD FS. Click Next.

clip_image004

If you did not choose to install .NET 3.5, you'll see a confirmation screen like the one below.

clip_image006

If you chose to install .NET 3.5, you'll see a confirmation screen like the one below. Make sure to provide an alternate source path to the Windows Server 2012 R2 ISO, e.g. d:\sources\SxS.

clip_image008

If you did not mount the Windows Server 2012 R2 ISO, you'll receive an error screen like the one below and you will need to restart the entire install process for AD FS.

clip_image010

To accomplish the equivalent install of AD FS through PowerShell, use the commands below:

    Add-WindowsFeature RSAT-AD-Tools
    Add-WindowsFeature ADFS-Federation -IncludeManagementTools
    # Only install the .NET 3.5 features if you plan to use DirSync
    # Replace <drive> with the drive letter of the mounted ISO
    Add-WindowsFeature NET-Framework-Features -Source <drive>:\sources\SxS

Configuring AD FS

The next step is to configure AD FS.

Select the Warning Flag at the top of Server manager and click on the link that says “Configure the federation service on this server”.

clip_image012

The link will launch the Active Directory Federation Services Configuration Wizard. The first page after the Welcome screen asks for the domain administrator account that will be used as the dedicated AD FS account.

Enter the subject name of the SSL certificate to be used for AD FS communication, and set the Federation Service Name. It is important that the Federation Service Name not be the name of an existing server in the environment; if you use the name of an existing machine, the AD FS configuration will fail and will need to be restarted.

clip_image014

Enter the name that you would like to be used for the Managed Service Account.

clip_image016

Select the option to use the Windows Internal Database and press Next.

clip_image018

The review screen will show an overview of the options you have selected. Press Next.

clip_image020

The next screen will be the Pre-requisites check page. If everything shows as good, press Configure.

clip_image022

If you used the name of the AD FS server or the name of another existing machine, you'll receive the error message below. Start the install over and choose a name other than the name of the AD FS machine or any existing machine.

clip_image024

When the configuration completes successfully you’ll see the below screen indicating that AD FS was successfully configured.

clip_image026

Here’s how to accomplish the same via PowerShell:

Install AD FS:
    Add-WindowsFeature RSAT-AD-Tools
    Add-WindowsFeature ADFS-Federation -IncludeManagementTools

Create the Managed Service Account:
    New-ADServiceAccount "ADFSService" -Server 2012R2-DC.contoso.com -Path "CN=Managed Service Accounts,DC=Contoso,DC=COM" -DNSHostName 2012R2-ADFS.contoso.com -ServicePrincipalNames HTTP/2012R2-ADFS,HTTP/2012R2-ADFS.contoso.com

Setup the AD FS Farm

Setting up the AD FS farm uses the Managed Service Account created above and the certificate you created in the pre-configuration steps.

     $cert = Get-ChildItem CERT:\LocalMachine\My | Where-Object { $_.Subject -match "blueadfs.contoso.com" } | Sort-Object NotAfter -Descending | Select-Object -First 1
     $thumbprint = $cert.Thumbprint
     Install-ADFSFarm -CertificateThumbprint $thumbprint -FederationServiceDisplayName "Contoso Corporation" -FederationServiceName blueadfs.contoso.com -GroupServiceAccountIdentifier contoso\ADFSService$ -OverwriteConfiguration -ErrorAction Stop

AD FS Post-configuration

There are four post-configuration tasks after setting up and configuring AD FS. They are:

  • Configuring DNS entries
  • Setting up Relying Trust for Work Folders
  • Exporting the AD FS certificate to install on the other machines
  • Configuring access to the private key
Configure DNS entries

You will need to create two DNS entries for AD FS. These are the same two names that were used in the pre-configuration steps when the SAN certificate was created (the <machine name>.<domain> record should already exist from the domain join):

<AD FS service name>.<domain>

enterpriseregistration.<domain>

e.g. for our setup

blueadfs.contoso.com

enterpriseregistration.contoso.com

Create the A/Cname records on AD for AD FS

On your DC, open up the DNS manager

clip_image028

Expand the Forward Lookup Zones, right click on your domain and select New Host (A).

clip_image030

The New Resource Record form will open. Enter the name for AD FS in the Name field; in the lab's case it is "blueadfs". The name must be the same as the subject used in the certificate for AD FS earlier, i.e. if the subject was adfs.contoso.com, then the name entered here must be adfs. In the IP field, enter the IP address for the AD FS server, i.e. 192.168.0.160.

clip_image032

It’s *important* to note that when setting up AD FS via the UI instead of via PowerShell, you must create an A record instead of a CNAME record. The reason is that the SPN created via the UI will only contain the alias used in setting up the AD FS service as a HOST/ entry.

clip_image034

In the PowerShell script, we create the SPN with the alias as a HOST/ entry but also set two HTTP/ entries. These entries allow the SPN to redirect from the alias to the host machine.

clip_image036

Add another alias (New Alias (CNAME)) for 'enterpriseregistration'. This alias is used for device registration (Workplace Join) and must be named 'enterpriseregistration'.

clip_image038

The PowerShell commands to accomplish the above are below. The commands must be executed on your domain controller.

Add-DnsServerResourceRecord -ZoneName "contoso.com" -Name blueadfs -A -IPv4Address 192.168.0.160

Add-DnsServerResourceRecord -ZoneName "contoso.com" -Name enterpriseregistration -CName -HostNameAlias 2012R2-ADFS.contoso.com
Setup AD FS Relying Trust for Work Folders

We can set up and configure the relying party trust for Work Folders even though Work Folders has not been set up yet. The relying party trust must be set up for Work Folders to use AD FS, and since we're already working in AD FS, now is a good time to do this step.

Select AD FS Management

In the right panel, under Actions, click Add Relying Party Trust to launch the wizard.

clip_image040

The first page of the Wizard is a Welcome screen. Click Next to start the wizard

clip_image042

Select Enter data about the relying party manually

clip_image044

Enter “WorkFolders” in the Display name field and click Next.

clip_image046

Select the AD FS profile option for creating the relying party trust.

clip_image048

Click Next on the Configure Certificates page. These certificates are used as optional token encryption certificates and are not needed for our configuration.

clip_image050

On the “Configure URL” screen, just click next.

clip_image052

On the “Configure Identifiers” page, set the relying party trust identifier to https://windows-server-work-folders/V1. This identifier is a hard-coded value used by Work Folders, and is sent by the Work Folders service when it communicates with AD FS.

clip_image054

Click Next on the Configure Multi-factor Authentication page.

clip_image056

On the Issuance Authorization page, select “Permit all users to access the relying party”

clip_image058

Click Next on the Ready to Add Trust page.

clip_image060

After the configuration is finished the last page of the Wizard will display and show that the configuration was successful.

clip_image062

Select the checkbox for editing the claim rules and press Close.

The Edit Claim Rules form will now open.

clip_image064

In the claim rule template list, select “Send LDAP Attributes as Claims”.

clip_image066

On the Configure Claim Rule page, do the following:

  • Enter the claim rule name: WorkFolders
  • Select Active Directory for the Attribute store
  • Map the following LDAP attributes to outgoing claim types:

User-Principal-Name –> UPN

Display-Name –> Name

Surname –> Surname

Given-Name –> Given Name

clip_image068

Click Finish when done and you’ll now see your new rule showing up in the Issuance Transform rules page.

clip_image070

Next, click the Issuance Authorization Rules tab and you’ll see that the rule for access is set to “Permit Access to All Users”

clip_image072
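For reference, the wizard steps above can be approximated in PowerShell. The claim-rule strings below are illustrative sketches (abbreviated to a UPN mapping and a permit-all rule), not the exact rules the UI generates:

    # Sketch: create the Work Folders relying party trust in one shot
    $transformRules = @'
    @RuleName = "WorkFolders"
    c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"]
     => issue(store = "Active Directory", types = ("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn"), query = ";userPrincipalName;{0}", param = c.Value);
    '@
    $authRules = '=> issue(Type = "http://schemas.microsoft.com/authorization/claims/permit", Value = "true");'
    Add-ADFSRelyingPartyTrust -Name "WorkFolders" -Identifier "https://windows-server-work-folders/V1" `
        -IssuanceTransformRules $transformRules -IssuanceAuthorizationRules $authRules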

Final steps

Once all this is done, we need to finish the configuration by running four commands in PowerShell. These four commands set options needed for Work Folders to successfully communicate with AD FS; these options on the relying party trust cannot be set through the UI. The commands:

  • enable the use of JWT
  • disable encrypted claims
  • enable auto-update
  • set issuing of OAuth refresh tokens to All Devices.

 

Set-ADFSRelyingPartyTrust -TargetIdentifier "https://windows-server-work-folders/V1" -EnableJWT $true

Set-ADFSRelyingPartyTrust -TargetIdentifier "https://windows-server-work-folders/V1" -EncryptClaims $false

Set-ADFSRelyingPartyTrust -TargetIdentifier "https://windows-server-work-folders/V1" -AutoUpdateEnabled $true

Set-ADFSRelyingPartyTrust -TargetIdentifier "https://windows-server-work-folders/V1" -IssueOAuthRefreshTokensTo AllDevices

Enabling Workplace Join

To enable device registration for Workplace Join, run the following three PowerShell commands, which configure device registration and set the global authentication policy.

    Initialize-ADDeviceRegistration -ServiceAccountName <your AD FS service account>
    Enable-ADFSDeviceRegistration
    Set-ADFSGlobalAuthenticationPolicy -DeviceAuthenticationEnabled $true

Exporting Certificate

The self-signed AD FS certificate will need to be exported and installed on the following machines in the test environment:

  • Work Folders
  • Web Application Proxy
  • Domain joined Windows 8.1 client
  • Non-domain joined Windows 8.1 client

To export the certificate:

1.  Click Start, and then click Run.
2.  Type MMC.
3.  On the File menu, click Add/Remove Snap-in.
4.  In the Available snap-ins list, select Certificates, and then click Add. The Certificates Snap-in Wizard starts.
5.  Select Computer account, and then click Next.
6.  Select Local computer: (the computer this console is running on), and then click Finish.
7.  Click OK.
8.  Expand Console Root\Certificates (Local Computer)\Personal\Certificates.
9.  Right-click Certificates, click All Tasks, and then click Export...

clip_image074

Go through the Certificate Export Wizard

clip_image076

Export the private key

clip_image078

Use the default options

clip_image080

Enter a password for the certificate. Remember the password, as you'll need it later when you import the certificate on the other devices.

clip_image082

Enter in a location and name for the certificate and then finish the wizard.

clip_image084
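If you'd rather script the export, something like this should work (the subject, output path, and password here are assumptions):

    # Export the AD FS certificate plus private key to a PFX file
    $pwd = ConvertTo-SecureString "pass@word1" -AsPlainText -Force
    $cert = Get-ChildItem Cert:\LocalMachine\My |
        Where-Object { $_.Subject -match "blueadfs.contoso.com" } |
        Sort-Object NotAfter -Descending | Select-Object -First 1
    Export-PfxCertificate -Cert $cert -FilePath "C:\temp\adfs.pfx" -Password $pwd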

Managing Private Key Setting

The AD FS service account needs permission to access the private key of the new certificate. You will need to do this again when you replace the communication certificate after it expires. To do this, follow these steps:

1. Click Start, and then click Run.

2. Type MMC.

3. On the File menu, click  Add/Remove Snap-in.

4. In the Available snap-ins list, select Certificates, and then click Add. The Certificates Snap-in Wizard starts.

5. Select Computer account, and then click Next.

6. Select Local computer: (the computer this console is running on), and then click Finish.

7. Click OK.

8. Expand Console Root\Certificates (Local Computer)\Personal\Certificates.

9. Right-click Certificates, click All Tasks, and then click Manage Private Keys.

10. Add the account that is running the AD FS service, and then give the account at least read permissions.

If you do not have the option to manage private keys, you may have to run the following command:
certutil -repairstore my *

Screenshots

clip_image085

clip_image087

clip_image089

clip_image091

Verifying AD FS is Operational

1. Browse to https://blueadfs.contoso.com/federationmetadata/2007-06/federationmetadata.xml.

If you can see the federation server metadata in your browser window without any Secure Socket Layer (SSL) errors or warnings, your federation server is operational.

clip_image093

clip_image095

2. You can also browse to the AD FS sign-in page: your federation service name with adfs/ls/idpinitiatedsignon.htm appended, for example, https://blueadfs.contoso.com/adfs/ls/idpinitiatedsignon.htm. This displays the sign-in page shown below.

clip_image097

clip_image099

clip_image101
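You can also script a quick check of the metadata endpoint (a simple probe, not a full validation):

    # Fails with an SSL error if the certificate isn't trusted on this machine
    Invoke-WebRequest -Uri "https://blueadfs.contoso.com/federationmetadata/2007-06/federationmetadata.xml" -UseBasicParsing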

Setting up Work Folders

Pre-Install Work
  • A VM/Server for Work Folders running Windows Server 2012 R2
    • Domain Joined
    • Valid network configuration

For our test environment, we have joined the VM that will be running Work Folders to the Contoso domain and set up the network interface as below. If you are setting up VMs, remember that the default gateway is the IP address of the virtual network adapter on the host machine (192.168.0.1 in our case).

Setting up Server IP address

Change your server IP to a static IP. For this blog I'm using 192.168.0.170 / subnet mask: 255.255.0.0 / default gateway: 192.168.0.1 / preferred DNS: 192.168.0.150 (the IP address of your domain controller).

Create the cname records on AD for Work Folders

On your DC, open up the DNS manager

Expand the Forward Lookup Zones, right click on your domain and select New Alias (CNAME)

The New Resource Record form will open. Enter the alias "workfolders" in the Alias name field. In the Fully qualified domain name field, enter the fully qualified domain name for the Work Folders server, i.e. 2012R2-WF.contoso.com.

The powershell command to accomplish the above is below. The command must be executed on your Domain controller.

 

Add-DnsServerResourceRecord -ZoneName "contoso.com" -Name workfolders -CName -HostNameAlias 2012R2-WF.contoso.com
Install AD FS certificate
Copy the AD FS certificate that was created when setting up AD FS, and install it into the local computer certificate store. To do this, follow these steps:

1.  Click Start, and then click Run.
2.  Type MMC.
3.  On the File menu, click  Add/Remove Snap-in.
4.  In the Available snap-ins list, select Certificates, and then click Add. The Certificates Snap-in Wizard starts.
5.  Select Computer account, and then click Next.
6.  Select Local computer: (the computer this console is running on), and then click Finish.
7.  Click OK.
8.  Expand Console Root\Certificates (Local Computer)\Personal\Certificates.
9.  Right-click Certificates, click All Tasks, and then click Import.
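The import can also be scripted (the file path and password are assumptions carried over from the export step):

    # Import the exported AD FS PFX into the local computer's Personal store
    $pwd = ConvertTo-SecureString "pass@word1" -AsPlainText -Force
    Import-PfxCertificate -FilePath "C:\temp\adfs.pfx" -CertStoreLocation Cert:\LocalMachine\My -Password $pwd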

For our test environment we will also be using a self-signed certificate for Work Folders, created with the same makecert.ps1 script from the attached zip file; it prompts for the SAN names needed to create the certificate.

Creating Work Folders Self-signed certificate

1. Copy the file makeCert.ps1 to the Work Folders machine.

2. Open an Admin PowerShell window.

3. Set the execution policy to Unrestricted:

PS C:\temp\scripts> Set-ExecutionPolicy -ExecutionPolicy Unrestricted

4. Change to the directory where the script was copied.

5. Execute the makeCert script:

PS C:\temp\scripts> .\makecert.ps1

6. When prompted to change the subject certificate, enter the new value for the subject. In our case it will be workfolders.contoso.com.

7. When prompted to enter SANs, enter Y and then enter the SAN names one at a time, i.e. workfolders.contoso.com <return> 2012R2-WF.contoso.com <return>. When all of the SAN names have been entered, press return on an empty line.

8. When prompted to install the certificates to the Trusted Root Certification Authority, press Y.

The Work Folders certificate needs to be a SAN certificate with the following values:

workfolders.<domain>

<machine name>.<domain>

e.g.

workfolders.contoso.com

2012R2-WF.contoso.com

Install Work Folders

Start the "Add Roles and Features Wizard" and select Next.

Select Role-based or feature-based installation.

Select the current server and press Next.

On Server Roles, open the File and Storage Services role, expand the File and iSCSI Services role and select Work Folders.

Optionally, also install the IIS management tools. These provide some PowerShell management cmdlets that make binding the Work Folders certificate easier; binding the certificate to the SSL port will require PowerShell or the command line either way.

On the Role Services page for IIS, just enable the Management Scripts and Tools.

On the confirmation page, press Install.
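If you prefer PowerShell, a sketch of the equivalent role installation (FS-SyncShareService is the Work Folders role service name; the second line covers the optional IIS management scripts):

    Add-WindowsFeature FS-SyncShareService
    # Optional: IIS management scripts/tools, used later for certificate binding
    Add-WindowsFeature Web-Scripting-Tools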

Configure Work Folders

After Work Folders has been installed, we will need to complete the setup by configuring Work Folders.

Open Server Manager, Select File and Storage Services and Work Folders.

Click on the link in the Work Folders window to create a sync share.

Click next on the first screen.

Select the server and enter a path for the Work Folders data to be stored.

If the path does not exist, you’ll be prompted if you want to create it.

On the User Folder Structure page, select User alias and then press Next

On the Sync Share Name page, enter the name for the sync share. For our example we used “WorkFolders”. Press Next.

On the Sync Access page, add in the users or groups to have access to the new Sync Share. For our example we have granted access to all domain users. Press Next when done.

On the Device Policies page, select Encrypt Work Folders and Automatically lock screen and require a password. Press Next.

On the Confirmation page, press Create to finish configuring Work Folders.
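An approximate PowerShell equivalent of the wizard, as a sketch (the path is an assumed example; the group matches our setup):

    # Create the sync share with the same device policies selected in the wizard
    New-SyncShare -Name "WorkFolders" -Path "D:\WorkFolderData" -User "Contoso\Domain Users" `
        -RequireEncryption $true -RequirePasswordAutoLock $true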

Post-Install Work

To finish setting up Work Folders there are three additional steps:

· Binding the Work Folders cert to the SSL port

· Configuring Work Folders to use AD FS authentication

· Exporting the Work Folders certificate (if using a self-signed certificate)

Binding Cert

Work Folders only communicates over SSL and needs the self-signed certificate created earlier (or your CA-issued certificate) bound to the port.

To bind the certificate to the port with PowerShell, there are two methods: one uses the IIS cmdlets, and the other uses netsh.

Binding with netsh

To use netsh in PowerShell, you pipe the command to netsh. Below is an example that finds the cert with the subject "workfolders.contoso.com" and binds it to port 443 with netsh:

 

        $subject="workfolders.contoso.com"
     Try
    {
     #In case there are multiple certs with the same subject, get the lastest version
     $cert=Get-ChildItemCERT:\LocalMachine\My|where {$_.Subject -match$subject} |sort$_.NotAfter -Descending|select-first1 
     $thumbprint=$cert.Thumbprint
     $Command="http add sslcert ipport=0.0.0.0:443 certhash=$thumbprint appid={CE66697B-3AA0-49D1-BDBD-A25C8359FD5D} certstorename=MY"
     $Command|netsh
    }
    Catch
    {
      "        Error: unable to locate certificate for $($subject)"
        Exit
    }

Binding with IIS cmdlets

A second way to bind the certificate is with the IIS management cmdlets, which can be used if you installed the web management tools and scripts. Note that this does not enable full IIS on the Work Folders machine; it only enables the management cmdlets. There are some possible benefits to this approach, for example if you prefer cmdlets to netsh. When the cert gets bound to the port via the New-WebBinding cmdlet, the binding is not dependent on IIS in any way; after doing the binding you can even remove the Web-Mgmt-Console feature and the cert will still be bound to the port, verifiable via netsh by typing netsh http show sslcert.

Below is an example that works with Work Folders using New-WebBinding; it finds the cert with the subject "workfolders.contoso.com" and binds it to port 443:

 

   $subject="workfolders.contoso.com"
    Try
{
#In case there are multiple certs with the same subject, get the lastest version
     $cert=Get-ChildItemCERT:\LocalMachine\My|where {$_.Subject -match$subject } |sort$_.NotAfter -Descending|select-first1  
     $thumbprint=$cert.Thumbprint
New-WebBinding-Name"Default Web Site"-IP*-Port443-Protocolhttps
#The default IIS website name needs to be used for the binding, since Work Folders uses Hostable Web Core and its own config file, its website name, ‘ECSsite’  will not work with the cmdlet, so the workaround would be to use the default IIS website name, even though IIS is not enabled, as the NewWebBinding cmdlet looks for an site in the default IIS config file  
 Push-LocationIIS:\SslBindings
     Get-Itemcert:\LocalMachine\MY\$thumbprint|new-item*!443
     Pop-Location
    }
    Catch
    {
              "        Error: unable to locate certificate for $($subject)"
        Exit
    }

Setup AD FS Authentication

To configure Work Folders to use AD FS for authentication, open Server Manager, select Servers, select your server in the main panel, and right-click it to bring up the context menu. On the context menu, select Work Folders Settings.

On the Work Folder Settings window, select Active Directory Federation Services and type in the Federation Service URL and click apply. In our example it is

https://blueadfs.contoso.com

The cmdlet to accomplish this via PowerShell is:

Set-SyncServerSetting -ADFSUrl "https://blueadfs.contoso.com"

If you are setting up AD FS with self-signed certs, you may receive the error message below stating that the Federation Service URL is incorrect or unreachable, or that a relying party trust has not been set up.

This error can also happen if the AD FS cert has not been installed on Work Folders or if the CNAME for AD FS was not setup correctly.

Exporting Certificate

The self-signed Work Folders certificate will need to be exported and installed on the following machines in the test environment:

  • Web Application Proxy
  • Domain joined Windows 8.1 client
  • Non-domain joined Windows 8.1 client

To export the certificate:

1.  Click Start, and then click Run.
2.  Type MMC.
3.  On the File menu, click Add/Remove Snap-in.
4.  In the Available snap-ins list, select Certificates, and then click Add. The Certificates Snap-in Wizard starts.
5.  Select Computer account, and then click Next.
6.  Select Local computer: (the computer this console is running on), and then click Finish.
7.  Click OK.
8.  Expand Console Root\Certificates (Local Computer)\Personal\Certificates.
9.  Right-click Certificates, click All Tasks, and then click Export...

Go through the Certificate Export Wizard

Export the private key

Use the default options

Enter a password for the certificate. Remember the password, as you'll need it later when you import the certificate on the other devices.

Enter in a location and name for the certificate and then finish the wizard.

Setting up Web Application Proxy

Pre-Install Work
Install AD FS and Work Folder certificates
Copy the AD FS and Work Folders certificates that were created when setting up AD FS and Work Folders to the machine that will run the Web Application Proxy role, and install them into the local computer certificate store. To do this, follow these steps:

1.  Click Start, and then click Run.
2.  Type MMC.
3.  On the File menu, click  Add/Remove Snap-in.
4.  In the Available snap-ins list, select Certificates, and then click Add. The Certificates Snap-in Wizard starts.
5.  Select Computer account, and then click Next.
6.  Select Local computer: (the computer this console is running on), and then click Finish.
7.  Click OK.
8.  Expand Console Root\Certificates (Local Computer)\Personal\Certificates.
9.  Right-click Certificates, click All Tasks, and then click Import.

10.  Expand Console Root\Certificates (Local Computer)\Trusted Root Certification Authorities\Certificates.
11.  Right-click Certificates, click All Tasks, and then click Import.

Install Web Application Proxy

On the Server that you plan to install the Web Application Proxy, open Server Manager and start the Add Roles and Features Wizard.

Click Next on the first page of the wizard

Click Next on the Second page of the wizard

Select your server on the third page of the wizard and click Next.

On the Server Roles Page, select the Remote Access Role

On the Role Services page, select Web Application Proxy

A confirmation dialog will immediately pop up; press Add Features.

Configure Web Application Proxy

To configure the Web Application Proxy, select the Warning flag at the top of Server Manager which will show a link to open the Web Application Proxy Wizard

On the Welcome screen press Next

On the federation server page, enter the Federation Service name. For our example it is blueadfs.contoso.com.

The wizard also asks for the credentials of a local administrator account on the federation server. Do not enter domain credentials (i.e. contoso\administrator); enter local credentials (i.e. just administrator).

On the AD FS Proxy certificate page, select the AD FS certificate that was imported earlier and press Next.

In our case it is blueadfs.contoso.com

A confirmation page will display, showing the PowerShell command that will execute to configure the service. The option for enabling the Web Application Proxy to use OAuth is not available through the wizard and needs to be set via PowerShell; that step is shown in the post-install section.

If OAuth is not enabled on the Web Application Proxy, clients connecting through it will receive an error stating that the “Data in the stream is not in the proper format (0x80c81000)”.

clip_image086

The results page will display once configuration is complete. Close the window.

clip_image088
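For reference, the wizard's work can be approximated with Install-WebApplicationProxy; a sketch (the credential prompt and certificate lookup are assumptions matching our example values):

    # Configure WAP against the federation service (sketch)
    $cred = Get-Credential   # local administrator on the federation server
    $cert = Get-ChildItem Cert:\LocalMachine\My |
        Where-Object { $_.Subject -match "blueadfs.contoso.com" } | Select-Object -First 1
    Install-WebApplicationProxy -FederationServiceName "blueadfs.contoso.com" `
        -FederationServiceTrustCredential $cred `
        -CertificateThumbprint $cert.Thumbprint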

The next step is to publish the Work Folders Web Application.

On Server Manager, go to Tools, Remote Access to open the Remote Access Management Console.

Select Web Application Proxy under configuration on the left side of the management console and then click Publish under Tasks on the right. The Publish New Web Application Wizard will open.

Publish Work Folders Web Application

On the Welcome page, click Next.

clip_image090

On the Preauthentication page of the Wizard, select Active Directory Federation Services.

clip_image092

On the Relying Party page, select WorkFolders and press Next. This list is published to the Web Application Proxy from AD FS.

clip_image094

On the Publishing Settings page, enter the following:

  • The name you would like for the web application
  • The external URL for Work Folders
  • The Work Folders certificate
  • The back-end URL for Work Folders

By default, the wizard will make the back-end URL the same as the external URL.

For our example we will use the following values:

Name: WorkFolders

External URL: https://workfolders.contoso.com

External certificate: the Work Folders certificate installed earlier

Backend server URL: https://workfolders.contoso.com

clip_image096

The confirmation page will show the equivalent powershell script to do the configuration. Press Publish to finish publishing the Work Folders Web Application.

clip_image098

If the publishing is successful, you’ll see a confirmation page like below.

clip_image100
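The published application can likewise be created directly in PowerShell; a sketch with our example values (the thumbprint lookup assumes the Work Folders certificate installed earlier):

    $cert = Get-ChildItem Cert:\LocalMachine\My |
        Where-Object { $_.Subject -match "workfolders.contoso.com" } | Select-Object -First 1
    Add-WebApplicationProxyApplication -Name "WorkFolders" `
        -ExternalPreAuthentication ADFS `
        -ADFSRelyingPartyName "WorkFolders" `
        -ExternalUrl "https://workfolders.contoso.com" `
        -ExternalCertificateThumbprint $cert.Thumbprint `
        -BackendServerUrl "https://workfolders.contoso.com"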

Post-Install Work

To finish setting up the Web Application Proxy, there is one additional step that must be done: the Web Application Proxy must be configured to use OAuth. To do so, open an admin PowerShell window on the Web Application Proxy machine and execute the following command.

 

Get-WebApplicationProxyApplication -Name <web app name> | Set-WebApplicationProxyApplication -UseOAuthAuthentication

For our example, use the following command.

 

Get-WebApplicationProxyApplication -Name WorkFolders | Set-WebApplicationProxyApplication -UseOAuthAuthentication

As mentioned earlier, failure to do so will cause applications connecting through the Web Application Proxy to receive the error “Data in the stream is not in the proper format (0x80c81000)”.

Setting up Domain Joined Windows 8.1 client

Pre-Install Work
Install AD FS and Work Folder certificates
Copy the AD FS and Work Folders certificates that were created when setting up AD FS and Work Folders to the client machine, and install them into the local computer certificate store. To do this, follow these steps:

1.  Click Start, and then click Run.
2.  Type MMC.
3.  On the File menu, click  Add/Remove Snap-in.
4.  In the Available snap-ins list, select Certificates, and then click Add. The Certificates Snap-in Wizard starts.
5.  Select Computer account, and then click Next.
6.  Select Local computer: (the computer this console is running on), and then click Finish.
7.  Click OK.
8.  Expand Console Root\Certificates (Local Computer)\Personal\Certificates.
9.  Right-click Certificates, click All Tasks, and then click Import.

10.  Expand Console Root\Certificates (Local Computer)\Trusted Root Certification Authorities\Certificates.
11.  Right-click Certificates, click All Tasks, and then click Import.

Configuration

Open Control Panel and select Work Folders.

clip_image103

Select Set up Work Folders

clip_image105

Work Folders can then be set up using either the user's email address (mbutler@contoso.com) or the Work Folders URL (https://workfolders.contoso.com). Set up Work Folders by the mechanism of your choice and press Next.

clip_image107

clip_image109

You will then be presented with a credentials prompt from AD FS. On-premises, the prompt uses Windows Integrated Authentication. The challenge screen for off-premises clients uses OAuth, and those users will see a different screen.

clip_image111

After you have authenticated, the Introducing Work Folders window will display. On this page you can change the Work Folders document location. Press Next.

clip_image113

You will then see a window displaying the security policies that we set up on Work Folders. Press Next.

clip_image115

You will then see a message that Work Folders has started syncing with your PC. Press Close.

clip_image117

The Manage Work Folders panel will open, showing the amount of space, sync status, etc. On this panel you can re-enter your credentials if needed. Go ahead and close the window.

clip_image119

Your Work Folders folder will also automatically open. From here you can start adding content to sync between your devices. We'll add a test file on the domain-joined machine and then set up Work Folders on our non-domain-joined machine.

clip_image121

Setting up non-domain Joined Windows 8.1 client

Pre-Install Work
Install AD FS and Work Folder certificates
Copy the AD FS and Work Folders certificates that were created when setting up AD FS and Work Folders to the client machine, and install them into the local computer certificate store. To do this, follow these steps:

1.  Click Start, and then click Run.
2.  Type MMC.
3.  On the File menu, click  Add/Remove Snap-in.
4.  In the Available snap-ins list, select Certificates, and then click Add. The Certificates Snap-in Wizard starts.
5.  Select Computer account, and then click Next.
6.  Select Local computer: (the computer this console is running on), and then click Finish.
7.  Click OK.
8.  Expand Console Root\Certificates (Local Computer)\Personal\Certificates.
9.  Right-click Certificates, click All Tasks, and then click Import.

10.  Expand Console Root\Certificates (Local Computer)\Trusted Root Certification Authorities\Certificates.
11.  Right-click Certificates, click All Tasks, and then click Import.

Update Hosts file

The hosts file on the non-domain joined client will need to be updated for our demo environment.

The hosts file will need entries put in for:

workfolders.<domain>

<AD FS service name>.<domain>

enterpriseregistration.<domain>

For our example we will put in the following values:

10.0.0.10 workfolders.contoso.com

10.0.0.10 blueadfs.contoso.com

10.0.0.10 enterpriseregistration.contoso.com

clip_image123
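If you want to script the hosts file edit, run something like this from an elevated PowerShell window:

    # Append the demo entries to the hosts file
    $entries = @(
        "10.0.0.10 workfolders.contoso.com",
        "10.0.0.10 blueadfs.contoso.com",
        "10.0.0.10 enterpriseregistration.contoso.com"
    )
    Add-Content -Path "$env:windir\System32\drivers\etc\hosts" -Value $entries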

Work Folders can then be set up using either the user's email address (mbutler@contoso.com) or the Work Folders URL (https://workfolders.contoso.com). Set up Work Folders by the mechanism of your choice and press Next.

clip_image124

clip_image125

You will then be presented with a credentials prompt from AD FS. The challenge screen for the non-domain-joined client uses OAuth and differs from what users see on-premises.

clip_image127

After you have authenticated, the Introducing Work Folders window will display. On this page you can change the Work Folders document location. Press Next.

You will then see a window displaying the security policies that we set up on Work Folders. Press Next.

clip_image128

You will then see a message that Work Folders has started syncing with your PC. Press Close.

clip_image129

The Manage Work Folders panel will open, showing the amount of space, sync status, etc. On this panel you can re-enter your credentials if needed. Go ahead and close the window.

clip_image130

Your Work Folders folder will also automatically open. From here you can start adding content to sync between your devices. You will also see that the file from our domain-joined PC has already synced to the device.

And that’s it for setting up Work Folders, AD FS and WAP from the UI (mostly!).

Setting up Work Folders, AD FS and WAP via the attached scripts

The attached scripts enable you to set up and deploy the entire environment in less than 30 minutes. If you already have VMs/machines set up (i.e. domain-joined, network configured, etc.), then the configuration takes less than 12 minutes. The whole process is IO bound, so your mileage may vary. A colleague of mine has a very powerful laptop (Ivy Bridge i7 with 32 GB of memory and an SSD) and his setup time is less than 4½ minutes.

Download scripts here

The scripts were written with the assumption that you are starting from a clean environment and already have a DC set up. The scripts are designed to run from the VM host machine and use remote PowerShell to configure all of the machines in the environment, so you do not need to remote into any of the machines to set them up.

There are three main steps/scripts to execute:

· setHostNetworkAdapters.ps1 – this will set up the virtual switches on the host and configure the virtual network adapters as the default gateway for each network

· provisionEnvironment.ps1 – this will create the VMs needed for the environment from the downloaded ISOs and finish setting up the OS on each machine, which includes setting the VMs' network addresses and domain-joining where appropriate using unattend XML files

· setupEnvironment.ps1 – this will set up and configure AD FS, Work Folders, WAP and the two client machines.

Prerequisites

  • http://technet.microsoft.com/en-US/evalcenter/hh699156

For the scripts, the IP address of the DNS server is set to 192.168.0.150. If your DC has a different IP address, you will need to update the scripts accordingly.

Configuring Virtual Switches

The script setHostNetworkAdapters.ps1 will create the virtual switches on the Host machine and also set the IP address, subnet and DNS address.

The network adapters should be setup as the gateway address for each of their respective virtual networks. This enables the Host to access the VMs and vice versa. This is essential to run remote powershell scripts from host against the VMs on the network.

The script has a function called setNetworkAdapter that will set up a virtual switch and configure its IP address, subnet and DNS values.

To call the function you will need to pass in:

· Name of switch to create

· IP address to use as the gateway (the gateway IP should be the first available address in the network, i.e. XXX.XXX.XXX.1)

· The octet for the subnet

· IP address of the DNS server (optional, but should be configured on the VM network where the DC will reside)

The script is currently configured to create switches for Contoso and Fabrikam

 

setNetworkAdapter"ID_AD_NETWORK-Contoso"  192.168.0.124192.168.0.150
 
setNetworkAdapter"ID_AD_NETWORK-FABRIKAM" 10.0.0.124 

After running the scripts, the virtual switches will be created and the network adapters will be configured.

Contoso

Fabrikam

clip_image132

clip_image134

If you already have a DC deployed, then you most likely already have at least one virtual switch and network setup. If that is the case, check to ensure that the IP address on the network adapter that the host is using for the Virtual network is configured as a gateway.
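For readers who are curious, here is a rough reconstruction of what such a helper might look like (an assumed implementation; the attached script is authoritative):

    function setNetworkAdapter($switchName, $gatewayIP, $prefixLength, $dnsIP) {
        # Create an internal virtual switch if it doesn't already exist
        if (-not (Get-VMSwitch -Name $switchName -ErrorAction SilentlyContinue)) {
            New-VMSwitch -Name $switchName -SwitchType Internal | Out-Null
        }
        # The host-side adapter for an internal switch is named "vEthernet (<switch>)"
        $alias = "vEthernet ($switchName)"
        New-NetIPAddress -InterfaceAlias $alias -IPAddress $gatewayIP -PrefixLength $prefixLength | Out-Null
        if ($dnsIP) {
            Set-DnsClientServerAddress -InterfaceAlias $alias -ServerAddresses $dnsIP
        }
    }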

Provisioning the Environment

The script provisionEnvironment.ps1 will:

· Create base differencing disks for the Server and Client VMs. The Server base disk will be loaded with Windows Server 2012 R2 Datacenter and the Client base disk will be loaded with Windows 8.1 Enterprise.

· Configure the network adapter(s) on each VM

· Domain join – where appropriate

· Enable CredSSP on each VM

You must run this script from a powershell window with Admin privileges or it will not work.

The first time that the script is run, it will take about 8 minutes to create each base differencing disk. On subsequent runs, you can reuse the previously created base VHD with no issue.

After the provisioning is complete you will have a set of differencing disks that look like the below. If you wish to recreate the environment, you can delete the differencing disks based off the base disk and reuse the existing base disk. The base disk is clean and only contains the OS.

Servers VHD

Client VHDs

clip_image136

clip_image138

The setup and configuration of the VMs is done with an unattend XML file that is loaded onto each VM after it is provisioned.

Enabling CredSSP is done by creating and pushing a SetupComplete.cmd file into the directory Windows\Setup\Scripts. When the VM boots, it will execute the SetupComplete.cmd file.

Configuring the Script

The script obtains the list of machines to build and the VM information from a csv file called vms.txt.

Here is what the file looks like.

machine,server,DJ,name,memory,network1,ip1,dns1,network2,ip2,dns2

DC,Y,Y,2013R2-DC,1524,ID_AD_NETWORK-FABRIKAM,192.168.0.150/24,127.0.0.1,,,

WAP,Y,N,2012R2-WAP,1524,ID_AD_NETWORK-Contoso,192.168.0.254/24,192.168.0.150,ID_AD_NETWORK-FABRIKAM,10.0.0.10/24,

ADFS,Y,Y,2012R2-ADFS,1524,ID_AD_NETWORK-Contoso,192.168.0.160/24,192.168.0.150,,,

WF,Y,Y,2012R2-WF,1524,ID_AD_NETWORK-Contoso,192.168.0.170/24,192.168.0.150,,,

client1,N,Y,client1,1524,ID_AD_NETWORK-Contoso,192.168.0.120/24,192.168.0.150,,,

client2,N,N,client2,1524,ID_AD_NETWORK-FABRIKAM,10.0.0.20/24,,,,

Here is the definition of the csv structure:

  • machine – key value to identify the row
  • server – is the machine a server: Y = yes, N = no
  • DJ – is the machine domain joined: Y = yes, N = no
  • name – the VM name
  • memory – amount of memory in MB
  • network1 – name of the network to use for the VM's first network adapter
  • ip1 – IP address to use for the VM's first network adapter
  • dns1 – DNS address to use for the VM's first network adapter
  • network2 – name of the network to use for the VM's second network adapter (optional)
  • ip2 – IP address to use for the VM's second network adapter (optional)
  • dns2 – DNS address to use for the VM's second network adapter (optional)

Script Variables

In the script there are also variables that are used for:

  • Location of the ISO files
  • Location to store the base disks
  • Location to store the VHDs that will be created
  • Domain
  • Domain admin name and password
  • Local admin name and password for the non-domain joined machines.
$serverISOpath="E:\isos\Windows_Server_2012_R2-Datacenter_Edition_EN-US-X64.ISO"
$serverInstallImage="Windows Server 2012 R2 SERVERDATACENTER"
$clientInstallImage="Windows 8.1 Enterprise"
$clientISOpath="E:\isos\Windows_8.1_Enterprise_EN-US_x64.ISO"
$serverDiffDiskpath="E:\vhdx\serverbase.vhdx"
$clientDiffDiskpath="E:\vhdx\clientbase.vhdx"
$vmpath="E:\vhdx"
#VM variables
$domain="contoso.com"
$domainpassword="pass@word1"
$domainadmin="administrator"
$localadminpassword="pass@word1"
$localadmin="administrator"
$contosogateway="192.168.0.1"
$fabrikamgateway="10.0.0.1"
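Because the base disks stay clean, each VM's disk can be created as a child differencing disk. Here is a sketch of how that might look using the variables above; the exact calls in the shipped script may differ:

# Hypothetical example: create a differencing disk for the ADFS VM
# from the clean server base disk defined in $serverDiffDiskpath
New-VHD -Path (Join-Path $vmpath "2012R2-ADFS.vhdx") `
        -ParentPath $serverDiffDiskpath -Differencing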

Configuring the Environment

The script configureEnvironment.ps1 will:

· Enable CredSSP on all domain joined Servers

· Configure AD FS, Work Folders and WAP

· Configure the two Windows 8.1 clients (one domain joined and one non-domain joined)

You must run this script from a PowerShell window with Administrator privileges or it will not work.

When the script is finished, you will have an environment that looks like the one below.

image

Configuring the Script
Servers

The script obtains the list of servers to configure from a csv file called servers.txt.

Here is what the file looks like.

server,name,ip,ip2

WAP,2012R2-WAP.contoso.com,192.168.0.254,10.0.0.10

ADFS,2012R2-ADFS.contoso.com,192.168.0.160

WF,2012R2-WF.contoso.com,192.168.0.170

AD,2013R2-DC.contoso.com,192.168.0.150

Here is the definition of the csv structure:

server: Key value to identify the row; do not change
name: FQDN of the machine
ip: IP address of the VM's first network adapter
ip2: IP address of the VM's second network adapter (optional)

Clients

The script obtains the list of clients to configure from a csv file called clients.txt.

Here is what the file looks like.

client,name,ip

domainjoined,client1,192.168.0.120

nondomainjoined,client2,10.0.0.20

Here is the definition of the csv structure:

client: Key value to identify the row; do not change
name: BIOS name of the machine
ip: IP address of the VM's first network adapter

Script Variables

In the script there are also variables that are used for:

  • AD FS Display Name
  • AD FS Service Name
  • AD FS certificate Subject name
  • Work Folders certificate Subject name
  • Address to use for enterpriseregistration
  • Name of Share to create on Work Folders
  • Path of share to create on the Work Folders machine
  • Name of Group to add to Work Folders Share
  • Relying Party Trust name for Work Folders
  • Domain
  • Password to use when exporting and importing certs
  • User name and passwords for:
    • Web Application Proxy
    • Host Machine
    • Contoso admin
    • Non-domain joined client machine
$ADFSdisplayName="Contoso Corporation"
$ADFSService="Contoso\ADFSService"
$ADFSCertificateSubject="blueadfs.contoso.com"
$WFCertificateSubject="workfolders.contoso.com"
$EnterpriseRegistrationAddress="enterpriseregistration.contoso.com"
$WFShareName="TestShare"
$WFSharePath="c:\TestShare"
$WFShareGroup="Contoso\Domain Users"
$RelyingPartyTrustWFDisplayName="WorkFolders"
$domain="contoso.com"
$CertPassword="pass@word1" | ConvertTo-SecureString -AsPlainText -Force
$SecurePassword="pass@word1" | ConvertTo-SecureString -AsPlainText -Force
$credential=New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "contoso\administrator",$SecurePassword
$WAPPassword="pass@word1" | ConvertTo-SecureString -AsPlainText -Force
$WAPcredential=New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "administrator",$WAPPassword
$hostpassword="pass@word1" | ConvertTo-SecureString -AsPlainText -Force
$hostcredential=New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "hostserver\mbutler",$hostpassword
$ndjpassword="pass@word1" | ConvertTo-SecureString -AsPlainText -Force
$ndjcredential=New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "administrator",$ndjpassword

Setting up AD FS

The setupadfs function will:

  • Create the CNAME entries (blueadfs.contoso.com and enterpriseregistration.contoso.com) in the DC via remote PowerShell
  • Install AD FS via remote PowerShell
  • Create and install a self-signed SAN certificate for AD FS
  • Create the AD FS Managed Service account
  • Install the AD FS farm
  • Set up the AD FS Relying Party Trust for Work Folders
  • Grant the AD FS Managed Service account permission to read the certificate's private key
  • Enable device registration
  • Export the created certificate to the host machine

The SAN values for the AD FS certificate are read from the csv file named adfssans.txt. The SAN values must include the AD FS service name and an entry for enterpriseregistration:

<ADFS Service name>.<domain>

enterpriseregistration.<domain>

The values in the shipped csv are

blueadfs.contoso.com

enterpriseregistration.contoso.com

2012R2-ADFS.contoso.com
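Here is a condensed sketch of the certificate and farm-installation steps. It assumes one SAN entry per line in adfssans.txt and a pre-created group Managed Service Account, so treat it as an outline rather than the shipped function:

# Read the SAN list and create a self-signed SAN certificate for AD FS
$sans = Get-Content .\adfssans.txt
$cert = New-SelfSignedCertificate -DnsName $sans -CertStoreLocation Cert:\LocalMachine\My

# Install the farm against that certificate, using the gMSA from $ADFSService
Install-AdfsFarm -CertificateThumbprint $cert.Thumbprint `
    -FederationServiceName $ADFSCertificateSubject `
    -FederationServiceDisplayName $ADFSdisplayName `
    -GroupServiceAccountIdentifier "$ADFSService`$"

# Enable device registration for Workplace Join
Initialize-ADDeviceRegistration -ServiceAccountName $ADFSService
Enable-AdfsDeviceRegistration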

Setting up Work Folders

The setupWF function will:

  • Create the CNAME entry (workfolders.contoso.com) in the DC via remote PowerShell
  • Install the AD FS certificate into the Work Folders VM
  • Create and install a self-signed SAN certificate for Work Folders
  • Install Work Folders
  • Create a Sync Share for the group defined in $WFShareGroup
  • Set the Sync Share policies to require encryption and password auto-lock
  • Enable SMB access on the Sync Share
  • Bind the created certificate to port 443
  • Set up the Work Folders AD FS URL
  • Export the created Work Folders certificate to the host machine

The SAN values for the Work Folders certificate are read from the csv file named wfsans.txt. The SAN values must include the Work Folders address: workfolders.<domain>

The values in the shipped csv are

workfolders.contoso.com

2012R2-wf.contoso.com
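In outline, the Work Folders portion might look like the following sketch, reusing the script variables above; the cmdlets are from the Windows Server 2012 R2 SyncShare module, and the parameter values are illustrative:

# Install the Work Folders role service
Add-WindowsFeature FS-SyncShareService

# Create the sync share with the required device policies
New-SyncShare -Name $WFShareName -Path $WFSharePath -User $WFShareGroup `
    -RequireEncryption $true -RequirePasswordAutoLock $true

# Point Work Folders at the AD FS server for authentication
Set-SyncServerSetting -ADFSUrl "https://$ADFSCertificateSubject"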

Setting up Web Application Proxy

The setupWAP function will:

  • Install the AD FS and Work Folders certificates into the WAP VM
  • Install the Web Application Proxy Role configured with AD FS service name and AD FS certificate
  • Add a Web Application Proxy for Work Folders using the Work Folders certificate

The values for the Web Application Proxy settings are obtained from the csv file named webapps.txt. The structure of the file looks like this:

App,ExternalURL,BackEndServerURL,ADFSRelyingPartyName,subject

WorkFolders,https://workfolders.contoso.com,https://workfolders.contoso.com,WorkFolders,workfolders.contoso.com

Enterprise Registration,https://enterpriseregistration.contoso.com/EnrollmentServer/,https://enterpriseregistration.contoso.com/EnrollmentServer/,pass-through,blueadfs.contoso.com

Here is the definition of the csv structure:

App: Value to use for the Web App name
ExternalURL: URL to use for the external address
BackEndServerURL: URL to use for the internal address
ADFSRelyingPartyName: Name of the AD FS relying party
subject: The subject of the certificate to be used for the Web Application
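Sketched with the cmdlets from the Web Application Proxy module, the configuration and per-row publishing steps might look like this; the thumbprint variables are placeholders, not names from the shipped script:

# One-time: configure WAP against the AD FS farm
Install-WebApplicationProxy -FederationServiceName "blueadfs.contoso.com" `
    -CertificateThumbprint $adfsCertThumbprint `
    -FederationServiceTrustCredential $credential

# Per row in webapps.txt: publish the application
Add-WebApplicationProxyApplication -Name "WorkFolders" `
    -ExternalPreAuthentication ADFS `
    -ExternalUrl "https://workfolders.contoso.com" `
    -BackendServerUrl "https://workfolders.contoso.com" `
    -ADFSRelyingPartyName "WorkFolders" `
    -ExternalCertificateThumbprint $wfCertThumbprint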

Setting up Clients

The setupWorkstation function will:

· Install the AD FS and Work Folders certificates into the workstation VM

· Disable the check for server certificate revocation. This is only needed for Workplace Join when using self-signed certificates.

If the workstation is not domain joined, the function will also update the hosts file on the workstation with entries for:

workfolders.<domain>

<ADFS service>.<domain>

enterpriseregistration.<domain>

and point them at the IP address of the WAP server.

For our example it will put in the following values:

10.0.0.10 workfolders.contoso.com

10.0.0.10 blueadfs.contoso.com

10.0.0.10 enterpriseregistration.contoso.com
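A sketch of that hosts-file update; the WAP address and names below come straight from this example:

$hostsFile = "$env:windir\System32\drivers\etc\hosts"
Add-Content -Path $hostsFile -Value "10.0.0.10 workfolders.contoso.com"
Add-Content -Path $hostsFile -Value "10.0.0.10 blueadfs.contoso.com"
Add-Content -Path $hostsFile -Value "10.0.0.10 enterpriseregistration.contoso.com"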

Conclusion

I hope this blog post helps you to get started with AD FS, Work Folders and Web Application Proxy in your test labs and gets you a step closer to a production deployment.

Windows Server Work Folders Resources List


Hi folks,

With the introduction of Work Folders last year, we wanted to make sure we also offer great documentation and supporting content that IT Pros can use to successfully install and manage Work Folders in their enterprise environments.  This effort eventually manifested in a series of blog posts, TechNet articles and videos.

To make sure all of these resources are discoverable and accessible from a single point, we went ahead and created the list of links below.

 

Introduction and Getting Started

-          Introducing Work Folders on Windows Server 2012 R2

-          Work Folders Overview on TechNet

-          Designing a Work Folders Implementation on TechNet

-          Deploying Work Folders on TechNet

-          Work folders FAQ (Targeted for Work Folders end users)

-          Work Folders Q&A

-          Work Folders Powershell Cmdlets

-          Work Folders Test Lab Deployment

-          Windows Storage Server 2012 R2 — Work Folders

Advanced Work Folders Deployment and Management

-          Work Folders interoperability with other file server technologies

-          Performance Considerations for Work Folders Deployments

-          Windows Server 2012 R2 – Resolving Port Conflict with IIS Websites and Work Folders

-          A new user attribute for Work Folders server Url

-          Work Folders Certificate Management

-          Work Folders on Clusters

-          Monitoring Windows Server 2012 R2 Work Folders Deployments.

-          Deploying Work Folders with AD FS and Web Application Proxy (WAP)

-          Deploying Windows Server 2012 R2 Work Folders in a Virtual Machine in Windows Azure

Videos

-          Windows Server Work Folders Overview: My Corporate Data on All of My Devices

-          Windows Server Work Folders – a Deep Dive into the New Windows Server Data Sync Solution

-          Work Folders on Channel 9

We hope you find this list of Work Folders resources useful. Please let us know below if you have any comments or questions.

Thanks
Roiy Zysman.


Work Folders for Windows 7


Overview

When we introduced Work Folders in Windows Server 2012 R2, we included support for PCs running Windows 8.1 and Windows RT 8.1. However, we knew that we needed to continue releasing support for other clients, and the number one request was to support the large number of enterprise deployments of Windows 7.

We heard the feedback, and we are excited to announce that we have just released Work Folders for Windows 7 on the Download Center! There are two packages, one for 32-bit (x86) and one for 64-bit (x64) editions of Windows 7.

This blog post will focus specifically on the differences between Work Folders on Windows 7 and Windows 8.1 as well as deployment considerations. You can find more general information on Work Folders here in the Work Folders Overview

What’s the difference between the Windows 7 and Windows 8.1 releases?

Windows 7 is still our most widely deployed operating system, especially in the enterprise, which is the group of customers who have been most interested in Work Folders support on Windows 7. So we created this release focusing on our enterprise customers.

Supported Windows Editions

Given the enterprise focus, the Work Folders for Windows 7 package can be installed only on PCs running the following editions of Windows 7:

  • Windows 7 Professional
  • Windows 7 Enterprise
  • Windows 7 Ultimate

No other editions or operating systems are supported by this package. The package also requires Windows 7 Service Pack 1.

For home users with Windows 7 PCs, we recommend upgrading to Windows 8.1.

Setup

To set up Work Folders on Windows 7, the client PC must be joined to your organization’s domain; otherwise, Setup will fail with an error.

Policy enforcement

Work Folders provides two device policies that administrators can control. The policies are enforced on Windows 8.1 clients before data sync is allowed:

  • Encrypt Work Folders: all the Work Folders data on a user’s PC will be encrypted using the Windows 8.1 Selective Wipe technology
  • Automatically lock screen, and require a password: applies the following three policies to a user’s PC:
    • Password minimum length is 6 characters
    • Device idle auto-lock is 15 minutes or less
    • Logon retries are set to 10 or less

The policy settings are not configurable, and they are enforced on devices running Windows 8.1 through the EAS engine.

Work Folders on Windows 7 can’t enforce the lock screen and password policy because the operating system lacks the EAS engine. This is easily mitigated by using Group Policy to enforce password policies on domain-joined PCs. Since Work Folders on Windows 7 is supported only on domain-joined PCs, you (as the admin) still have control over the password policies of all your Work Folders users.

You should continue using Group Policy to manage password policies for all the domain-joined PCs. For PCs and devices that aren’t joined to a domain (Windows 8.1 devices only), Work Folders will enforce its password policy as set on each sync share.

To do so, you’ll need to run the Set-SyncShare cmdlet to add the domain in which all of your Windows 7 PC computer accounts are located to a domain-exclusion list. We describe how to do that in the Server configurations section below.

If you use the Work Folders password policy but do not configure the excluded domain list on the server, the user will see an error during Work Folders setup.

Encryption is different on Windows 7, as the Windows 8.1 Work Folders encryption mechanism (selective wipe) is not available. On Windows 7, the files in Work Folders are encrypted using EFS, which does not have remote wipe capability.

Status notification area of the taskbar

On Windows 8.1 clients, users can view the sync status in the File Explorer status bar, and are notified of sync issues through the Action Center. On Windows 7, Work Folders can’t integrate into Windows Explorer and the Action Center, so we added a Work Folders icon to the notification area of the taskbar.

The Work Folders taskbar icon shows sync status and provides a convenient menu option to open Work Folders in Windows Explorer. By default, the icon only shows notifications and is not present on the taskbar. A user can choose to always show the icon by opening Control Panel, searching for “notification”, and then using the Notification Area Icons Control Panel item.

Server configurations

As mentioned above in the Policy enforcement section, if the administrator wants to enforce Work Folders password policies on Windows 7 PCs, the computer accounts must be in an excluded domain list. An administrator can configure the excluded domain list by using the following cmdlet:

Set-SyncShare <share name> -PasswordAutolockExcludeDomain <domain list>

For example, you can use the following cmdlet to exempt all computer accounts (this doesn’t apply to user accounts) of the contoso.com domain from the Work Folders password policy for the FinShare sync share:

Set-SyncShare FinShare -PasswordAutolockExcludeDomain “Contoso.com”

In this example, PCs in the Contoso.com domain (running Windows 7 or Windows 8.1) receive password policies from Group Policy – not from Work Folders because the domain is excluded from the Work Folders PasswordAutolock policy. Windows 8.1 PCs that aren’t joined to the domain receive Work Folders password policies, if set on the sync share – not from Group Policy because Group Policy applies only to domain-joined PCs.

Each user can be given permission to sync with a single sync share, though they can have a mix of Windows 8.1 and Windows 7 PCs that sync with this share.

Upgrade or migration

When it is the time to upgrade or migrate a Windows 7 PC to a newer version, the expected behavior is listed below:

  • Windows 7 -> Windows 8: Sync will stop, and the Work Folders Control Panel item will show “Can’t use Work Folders on this version of Windows” since there is no Work Folders support on Windows 8. Ideally, the user would install the Windows 8.1 update, and then set up Work Folders again.
  • Windows 7 -> Windows 8 -> Windows 8.1: The user needs to set up Work Folders again. If data is migrated, see the Known issues section of this document.
  • Windows 7 -> Windows 8.1: The user needs to set up Work Folders again. If data is migrated, see the Known issues section of this document.
  • Windows 7 -> Windows 8.1 using the User State Migration Toolkit (USMT), the expected user experience will be:
    • The Work Folders partnership configuration will be migrated.
    • Work Folders data will not be migrated (i.e., no files that have yet to be synced are migrated to the new client).
    • Work Folders is shown in File Explorer under Favorites, but isn’t listed under “This PC” as is the case when setting up the Work Folders partnership on Windows 8.1.
    • The Work Folders configuration is migrated, and files are synced from the sync server after the user signs on.

Known issues

  • In the case where the user upgrades from Windows 7 to Windows 8.1 and the data is migrated without the partnership information, if the local folder storing Work Folders (by default, C:\Users\<username>\Work Folders) was encrypted on Windows 7, the same path can’t be used again on Windows 8.1. This is because of the different encryption mechanisms used on Windows 7 and Windows 8.1. There are two workarounds:
    • The user can open File Explorer in Windows 8.1, right-click the folder storing Work Folders and then click Properties. Click Advanced, and then clear the “Encrypt contents to secure data” checkbox. Click OK, and then click “Apply changes to this folder, subfolders, and files”.
    • The user can choose a different path for Work Folders, and optionally delete the old folder. The user must make sure all the content has synced to the server before removing the old Work Folders path.
  • If your environment requires Active Directory Federation Services (AD FS) and uses form-based authentication, the client PCs must use Internet Explorer 9, 10 or 11. There is an issue with Internet Explorer 8, where the user can’t authenticate against the server.
  • If your environment uses IPSec, see Knowledge Base article 2665206. Without this hotfix, Work Folders client might experience slow sync performance in some environments that use IPSec.
  • If you are configuring Work Folders by using Group Policy, the Work Folders Group Policy template is included with Windows Server 2012 R2. Although the description text indicates that it only applies to Windows 8.1 PCs, the policy settings can also configure Windows 7 PCs that have Work Folders installed.
  • On Windows 7, the Work Folders shortcut is added to the user’s Favorites folder in Windows Explorer. If the Favorites folder is redirected to a network share, the shortcut for Work Folders will not be present. This is because the Work Folders path is local to a client machine, so the shortcut may not have any meaning on other client machines when presented through a network share.
  • If the user migrates from Windows 7 to Windows 8.1 using USMT, and chooses to migrate the settings (which includes the user partnership), the Work Folders data will not be migrated. After the user logs on to the new machine, the partnership will be established, and data will be synced down to the machine. The shell namespace under “This PC” for Work Folders is not created. To get the shell namespace under “This PC” for Work Folders, you can simply click “Stop Work Folders” in the Work Folders Control Panel, and then set up Work Folders again. This will allow the namespace to be created as part of the partnership creation.
  • If the client has a localized (non-English) version of Work Folders installed, after migration the Work Folders shortcut under the Favorites folder will be shown in English.

So that’s our Windows 7 app for Work Folders. Let us know what you think, and we’ll keep working on clients for other popular platforms and update when they’re ready.

Thanks,

Jian Yan and the Windows 7 Work Folders team


Set FSRM Storage Report Limits using PowerShell


Windows Server 2012:  FSRM Storage Report Limits

Configuring Report Limit Settings Using PowerShell, by Stan Symms

Hello there,

My name is Stan Symms, and I'm a security architect in the ISRM SAFE-T team at Microsoft, here with some tips for those who need to build file storage reports using FSRM and have run into report limits.

Windows Server’s File Server Resource Manager (FSRM) comes with a set of storage reporting functions that allow a user to generate a variety of valuable reports about file shares.  Default values for report limits (e.g., max files per report) are preconfigured to sample-size values only.  Unfortunately, prior to Windows Server 2012 R2, these report limits weren’t accessible from the FSRM UI. With no UI to manage them, users can be very confused by the output of storage reports when they aren’t aware that reporting limits exist.  Worse still, it has been difficult to find documentation explaining what these limits are and how to reset them yourself.  This article aims to improve that situation.

The goal of this post is to provide you with the information you need to:

  1. Be aware of the various default reporting limits within the FSRM reporting system as of Windows Server 2012.

  2. Learn how to configure those limits using Windows PowerShell so that you can get the information you need.

What’s not covered:   This post will not

  1. Teach you how to install or use FSRM Reporting

  2. Teach you how to write PowerShell scripts

With that, let’s get started.  The next section covers what you need to have done before using these instructions, along with other prerequisites such as required permissions and settings.


 

PREPARATIONS AND ASSUMPTIONS

At this point I am assuming you have installed the File Server Resource Manager role on your Windows Server 2012 instance and that all currently published roll-up updates have been installed.

Additional assumptions before I begin include:

  1. You are fully versed in the operation of the FSRM role in Windows Server 2012

  2. You understand how to execute PowerShell commands

  3. You are logged into the server with Local Administrator privileges


FSRM REPORTING LIMITS DEFINED

FSRM uses a set of supported reporting limit constants to constrain report output, mitigating the performance impact of excessively large data sets (the full list appears in the parameter table below):

The user is free to set any one of these constants to match their report output needs.  

The means to do so is consistent for each and will be explained in the Step By Step section.

REPORTING LIMIT CONSTANT DEFAULT VALUES

Depending on your version of Windows Server, each of the report limit constants described above comes with a default hardcoded value.  

The current values will always be displayed in your PowerShell window when you set any constant to a new value.

NOTE:   Windows Server 2012 R2 allows configuration of these values within the Storage Report Task Properties dialog of FSRM, based on the report type selected, by clicking the “Edit Parameters” button.

STEP BY STEP INSTRUCTIONS

The steps to change the current value of an FSRM report limit constant are quite simple.

Once you have identified the constant you wish to change, use it in the following command, replacing the <> placeholders:

Set-FsrmSetting –<ConstantName> <NewValue> –PassThru

To demonstrate, I will use “MaxFilesPerPropertyValue” as an example and set the new value to 1 million.

The steps to do this are as follows:

1) Go to the START screen

2) Locate the PowerShell icon and right-click it.  On the bar at the bottom of the Start screen, choose “Run as administrator”.

3) When the PowerShell command window opens, type the following command:

   Set-FsrmSetting -ReportLimitMaxFilesPerPropertyValue 1000000 -PassThru

This will enable a report to contain up to 1 million files for each property value reported.

If you have access to this command with your current login, the command will execute and display a summary of the current FSRM parameter values.

If you do NOT have access to the command, you will see an “Access Denied” message.

NOTE:  If you get an error message indicating that the PowerShell command you are attempting to execute is not digitally signed or that running scripts is disabled, see the help for Set-ExecutionPolicy (Get-Help Set-ExecutionPolicy) for information on how to resolve the issue.
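For example, to relax the policy for the current session only (a sketch; pick the scope your security policy allows):

# Affects only the current PowerShell session
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope Process -Force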

PARAMETER NAMES CORRESPONDING TO EACH REPORT LIMIT CONSTANT

Notice the Parameter Name in the command is a VARIATION of the actual Constant Name, not the name itself.  

The constant name is defined in the FSRM API that is being called by the PowerShell cmdlet. 

Thus, instead of using “FsrmReportLimit_MaxFilesPerPropertyValue”, we used “ReportLimitMaxFilesPerPropertyValue”.  

Note: You MUST use the same convention for the other constants when specifying the associated Parameter Name in the command.  

I have included a table with the Parameter Name translations for your convenience below:

Reporting Limit Constant Name -> Parameter Name

FsrmReportLimit_MaxFiles -> ReportLimitMaxFile
FsrmReportLimit_MaxFileGroups -> ReportLimitMaxFileGroup
FsrmReportLimit_MaxOwners -> ReportLimitMaxOwner
FsrmReportLimit_MaxFilesPerFileGroup -> ReportLimitMaxFilesPerFileGroup
FsrmReportLimit_MaxFilesPerOwner -> ReportLimitMaxFilesPerOwner
FsrmReportLimit_MaxFilesPerDuplGroup -> ReportLimitMaxFilesPerDuplicateGroup
FsrmReportLimit_MaxDuplicateGroups -> ReportLimitMaxDuplicateGroup
FsrmReportLimit_MaxQuotas -> ReportLimitMaxQuota
FsrmReportLimit_MaxFileScreenEvents -> ReportLimitMaxFileScreenEvent
FsrmReportLimit_MaxPropertyValues -> ReportLimitMaxPropertyValue
FsrmReportLimit_MaxFilesPerPropertyValue -> ReportLimitMaxFilesPerPropertyValue

4) Replace the parameter name in the example command with the one you wish to change, provide a new value in place of mine, and you have it.

Issue the new command in the PowerShell command window, matching the structure:

    Set-FsrmSetting -<ConstantName> <NewValue> -PassThru

SUMMARY

Once you have executed a command successfully, you should see a new summary of active FSRM reporting constants, with your new value associated with the parameter you specified.   If you got an error message instead, it’s likely you misspelled part of the command or parameters, supplied an invalid value, or do not have the appropriate privileges to execute FSRM commands.
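To confirm your change took effect, you can list the active report limits at any time; for example:

# Show all current FSRM report limit values
Get-FsrmSetting | Format-List ReportLimit*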

For more information about FSRM PowerShell commands, visit this TechNet Link: 

http://technet.microsoft.com/en-us/library/jj900651.aspx

You can review the complete list of Parameters used in the “Set-FsrmSetting” PowerShell cmdlet at this TechNet link: 

http://technet.microsoft.com/en-us/library/jj900644.aspx  

Thanks for reading and enjoy your limitless reporting :)

Announcing the Data Classification Toolkit for Windows Server 2012 R2!


We are excited today to announce the release of the Data Classification Toolkit for Windows Server 2012 R2! The Data Classification Toolkit for Windows 2012 R2 is designed to help you to:

  • Identify, classify, and protect data on file servers in your private cloud.
  • Provide support for deployments of Windows Server 2012 R2, as well as for mixed deployments of Windows Server 2012 R2, Windows Server 2012, and Windows Server 2008 R2 SP1.
  • Easily configure default Central Access Policy across multiple servers.
  • Build and deploy policies to protect critical information in a cost-effective manner.

In this release, we have enabled support for Windows Server 2012 R2 (in addition to supporting Windows Server 2012 and Windows Server 2008 R2 SP1) and fixed a number of bugs:

  • The ‘Import’ and ‘Deploy’ pathways through the GUI have been updated to behave the same as Import-FileClassificationPackage: if a given baseline provides configuration for RMS tasks, these are used by default.
  • Downleveling between versions of Windows Server has been improved and expanded to take into account differences between Windows Server 2012 R2 and Windows Server 2012/2008 R2.
  • The toolkit’s SCOM query has been updated to automatically use the management pack for Windows Server 2012 R2 FSRM.
  • The provided query for querying the Microsoft Assessment and Planning toolkit has been updated to account for the change in schema in version 8/9.

Streamline your data compliance efforts. The toolkit provides support for configuring data compliance on file servers running Windows Server 2012 R2, Windows Server 2012, and Windows Server 2008 R2 SP1 to help automate the file classification process, and make file management more efficient in your organization.

Simplify your central access policy configuration experience. The latest version of the toolkit allows you to provision and standardize central access policy across a forest and apply default access policies on your private cloud file servers. The toolkit also provides tools to provision user and device claim values based on Active Directory Domain Services (AD DS) resources to help simplify configuring Dynamic Access Control. You can also easily track and report existing Central Access Policy on file shares.

Manage File Classification like a Pro. The toolkit offers a GUI and Windows PowerShell cmdlets to help you configure, update, and monitor your deployments of Windows Server’s File Classification Infrastructure the way you want!

Use the Data Classification Toolkit to help your organization successfully plan and maintain data classification programs in these critical areas:

  • Identifying applicable IT GRC sensitive documents.
  • Defining the corresponding classification and protection policies.
  • Encrypting sensitive documents to protect them from unauthorized breaches.
  • Preserving evidence that demonstrates the implementation of effective controls.

The End is Nigh (for FRS)


Hi folks, Ned here again. At TechEd, I officially announced the end of the File Replication Service. For those that missed the event, I repeat:

We are removing FRS from Windows Server

Today I’ll talk about what this means and how to get ready. We want this to be as easy as possible and I welcome any conversations that help you move forward with migrating to DFSR for SYSVOL replication.

image
I’m ready…

Deprecation? Speak plainly!

FRS and DFSR both asynchronously replicate content sets of file data, and are included with Windows Server at no extra cost. Microsoft introduced the File Replication Service (FRS) in Windows 2000 Server. We later replaced it with Distributed File System Replication (DFSR) in Windows Server 2003 R2. Starting in Windows Server 2008, DFSR gained the ability to replicate SYSVOL on domain controllers and became the preferred engine.

With Windows Server 2008 R2, we deprecated FRS and reduced its replication capability to SYSVOL alone. You got FRS only if you created a new domain with a Windows Server 2003 or Windows 2000 domain functional level.

Starting in Windows Server 2012, we also made the default domain functional level for new domains Windows Server 2012, so that you never set up FRS in the first place. In Windows Server 2012 R2, you cannot even select a functional level that uses FRS anymore when creating a domain through Server Manager or Windows PowerShell.

image
Why would you want a 2003 DFL? You miss so much other goodness!

Furthermore, TechNet states our FRS position for each OS; we published the first of those articles five years ago.

Deprecation simply means a product has reached obsolescence, often with a superseding feature. You should stop relying on it to exist in the future, stop expecting functionality changes, and stop expecting non-security bug fixes. After deprecation and enough warning time - at least one full OS release - we reserve the right to remove the feature.

Why bother?

Long before I became a Program Manager, I wrote an explanation of why FRS was inadequate and why you should shift to its replacement, DFSR. Those justifications are just as true today. Nevertheless, the biggest reason is the implicit one: why do you think Microsoft spent years and money writing a no-extra-charge replacement to its predecessor, unless the predecessor was fundamentally flawed?

We have finally reached the phase where continuing to “support” FRS is impossible; it’s a bit of a stretch to even say we’re supporting it now, as you cannot get bug fixes for it. DFSR is vastly more capable, reliable, and scalable. Most importantly, there is only one OS that requires FRS - and that OS is going away in 2015.


Time marches on

With the end of support for Windows Server 2003 in July 2015, there will no longer be any technical requirement to keep FRS around. All supported OSes will happily replicate SYSVOL with DFSR. Thus ends the legacy. Whatever server operating system we ship after July 2015 may no longer include the FRS binaries. You will not be able to promote that OS to be a domain controller in a domain that is still running FRS for SYSVOL, thereby blocking upgrades until you migrate to DFSR.

This is not an assurance that we are removing FRS from the very next version of Windows Server, only that some OS we release after July 2015 will not have FRS. Just plan for the shorter timeline and you cannot go wrong: migrate to DFSR before the end of Windows Server 2003 support.

So now what?

Fortunately, moving from FRS to DFSR is a simple process that most customers perform in minutes. This migration procedure has been around for six years and after all that time, you only need to review one KB article. Our process is solid and tested, with no migration bugs found in DFSR itself after Windows Server 2008.

In a separate blog post, I’ve outlined a DFSR “streamlined migration” that should remove some of the angst you might have from staring at a 52-page mega-super-complete migration guide. It’s possible to perform the entire migration with a single command - really!
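For context, SYSVOL migration is driven by the dfsrmig.exe tool, which steps the domain through its migration states. In rough outline:

# Check where the domain is today (state 0 = Start, FRS replicating SYSVOL)
dfsrmig /getglobalstate

# Move through Prepared (1) and Redirected (2) to Eliminated (3);
# setting state 3 directly drives the whole migration with one command
dfsrmig /setglobalstate 3

# Poll until every DC reports that it has reached the global state
dfsrmig /getmigrationstate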

Thousands of customers have migrated already and thousands more have deployed new domains with DFSR. Get your team onboard and knock this out - it won’t take long.

image
Together we can double punch legacy FRS!

Until next time,

- Ned “voodoo doll” Pyle
