
Streamlined Migration of FRS to DFSR SYSVOL


Hi folks, Ned here again. When telling people about the coming removal of FRS from Windows Server, the main response is usually:

“Sure, I have occasional problems with FRS and know that I should upgrade to DFSR, but who has the time for a huge migration? That guide is intimidating!”

I hear you on the guide; it is 52 pages of verbosity and completeness. That length implies that migration normally takes weeks or months, which is untrue. With smaller domains, it should take a few minutes, and with larger ones, a few hours or days at most. Migrations are staged, with no interruption to users, and can be rolled back at any point before completion.

Today I’ll talk about how to migrate using one of three streamlined methodologies:

  • Quick Migration (where you don’t know if your domain controllers are healthy, and you want a rollback option)
  • Express Migration (where you are sure that your domain controllers are healthy, and you want a rollback option)
  • Hyper Migration (where you want to migrate with the minimum steps and are confident of domain controller health)

Before I start: we still recommend that you review the full migration guide to understand all the requirements and how DFSR SYSVOL migration works. It might add a few hours of reading, but it’s all in the name of prudence and understanding. This blog post doesn’t really explain anything; it just punches you through to the other side.

Still here? Ok, let’s get started.

Assumptions

1. You already know your way around Active Directory and SYSVOL. If you are new to these technologies, stop reading this blog post and use the DFSR SYSVOL migration guide for your migration instead. It’s much more comprehensive.

2. You already deployed at least Windows Server 2008 to all domain controllers in the domain and there are no remaining Windows Server 2003 or Windows Server 2003 R2 domain controllers.

3. If using Windows Server 2008 or Windows Server 2008 R2 domain controllers, you have deployed the latest version of Robocopy.exe (as of this writing, KB2644882 for WS2008 and KB2646535 for WS2008 R2). This is an optional but recommended step; it ensures efficient preseeding of data during the migration, but migration still works without it. If using Windows Server 2012 or later operating systems, there’s no need to update Robocopy. For more information, review KB2567421.

4. You already raised the domain functional level to at least Windows Server 2008, using Domain.msc or the Set-ADDomainMode Windows PowerShell cmdlet.

image

image

Some handy advice before you begin

For faster migration performance, be aware of how to make AD performance faster - Repadmin.exe /syncall and change notification are your friends, but like the robocopy patches above, are optional. DFSR migration only goes as fast as AD replication. For instance, this command will force push replication of all partitions while ignoring the schedules (this is a rather sledgehammer example):

Repadmin /syncall /force /APed

Furthermore, DFSR SYSVOL only replicates when AD has an open schedule (DFSR does not know about change notification). Ensure you have configured AD site links for continuous replication, if you want DFSR to replicate as fast as change notification.
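If you also want inter-site AD replication itself to approach change-notification speed, notification can be enabled per site link by setting bit 1 of the site link’s options attribute. Here is a minimal sketch, assuming the default DEFAULTIPSITELINK name (adjust for your own site links, and test in a lab first):

$cfg  = (Get-ADRootDSE).configurationNamingContext
$link = Get-ADObject -LDAPFilter "(&(objectClass=siteLink)(name=DEFAULTIPSITELINK))" -SearchBase $cfg -Properties options
$opt  = if ($link.options) { $link.options } else { 0 }
# Bit 1 enables change notification (USE_NOTIFY) on this site link
Set-ADObject -Identity $link -Replace @{ options = ($opt -bor 1) }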

Finally, DFSR reads and writes its new migration state every 5 minutes on each DC. You can speed this up by using Dfsrdiag.exe pollad or the new Update-DfsrConfigurationFromAD Windows PowerShell cmdlet (if all your DCs are running Windows Server 2012 R2). For the latter, a slick way to update every DC in the domain at once is to combine with the AD cmdlets (this sample is a single wrapped line):

Get-ADDomainController -Server corp.contoso.com -Filter * | % { Update-DfsrConfigurationFromAD -ComputerName $_.name -Verbose }

Once you start the migration, running repadmin forced syncs and dfsrdiag forced polls after each migration step will greatly speed up the processing. Or you can just wait and let things happen naturally - that’s fine too.
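For example, a quick way to nudge every DC after each dfsrmig step might look like the following sketch (the dfsrdiag /member switch is assumed to be available on your DCs):

# Force AD replication everywhere, then have each DC poll AD for the new migration state
Repadmin /syncall /force /APed
Get-ADDomainController -Filter * | ForEach-Object { dfsrdiag pollad /member:$($_.HostName) }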

Quick Migration

In this case, the health of AD and SYSVOL on all domain controllers is not known. For instance, you are not using System Center Operations Manager to monitor your domain controllers for AD replication, SYSVOL availability, and free disk space.

The goal of the Quick Migration scenario is to test the conditions of the domain controllers, then migrate SYSVOL to DFSR, with the ability to roll back during the process.

1. Ensure free disk space - The DFSR migration process copies the contents of SYSVOL to a parallel folder called SYSVOL_DFSR, and then shares out that copy during the Redirected phase. This means that on the volume where your SYSVOL exists on domain controllers - typically the C: drive - you need at least as much free space as the size of the current SYSVOL folder, plus a 10% fudge factor. For instance, if your current SYSVOL folder is 2 GB (an unusually large SYSVOL), you should ensure that at least 2.2 GB of disk space is free on the same volume. Most SYSVOL folders are only a few hundred MB or less.

An easy way to determine the free disk space on a bunch of remote DCs is with Psinfo.exe -d. Look here for more info. The WMI Win32_LogicalDisk class is also a possibility, such as through Windows PowerShell:

Get-WmiObject -Class win32_logicaldisk -ComputerName srv01,srv02,srv03 | FT systemname,deviceid,freespace -auto

You can get fancier here, first looking on each computer to decide which volume hosts SYSVOL and comparing sizes and such, but this is the quick migration guide!
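If you do want to get a little fancier, a hedged sketch along these lines reads the SYSVOL share path from each DC and reports free space on the volume that hosts it (the DC names are placeholders):

$dcs = 'DC01','DC02','DC03'   # replace with your domain controller names
foreach ($dc in $dcs) {
    # The SYSVOL share path tells us which volume hosts SYSVOL on that DC
    $share = Get-WmiObject -Class Win32_Share -ComputerName $dc -Filter "Name='SYSVOL'"
    $drive = $share.Path.Substring(0,2)
    $disk  = Get-WmiObject -Class Win32_LogicalDisk -ComputerName $dc -Filter "DeviceID='$drive'"
    "{0}: SYSVOL on {1}, {2:N1} GB free" -f $dc, $drive, ($disk.FreeSpace / 1GB)
}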

Note: you can greatly decrease the size of your SYSVOL by preventing legacy ADM replication using KB813338. A hundred group policies with 50 registry settings apiece is unlikely to exceed 5 MB total when the policies are created using Windows Vista or later. The ADMX central store and other alternatives are available for servicing.

2. Ensure correct security policy - You must ensure that the built-in Administrators group has the “Manage Auditing and Security Log” user right on all your domain controllers. This is on by default, so if it’s not set, someone yanked it. Microsoft does not support removing that, no matter what you may have read elsewhere. To validate, examine the group policy applied to your domain controllers by using Gpresult.exe. For more info, examine KB2567421.

image
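If you prefer a command-line spot check over reading a Gpresult report, one hedged approach is to export the local security policy on a DC and look for SeSecurityPrivilege (the internal name of “Manage Auditing and Security Log”); S-1-5-32-544 is the built-in Administrators group. This assumes C:\temp already exists:

secedit /export /cfg C:\temp\rights.inf /areas USER_RIGHTS
findstr SeSecurityPrivilege C:\temp\rights.inf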

3. Ensure AD replication is working - The DFSR migration depends entirely on each domain controller receiving and sending state changes via AD replication. There are many ways to examine AD health, but the easiest is probably the Active Directory Replication Status Tool. Install the utility and scan your domain for errors; if there are problems, fix them and then continue. Don’t attempt a DFSR migration unless all your domain controllers are replicating AD correctly.

image

Ideally, when you set “Errors Only” mode on, it looks like this:

image

image

4. Ensure SYSVOL is shared - DFSR migration naturally depends on SYSVOL itself; it must already be shared and the DC must be advertising and available, or migration at each stage will never complete. The simplest way to check all your domain controllers is with the Dcdiag.exe command using two specific tests:

Dcdiag /e /test:sysvolcheck /test:advertising

Don’t attempt a DFSR migration unless all your domain controllers are passing the connectivity, SYSVOL, and advertising tests with no errors.

They should look like this:

image

5. Migrate to Prepared State - Now you will migrate to the Prepared state, where both FRS and DFSR are replicating their own individual copies of SYSVOL, but the FRS copy mounts the SYSVOL and Netlogon shares. On the PDC Emulator domain controller, run (as an elevated domain admin):

Dfsrmig /setglobalstate 1

Now you wait for this AD value on the PDCE to converge on all domain controllers, then for DFSR to switch to Prepared state on each domain controller and update AD, and finally for that value to replicate back to the PDCE. Use the following command to see progress:

Dfsrmig /getmigrationstate

When all DCs are ready, the output will look like this:

image

As I mentioned in the advice section, you can speed this processing up with faster AD replication and DFSR polling.

6. Migrate to Redirected State - Now you will migrate to the Redirected state, where both FRS and DFSR are replicating their own individual copies of SYSVOL, but the DFSR copy mounts the SYSVOL and Netlogon shares. On the PDC Emulator domain controller, run (as an elevated domain admin):

Dfsrmig /setglobalstate 2

Now you wait for this AD value on the PDCE to converge on all domain controllers, then for DFSR to switch to Redirected state on each domain controller and update AD, and finally for that value to replicate back to the PDCE. Use the following command to see progress:

Dfsrmig /getmigrationstate

When all DCs are ready, the output will look like this:

image

7. Migrate to Eliminated State - Finally, you will migrate to the Eliminated state, where DFSR is replicating SYSVOL and FRS is removed. Unlike the Prepared and Redirected states, there is no way to go backwards from this step - once executed, FRS is permanently stopped and cannot be configured again. On the PDC Emulator domain controller, run (as an elevated domain admin):

Dfsrmig /setglobalstate 3

Now you wait for this AD value on the PDCE to converge on all domain controllers, then for DFSR to switch to Eliminated state on each domain controller and update AD, and finally for that value to replicate back to the PDCE. Use the following command to see progress:

Dfsrmig /getmigrationstate

When all DCs are ready, the output will look like this:

image

Your migration is complete.

Express Migration

In this case, the health of AD and SYSVOL on all domain controllers is known to be healthy. For instance, you are using System Center Operations Manager to monitor your domain controllers and ensure that AD replication, SYSVOL availability, and free disk space are all nominal.

The goal of the Express Migration scenario is to migrate SYSVOL to DFSR with the ability to roll back during the process.

1. Ensure correct security policy - You must ensure that the built-in Administrators group has the “Manage Auditing and Security Log” user right on all your domain controllers. This is on by default, so if it’s not set, someone yanked it. Microsoft does not support removing that, no matter what you may have read elsewhere. To validate, examine the group policy applied to your domain controllers by using Gpresult.exe. For more info, examine KB2567421.

2. Migrate to Prepared State - Now you will migrate to the Prepared state, where both FRS and DFSR are replicating their own individual copies of SYSVOL, but the FRS copy mounts the SYSVOL and Netlogon shares. On the PDC Emulator domain controller, run (as an elevated domain admin):

Dfsrmig /setglobalstate 1

Now you wait for this AD value on the PDCE to converge on all domain controllers, then for DFSR to switch to Prepared state on each domain controller and update AD, and finally for that value to replicate back to the PDCE. Use the following command to see progress:

Dfsrmig /getmigrationstate

When all DCs are ready, the output will look like this:

image

As I mentioned in the advice section, you can speed this processing up with faster AD replication and polling.

3. Migrate to Redirected State - Now you will migrate to the Redirected state, where both FRS and DFSR are replicating their own individual copies of SYSVOL, but the DFSR copy mounts the SYSVOL and Netlogon shares. On the PDC Emulator domain controller, run (as an elevated domain admin):

Dfsrmig /setglobalstate 2

Now you wait for this AD value on the PDCE to converge on all domain controllers, then for DFSR to switch to Redirected state on each domain controller and update AD, and finally for that value to replicate back to the PDCE. Use the following command to see progress:

Dfsrmig /getmigrationstate

When all DCs are ready, the output will look like this:

image

4. Migrate to Eliminated State - Finally, you will migrate to the Eliminated state, where DFSR is replicating SYSVOL and FRS is removed. Unlike the Prepared and Redirected states, there is no way to go backwards from this step - once executed, FRS is permanently stopped and cannot be configured again. On the PDC Emulator domain controller, run (as an elevated domain admin):

Dfsrmig /setglobalstate 3

Now you wait for this AD value on the PDCE to converge on all domain controllers, then for DFSR to switch to Eliminated state on each domain controller and update AD, and finally for that value to replicate back to the PDCE. Use the following command to see progress:

Dfsrmig /getmigrationstate

When all DCs are ready, the output will look like this:

image

Your migration is complete.

Hyper Migration

In this case, the health of AD and SYSVOL on all domain controllers is known to be healthy. For instance, you are using System Center Operations Manager to monitor your domain controllers and ensure that AD replication, SYSVOL availability, and free disk space are all nominal.

The goal of the Hyper Migration scenario is to migrate SYSVOL to DFSR with the fewest steps and no ability to roll back the migration process once commenced.

1. Ensure correct security policy - You must ensure that the built-in Administrators group has the “Manage Auditing and Security Log” user right on all your domain controllers. This is on by default, so if it’s not set, someone yanked it. Microsoft does not support removing that, no matter what you may have read elsewhere. To validate, examine the group policy applied to your domain controllers using Gpresult.exe. For more info, examine KB2567421.

2. Migrate to Eliminated State - DFSR does not mandate that you must migrate through each stage at a time. If you wish, you can trigger migrating all the way to the Eliminated state immediately, where DFSR is replicating SYSVOL and FRS is removed. Unlike the incremental Prepared and Redirected states, there is no way to go backwards from this step - once executed, FRS is permanently stopped and cannot be configured again. On the PDC Emulator domain controller, run (as an elevated domain admin):

Dfsrmig /setglobalstate 3

image

Now you wait for this AD value on the PDCE to converge on all domain controllers, then for DFSR to switch to Eliminated state on each domain controller and update AD, and finally for that value to replicate back to the PDCE. This will happen for the Prepared, Redirected, and Eliminated stages sequentially, with no need to run each command. Use the following command to see progress:

Dfsrmig /getmigrationstate

When all DCs are ready, the output will look like this:

image

Your migration is complete.

Naturally, anything faster than Hyper Migration requires your own Schwarzschild Wormhole.

Final Notes

Since you are probably new to SYSVOL using DFSR - and maybe DFSR in general - I highly recommend you review these two KB articles:

They cover the scenario where DFSR may pause replication - due to a power failure or hardware problem - and wait for you to manually resume it. This initially leads to group policy not replicating and, more importantly, eventually leads to a quarantined server. With our latest hotfixes and operating systems, Microsoft recommends disabling this pausing behavior and allowing DFSR to resume automatically. If using Windows Server 2008 R2 or Windows Server 2012, use KB2846759 to always automatically resume replication (see the section “How to disable the Stop Replication functionality in AutoRecovery”). It’s a simple registry entry, and you can deploy it manually or by using Group Policy Preferences. Windows Server 2012 R2 and later default to auto-resuming, so there is nothing to do there.
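As a hedged sketch, the registry change described in KB2846759 can be applied with a single PowerShell line on each Windows Server 2008 R2 or Windows Server 2012 DC (verify the value name against the KB before deploying broadly):

# 0 = do not stop replication after an unexpected shutdown; resume automatically
New-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Services\DFSR\Parameters' -Name StopReplicationOnAutoRecovery -PropertyType DWord -Value 0 -Force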

We went from 52 pages down to a handful, and that was with plenty of screenshots and blather. Now you are running DFSR for SYSVOL and prepared for the future of Windows Server.

Until next time,

- Ned “Einstein–Rosen bridge” Pyle


Using ADFS authentication for Work Folders


Michael has posted a blog post on how to build a topology with Work Folders and ADFS; it provides a really good step-by-step guide, as well as the scripts. I want to build on that and show you some insight into how Work Folders uses ADFS authentication.

Overview

Work Folders supports two different authentication methods:

  • Windows integrated authentication
  • ADFS using OAuth 2.0 (ADFS on Windows Server 2012 R2)

When the Work Folders server is configured to use Windows integrated authentication, the client uses Kerberos when the device is logged on with the user’s domain credentials and connected to the corpnet. If the machine is connected over the internet, or logged on using a local account, Work Folders prompts for user credentials and uses Digest authentication.

When the Work Folders server is configured using ADFS, the admin needs to provide the ADFS URL (the URL for the federation service name). The admin also needs to configure the Work Folders relying party on the ADFS server first; this is covered in this blog post, so I’ll skip the details here.

The Work Folders admin can configure the authentication method on the server settings page, or by running the following cmdlet:

Set-SyncServerSetting -ADFSUrl "<Url for federation service>"

ADFS authentication workflow

When the Work Folders server is configured using ADFS, the client needs to authenticate with the ADFS server and obtain a token, which it then provides to the Work Folders server to get access. The diagram below shows the sequence:

clip_image002

  1. The client requests sync.
  2. The Work Folders server asks for an ADFS access token and passes back the ADFS URL.
  3. The client requests an access token from the ADFS server.
  4. Based on policy, ADFS prompts the user with an authentication page.
  5. The client sends credentials.
  6. The ADFS server returns an access token.
  7. The client requests sync using the access token.
  8. The Work Folders server impersonates the user by using the token (just like other authentication methods).

Workplace join

Workplace Join is a new feature introduced in Windows Server 2012 R2. The details about the feature can be found here: http://technet.microsoft.com/en-us/library/dn280945.aspx.

The benefit of using ADFS authentication with Work Folders is that it allows administrators to enforce device registration before corporate resources can be accessed, and/or to require multi-factor authentication for that access. These capabilities are supported by Workplace Join on the client and ADFS on Windows Server 2012 R2. To enable them, the ADFS admin configures the Work Folders relying party and specifies whether the device must be registered (Workplace joined) to access that resource.

In addition to these benefits, the Work Folders client authentication frequency is also different when the device is Workplace joined. The access token (acquired in step 6 of the workflow above) has a lifetime of 8 hours. When the access token expires, the user is prompted for authentication, and sync stops. If the device is Workplace joined, the ADFS server sends a refresh token along with the access token. The lifetime of the refresh token is configurable and has a default value of 7 days. With a valid refresh token, the user doesn’t need to be prompted for credentials; the Work Folders client presents the refresh token to the ADFS server to get a new access token. When the refresh token expires, the user is prompted again, and the authentication workflow cycles anew.

Workplace Join was introduced in Windows 8.1 and has just been released for Windows 7. You can find more here: http://technet.microsoft.com/en-us/library/dn609827.aspx

Suppress credential prompt for domain joined client

Because sync is a background process, users just expect it to work. With ADFS, domain-joined devices can take advantage of ADFS support for Windows authentication. For details about the ADFS user agent, see here: http://technet.microsoft.com/en-us/library/dn280949.aspx

With Work Folders, you can add the Work Folders as the supported user agent by running the following cmdlet on the ADFS server:

Set-AdfsProperties -WIASupportedUserAgents ((Get-AdfsProperties).WIASupportedUserAgents + 'MS_WorkFoldersClient')

This cmdlet adds “MS_WorkFoldersClient” to a list that ADFS recognizes, allowing the application (in this case, Work Folders) to use Windows integrated authentication with the logged-on user’s credentials. In short, if the user is logged on to the device with domain credentials and the device is connected to the corpnet, sync will not require the user to enter credentials.
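To confirm the change took effect, you can simply read the property back on the ADFS server:

(Get-AdfsProperties).WIASupportedUserAgents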

Conclusion

I hope this blog post helps you understand how ADFS authentication is supported by Work Folders. The ADFS server release supported by Work Folders is Windows Server 2012 R2. Want to try it out? You will find this setup guide blog post handy. As always, if you have questions not covered here, please raise them in the comments so that I can address them in upcoming postings.

Moving users to different Sync Shares


User folders are associated with a sync share, and there are times when you will need to move a user to a different sync share. For example, maybe the user changed jobs from HR to Finance, which has its own sync share. The process is pretty straightforward: you change the user’s security group memberships.

This blog post goes into detail on how this works, as well as how to assign security groups to sync shares.

How users are given access to sync shares

Access to sync shares is managed by the assignment of users to a specific sync share. Typically, you assign one or more security groups to a sync share. As a result, the users that belong to the assigned security groups are allowed to sync to that particular share.

To assign security groups to a sync share, use the following cmdlet:

Set-SyncShare -Name <sharename> -User <domain\SecurityGroupName>

You can also do so in the Work Folders page in Server Manager, by right-clicking the sync share, and then clicking Properties:

clip_image002

When you select a sync share in Server Manager, the Users tile shows all the users who belong to the assigned security group for that sync share. This provides an easy way to see all the users syncing to the share. You can right-click a user and then click Properties to find detailed information about a particular user.

Note We don’t recommend using built-in groups, such as “Domain Users”, for assigning users to sync shares due to the complexity it can create later if you want to move users to other sync shares. Additionally, the Server Manager Users tile won’t show users from built-in groups. You can do the assignment through Windows PowerShell, though once again, we don’t recommend it.

Moving users to a different sync share by assigning them to a different security group

When you need to move a user from one sync share to another (on the same server or on different servers), you can simply move the user to a different security group.

For example, let’s say that there are two sync shares on the server: Finance Share and HR Share. Each share is associated with a security group: Finance Users, and HR Users. A user (Amy) changes jobs from HR to Finance. You then update the group membership for Amy, moving her from HR Users to Finance Users.
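As a minimal sketch (the group and account names come from the example above), the move is just two Active Directory cmdlets:

# Move Amy from the HR sync share's group to the Finance sync share's group
Remove-ADGroupMember -Identity "HR Users" -Members amy -Confirm:$false
Add-ADGroupMember -Identity "Finance Users" -Members amy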

Amy already has Work Folders configured on her devices. Without her doing anything, the next time any of her devices syncs, the old sync share (HR Share) will reject the sync request because she’s no longer a member of the allowed security group. After the sync request is rejected, the client will go through the discovery process to find the new sync share – which is what we talk about next…

Discovery process

There are two main phases in discovery: Active Directory (AD) discovery, and local discovery.

AD discovery

AD discovery refers to the process of a sync server querying Active Directory to discover the sync server for a given user. The user attribute that stores the sync server for a user is called ms-ds-syncServerUrl. This blog post discusses the user attribute in detail. If AD discovery fails, local discovery will be performed on the server that handled the sync request. If AD discovery is successful, local discovery will be performed on the server that is listed for the user in the ms-ds-syncServerUrl attribute.
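If you want to see (or stamp) the attribute yourself, here is a hedged sketch using the AD cmdlets; the LDAP display name is assumed to be msDS-SyncServerUrl, and the server URL shown is hypothetical:

# Read the sync server URL recorded for a user
Get-ADUser amy -Properties msDS-SyncServerUrl | Select-Object Name, msDS-SyncServerUrl

# Optionally set it explicitly, which AD discovery will then use
Set-ADUser amy -Replace @{ 'msDS-SyncServerUrl' = 'https://sync.contoso.com' }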

Local discovery

Local discovery is about finding the correct sync share for a given user by checking which sync shares they have permission to sync to. Here are the possible results:

  • No match: There aren’t any sync shares on this server that the user can sync to. Local discovery will fail, and the user will be notified that they aren’t set up on the server to use Work Folders.
  • One match: If one sync share is found for the given user, sync will be performed between the user’s devices and this sync share on the server.
  • Multiple matches: This is an unsupported scenario, in which an administrator has mistakenly set up overlapping security groups for a given user. If this happens, Work Folders does its best to find a match. The goal is to find the existing user folder if possible:
  • If there is already a user folder created for the user on a single sync share, Work Folders uses the sync share that contains the user folder.
  • If there are no user folders for the user on any sync share, Work Folders picks one, and sticks with it for all future syncing.
  • If there are user folders for the user on more than one sync share, there is no way for Work Folders to figure out which one is correct. In this case, it displays an error on the device, which the administrator must correct.

Things that can delay discovery

Ideally, discovery happens as soon as group membership changes. In reality, there are some cases when discovery doesn’t happen immediately:

  • Because AD discovery relies on an attribute stored in AD, there can be delays due to AD replication if the sync servers are communicating with different domain controllers.
  • If the authentication method is ADFS, rediscovery will wait until the access token becomes invalid. The delay can be up to 15 minutes.
  • If the device is a domain-joined machine and the user logs on to the device using their domain credentials, Work Folders will use a Kerberos token for authentication. The Kerberos token lifetime is configurable by domain admins and is typically several hours by default. This page explains the details of how to configure Kerberos token policies.

Conclusion

AD discovery is useful when you have multiple sync servers - it makes it easy to move users to different sync shares on different sync servers without requiring them to do anything. The move occurs when the user’s device connects to the sync server. In Amy’s case, her device finds the new sync share seamlessly through discovery, letting her continue to sync her files without any hassles.

Storage Replica Guide Released for Windows Server Technical Preview


With the release of the Windows Server Technical Preview, we’ve unveiled a new feature, Storage Replica. Today we released a step-by-step guide to match.

Storage Replica enables storage-agnostic, block-level, synchronous replication between clusters or servers for disaster recovery, as well as stretching of a failover cluster for high availability. Synchronous replication enables mirroring of data in physical sites with crash-consistent volumes ensuring zero data loss at the file system level. Asynchronous replication allows site extension beyond metropolitan ranges with the possibility of data loss.

To help you get familiar with Storage Replica, we have a downloadable guide to provide you with step-by-step instructions for evaluating the Stretch Cluster and the Server-to-Server scenarios. These are both designed for Disaster Recovery and provide “over the river” synchronous metro replication.

clip_image002

Windows Server Technical Preview implements the following features in Storage Replica:

  • Type: Host-based
  • Synchronous: Yes
  • Asynchronous: Yes (server to server only)
  • Storage hardware agnostic: Yes
  • Replication unit: Volume (Partition)
  • Windows Server Stretch Cluster creation: Yes
  • Write order consistency across volumes: Yes
  • Transport: SMB3
  • Network: TCP/IP or RDMA
  • RDMA: iWARP, InfiniBand
  • Replication network port firewall requirements: Single IANA port (TCP 445 or 5445)
  • Multipath/Multichannel: Yes (SMB3)
  • Kerberos support: Yes
  • Over the wire encryption and signing: Yes (SMB3)
  • Per-volume failovers allowed: Yes
  • Dedup & BitLocker volume support: Yes
  • Management UI in-box: Windows PowerShell, Failover Cluster Manager

This is an early pre-release build. Many of the features and scenarios are still in development, and the experiences are still evolving. At this stage, Windows Server Technical Preview and Storage Replica are not intended for production environments, only for introductory evaluation.

Download the guide:

Download Windows Server Technical Preview evaluations:

For feedback:

Thanks for your evaluation and feedback; it is always appreciated. Much more varied content and news to come as the release cycle evolves.

- Ned “robot lawyer” Pyle

Come see Windows Server Technical Preview sessions at TechEd Europe next week!


Hi folks, Ned here again. I will be presenting at TechEd Europe in Barcelona Oct 28-31, with my focus on Storage Replica. There are plenty of great presentations on our latest Windows Server 2012 R2 tech, plus deep dives into new Windows Server Technical Preview solutions such as cluster rolling upgrades, scale-out storage changes, the new MS Cloud Platform System done in partnership with Dell, storage QoS, and more.

Here's a good intro to whet your appetite:

  • CDP-B222 Software Defined Storage in the Next Release of Windows Server - Tuesday, October 28 5:00 PM - 6:15 PM Room: Hall 8.0 Room B4
  • CDP-B225 Software Defined Compute in the Next Release of Windows Server Hyper-V - Tuesday, October 28 1:30 PM - 2:45 PM Room: Hall 8.0 Room E1
  • CDP-B246 Sneak Peek into the Next Release of Windows Server Hyper-V - Wednesday, October 29 8:30 AM - 9:45 AM Room: Hall 8.0 Room B4
  • CDP-B318 Building Scalable and Reliable Backup Solutions in the Next Release of Windows Server Hyper-V - Tuesday, October 28 1:30 PM - 2:45 PM Room: Hall 8.1 Room L
  • CDP-B323 Delivering Predictable Storage Performance with Storage Quality of Service in the Next Release of Windows Server - Wednesday, October 29 8:30 AM - 9:45 AM Room: Hall 8.0 Room E9
  • CDP-B325 Design Scale-Out File Server Clusters in the Next Release of Windows Server - Friday, October 31 8:30 AM - 9:45 AM Room: Hall 8.0 Room B1
  • CDP-B330 Network Infrastructure Services in the Next Release of Windows Server for Datacenter Network Operations - Wednesday, October 29 10:15 AM - 11:30 AM Room: Hall 8.0 Room B1
  • CDP-B339 Leveraging SAN Replication for Enterprise Grade Disaster Recovery with Azure Site Recovery and System Center - Wednesday, October 29 3:15 PM - 4:30 PM Room: Hall 8.1 Room I
  • CDP-B340 Using Tiered Storage Spaces for Greater Performance and Lower Costs - Thursday, October 30 12:00 PM - 1:15 PM Room: Hall 8.0 Room B4
  • CDP-B341 Architectural Deep Dive into the Microsoft Cloud Platform System - Wednesday, October 29 12:00 PM - 1:15 PM Room: Hall 8.0 Room B1
  • CDP-B352 Stretching Failover Clusters and Using Storage Replica for Disaster Recovery in the Next Release of Windows Server - Wednesday, October 29 5:00 PM - 6:15 PM Room: Hall 8.1 Room L  me me me :-)
  • CDP-B354 Advantages of Upgrading Your Private Cloud Infrastructure in the Next Release of Windows Server - Wednesday, October 29 10:15 AM - 11:30 AM Room: Hall 8.1 Room H

 

Amazingly, there are still a handful of slots left. Don't miss out on your chance to talk to the Microsoft engineering teams that make your favorite - or least favorite - products and give us your feedback, comments, questions, and desires. I hope to see you all down in the booths demanding swag!

 - Ned "¡Hola!" Pyle


Automatic RMS Protection of non-MS Office files using FCI and the Rights Management Cmdlets


File Classification Infrastructure (FCI) is a built-in feature of Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2 that helps IT admins manage their organization's data on file servers by providing automatic classification processes. Using rules constructed with regular expressions, PowerShell, and/or .NET or native modules, FCI can identify sensitive files and perform actions such as encrypting Microsoft Office documents with Rights Management Services (RMS), expiring files that have passed a defined date limit, or performing other custom actions (defined through a script or program). FCI provides an extensible infrastructure that enables organizations to construct rich end-to-end classification solutions built upon Windows. For more information on FCI, please check this blog post.

By default, FCI's built-in tasks can only encrypt Microsoft Office documents with Rights Management Services (RMS). By using a custom FCI task and the Rights Management (Microsoft.Protection) cmdlets, IT admins can apply RMS protection to any file in a file share. Once the files are protected, only authorized users will be able to access those files, even if they are copied to another location.

Install the Microsoft.Protection PowerShell Cmdlets

  1. Install the AD RMS Client. This can be done using PowerShell with the following commands:
    > Invoke-WebRequest http://download.microsoft.com/download/3/C/F/3CF781F5-7D29-4035-9265-C34FF2369FA2/setup_msipc_x64.exe -OutFile setup_msipc_x64.exe
    > .\setup_msipc_x64.exe /quiet
  2. Download the Microsoft.Protection PowerShell cmdlets (available in CTP through Microsoft Connect; nonetheless, fully supported in production environments):
    1. Navigate to Microsoft Connect and sign in with your Microsoft Account.
    2. Register with the Rights Management project for the Microsoft.Protection PowerShell cmdlets (if you have already done this, please skip to step C). Search on the front page of Microsoft Connect for Rights Management Services. The appropriate program to 'join' is "Rights Management Services SDK".
    3. Download the Microsoft.Protection PowerShell cmdlets from HERE.
    4. Unzip the Microsoft.Protection zip file and run the following commands as an administrator (in the newly unzipped folder):
      > Set-ExecutionPolicy Unrestricted -Force
      > Get-ChildItem | Foreach-Object { Remove-Item $_.Name -Stream Zone.Identifier -ErrorAction Ignore }
      > .\Install.ps1
  3. Add the necessary registry keys and values to the registry to allow non-Office files to be encrypted by the Microsoft.Protection cmdlets. This can be done automatically with the following PowerShell commands:
    > New-Item -Path HKLM:\Software\Microsoft\MSIPC\FileProtection\*
    > New-ItemProperty -Path HKLM:\Software\Microsoft\MSIPC\FileProtection\* -Name Encryption -PropertyType String -Value Pfile
  4. Reboot your server before continuing on.

Configure the Microsoft.Protection Cmdlets to be used with Azure RMS

The Microsoft.Protection cmdlets can be used with either the on-premises version of RMS or with Azure RMS. If you intend to use FCI with the on-premises version of RMS, you may skip this section. To enable Azure RMS, complete the following steps:

  1. Enable Azure Rights Management Service:
    1. Download the Microsoft Online Services Sign-In Assistant from here.
      > Invoke-WebRequest http://download.microsoft.com/download/C/1/7/C17BEB52-BB8A-4C7F-86F3-AAF17BB3682A/msoidcli_64.msi -OutFile msoidcli_64.msi
      > .\msoidcli_64.msi /quiet
    2. Download and install the Azure Rights Management Administration Tool from here.
      > Invoke-WebRequest http://download.microsoft.com/download/1/6/6/166A2668-2FA6-4C8C-BBC5-93409D47B339/WindowsAzureADRightsManagementAdministration_x64.exe -OutFile WindowsAzureADRightsManagementAdministration_x64.exe
      > .\WindowsAzureADRightsManagementAdministration_x64.exe /quiet
    3. Import the Azure RMS module by using the following cmdlet:
      > Import-Module AADRM
    4. Connect to the service with your administrator credentials (will prompt for credentials):
      > Connect-AadrmService -Verbose
    5. Enable Azure RMS in your organization:
      > Enable-Aadrm
    6. Capture the AADRM Configuration:
      > $AadrmConfig = Get-AadrmConfiguration
  2. Services need to use service principals (also known as service identities), which are a type of credential configured globally for access control. Service principals allow your service to authenticate directly with Microsoft Azure AD and to protect information by using the Microsoft Azure AD Rights Management service. To create a service principal:
    1. Install the Microsoft Azure AD Module for Windows PowerShell from here.
      > Invoke-WebRequest http://go.microsoft.com/fwlink/p/?linkid=236297 -OutFile AdministrationConfig-en.msi
      > .\AdministrationConfig-en.msi /quiet
    2. Import the Microsoft Azure AD module using the following cmdlet:
      > Import-Module MSOnline
    3. Connect to your online service with your administrator credentials (will prompt for credentials):
      > Connect-MsolService
    4. Create a new service principal by running:
      > $ServicePrincipal = New-MsolServicePrincipal -DisplayName ExampleServicePrincipal
    5. Make note of the symmetric key that is written out to the window. We will need it going forward, and the symmetric key is only available when it is created.
  3. Configure the Microsoft.Protection cmdlets to work with Azure RMS:
    > Set-RmsServerAuthentication -Key <PASTE SYMMETRIC KEY HERE> -AppPrincipalId $ServicePrincipal -BposTenantId $AadrmConfig.BPOSId

FCI Integration with the Microsoft.Protection Cmdlets

To protect non-Office files with RMS, we need to create a PowerShell script that uses the Microsoft.Protection cmdlets. Here is a working sample script that encrypts non-Office documents. You may wish to modify it to perform more advanced functions (such as emailing the owner to notify them that their file was encrypted):

# Parameters to set in the File Management Task in File Server Resource Manager
param([string]$FileToEncrypt, [string]$RmsTemplate="", [string]$RmsServer="", [string]$OwnerEmail)

#
# Main Routine Begin 
#
Add-PSSnapin Microsoft.Protection

# Double check $RmsServer matches an existing server
if ($RmsServer.Trim() -ne "") {
    $count = (Get-RMSServer | Where-Object { $_.DisplayName -eq $RmsServer.Trim() }).Count
    if ($count -ne 1) {
        throw [System.ArgumentException] "RmsServer does not match any visible RMS Servers"
        exit -1
    }
}

# Lookup RMS Template ID
if ($RmsTemplate.Trim() -ne "") {
    if ($RmsServer.Trim() -ne "") {
        $template = (Get-RMSTemplate -RmsServer $RmsServer.Trim() | Where-Object { $_.Name -eq $RmsTemplate.Trim() })
    }
    else {
        $template = (Get-RMSTemplate | Where-Object { $_.Name -eq $RmsTemplate.Trim() })
    }

    if ($template -ne $null) {
        $RmsTemplateId = $template.TemplateId
    }
    else {
        throw [System.ArgumentException] "The RmsTemplate provided does not match any visible RMS Templates"
        exit -1
    }
}
else {
    throw [System.ArgumentException] "The RmsTemplate provided is empty"
    exit -1
}


# Do not attempt to reencrypt files
if ($FileToEncrypt -like "*.pfile") {
	exit 0
}

$EncryptedFile = ""

try {
	# Encrypt file 
	$out = Protect-RMSFile -File $FileToEncrypt -TemplateID $RmsTemplateId -OwnerEmail $OwnerEmail
	$EncryptedFile = $out.EncryptedFile
} 
catch {
	$ExceptionMessage = "Encryption of " + $FileToEncrypt + " failed."
	throw [System.Exception] $ExceptionMessage
    exit -1
}

#exit 0
			

Copy the above script to a new file called C:\Windows\System32\FciRmsFileProtection.ps1.

The following PowerShell commands create a custom file management task that uses this script to RMS-encrypt a file whenever the file is classified as HBI. You can also create a custom file management task from the FSRM GUI. Replace the RMS template with one that matches a template in your organization (more information about how to find this is below; see Get-RMSTemplate):

$Command = "C:\Windows\System32\WindowsPowerShell\v1.0\PowerShell.exe"
$CommandParameters = "C:\Windows\System32\FciRmsFileProtection.ps1 -FileToEncrypt [Source File Path] -RmsTemplate 'Contoso All - All Rights' -OwnerEmail [Source File Owner Email]"
$Action = New-FSRMFmjAction -Type Custom -Command $Command -CommandParameters $CommandParameters -SecurityLevel LocalSystem -WorkingDirectory "C:\Windows\System32\WindowsPowerShell\v1.0\"

$Condition = New-FsrmFmjCondition -Property "Impact_MS" -Condition Equal -Value 3000

$Schedule = New-FsrmScheduledTask -Time (Get-Date) -Weekly Sunday

New-FsrmFileManagementJob -Name "Test RMS Encrypt" -Namespace "C:\Shares" -Action $Action -Condition $Condition -Schedule $Schedule -Continuous

Learn more about the Microsoft.Protection Cmdlets

  • To get the RMS server name to be used, run this command:
    Name: Get-RMSServer
    Synopsis: Returns the list of all AD RMS servers that can issue templates for the user.
    Syntax: Get-RMSServer [<CommonParameters>]
    Description: The Get-RMSServer cmdlet returns a list of all AD RMS servers that can issue templates for the current user.
  • To get the RMS template GUID to be used, run this command:
    Name: Get-RMSTemplate
    Synopsis: Returns a list of AD RMS templates.
    Syntax: Get-RMSTemplate [-Force] [-RMSServer <server name>] [<CommonParameters>]
    Description: The Get-RMSTemplate cmdlet returns a list of templates.
  • To protect a file:
    Name: Protect-RMSFile
    Synopsis: Protects the specified file, or the files in the specified folder, by using RMS encryption.
    Syntax: Protect-RMSFile -File <file or folder path> [-DoNotPersistEncryptionKey] [-OutputFolder <folder path>] [-TemplateId <template GUID>] [<CommonParameters>]
    Description: The Protect-RMSFile cmdlet protects and encrypts a specified file or the files in a specified folder if they were previously unprotected. The Protect-RMSFile cmdlet will run and execute in the following modes:
    1. Encrypt a file and let it be encrypted in the default location.
    2. Encrypt a file and let the encrypted file be placed at a new location.
    3. Encrypt a folder. All files inside the folder will be encrypted.
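Putting the cmdlets above together, a quick interactive test might look like this hedged sketch; the template name, file path, and email address are placeholders you would replace with your own:

Add-PSSnapin Microsoft.Protection
Get-RMSServer
$template = Get-RMSTemplate | Where-Object { $_.Name -eq 'Contoso All - All Rights' }
Protect-RMSFile -File 'C:\Shares\Finance\report.txt' -TemplateId $template.TemplateId -OwnerEmail 'amy@contoso.com'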

RMS Protected Files on Non-Windows Machines

Files protected by the Cmdlets are accessible by users on all platforms (Android, iOS, Mac, Windows Phone, and Windows) using the RMS sharing apps.

Cluster aggregated view in Windows Server 2012 R2 Storage Provider for a symmetric storage configuration


Blog written by Sripresanna in the Storage, Network & Print Team

 

Cluster aggregated view for Windows Storage Provider

 

In this blog, I’d like to explain the new cluster aggregated view feature that is part of Storage Management in Windows Server 2012 R2. For a quick overview of Storage Management at Microsoft, refer to Windows Server 2012 R2 Storage (scroll to SM API) and Getting Started with SMI-S.

In a cluster environment, there are two types of storage subsystem:

1. Standalone subsystem: Storage resources that are directly connected to the storage node.

2. Cluster subsystem: Storage resources that are either clustered or clusterable.

a) Clustered - Storage resources that are already part of the cluster (indicated by the IsClustered property).

b) Clusterable – Storage resources that can potentially be added to the cluster. If a storage object is reachable from a cluster node, then that storage object is considered clusterable. Disks (even if they are not fully shared across all cluster nodes) with the following bus types are in the clusterable category: “iSCSI”, “SAS”, “Fibre Channel”, and potentially “Storage Spaces” (if the pool is in shared view).

Challenge: In Windows 2008, the management layer for cluster storage required the administrator to do extra work to view all the storage resources. The administrator had to go to each cluster node, enumerate the storage, and combine the lists to understand which storage was clustered and which was local to the node.

Solution: In Windows 2012 R2, we introduced a feature called cluster aggregated view. With this feature, the Storage Provider performs the above tasks for the administrator and presents a unified, cluster aggregated view. As a result, the administrator can view both local and cluster subsystem resources and uniquely identify the objects from any cluster node without performing any extra tasks. This saves time and makes managing the storage subsystems easier. The feature is implemented in the Windows storage provider, and other SMP providers can choose to implement it as well.

 

Getting Started Guide: In this part, I’ll walk you step by step through the cluster aggregated view feature in Windows Server 2012 R2 and its implications for a symmetric storage configuration.

In a symmetric storage configuration, all the cluster-capable drives are connected to all the storage nodes. In this example, there are two cluster nodes, Keshan-VM-4 and Keshan-VM-7. The list below shows the storage subsystems for both nodes.

 

  • Keshan-VM-4 – Standalone subsystem: Storage Spaces on Keshan-VM-4; Cluster subsystem: Clustered Storage Spaces on keshan-th1
  • Keshan-VM-7 – Standalone subsystem: Storage Spaces on Keshan-VM-7; Cluster subsystem: Clustered Storage Spaces on keshan-th1

 

clip_image002

clip_image004

PowerShell snippet showing the Storage Subsystem from node1 and node2

 

a. Enumerate physical disks in each subsystem

 

Enumerating physical disks (using Get-PhysicalDisk) shows drives in the standalone subsystem of the current node and the aggregation of drives in the cluster subsystem across all cluster nodes.

Keshan-VM-4 has only the OS disk in its standalone subsystem and Keshan-VM-7 has three. On both nodes the cluster subsystem contains the 10 SAS disks shared between the two cluster nodes.

clip_image006

PowerShell snippet showing the Physical disks in each Storage subsystem on node - Keshan-VM-4

clip_image008

PowerShell snippet showing the Physical disks in each Storage subsystem on node - Keshan-VM-7
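For reference, a hedged sketch of the kind of commands the screenshots above illustrate, assuming the Storage module’s subsystem-to-disk pipeline association:

# List the physical disks behind each storage subsystem visible from this node
Get-StorageSubSystem | ForEach-Object {
    "=== $($_.FriendlyName) ==="
    $_ | Get-PhysicalDisk | Format-Table FriendlyName, BusType, CanPool, Size -AutoSize
}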

 

b. Creating a Storage pool

 

In Windows 8/Windows Server 2012, a pool was created using drives whose “CanPool” property is “true”. In R2, this still works for a standalone system containing only one subsystem. On a cluster node, which now has two subsystems, here’s the mechanism to create a pool.

To create a pool in the standalone subsystem, use the local poolable physical disks. Enumerating the pools using Get-StoragePool shows the pools in the standalone subsystem of the current node and the aggregation of the pools in the cluster subsystem across all cluster nodes. The newly created pool “localpool” is in the local subsystem of the node “Keshan-VM-7”.

clip_image010

“localpool” will not be seen in the pool aggregated view on the other cluster node “Keshan-VM-4”

clip_image012

PowerShell snippet to create a pool on local subsystem of Keshan-VM-4

When the pool is created in the cluster subsystem, it will appear in the pool aggregated view on all of the cluster nodes.

clip_image014

PowerShell snippet to create a pool on the cluster subsystem from node Keshan-VM-4
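For reference, hedged sketches of the two pool-creation commands the screenshots illustrate; the subsystem friendly names match this example and will differ in your environment:

# Create a pool in the standalone (local) subsystem of Keshan-VM-7
$localSub = Get-StorageSubSystem -FriendlyName 'Storage Spaces on Keshan-VM-7'
$localSub | New-StoragePool -FriendlyName 'localpool' -PhysicalDisks ($localSub | Get-PhysicalDisk | Where-Object { $_.CanPool })

# Create a pool in the cluster subsystem; it appears (and is clustered by default) on every node
$cluSub = Get-StorageSubSystem -FriendlyName 'Clustered Storage Spaces on keshan-th1'
$cluSub | New-StoragePool -FriendlyName 'cluspool' -PhysicalDisks ($cluSub | Get-PhysicalDisk | Where-Object { $_.CanPool })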

 

A few things to note:

- A pool can be created only using drives that belong to the same subsystem

- A pool created on a cluster subsystem is by default clustered (added to the cluster). To disable auto-clustering of a pool, the “EnableAutomaticClustering” flag needs to be set to “false” on the cluster subsystem

clip_image016

clip_image018

From then on, to add the pool to the cluster, the Add-ClusterResource cmdlet should be used.

 

c. Creating virtual disk

 

clip_image020

PowerShell snippet to enumerate pool in local Subsystem, create Virtual Disk and enumerate it

clip_image022

PowerShell snippet to enumerate pool in cluster Subsystem, create Virtual Disk and enumerate it

 

Pool and Space owner nodes

 

In R2, a new StorageNode object has been introduced to represent a node. This is used in the following scenarios:

- To determine storage topology i.e. connectivity of physical disks and enclosures to cluster nodes

- To find out on which node the pool/Space is Read-Write – pool owner and space owner nodes.

The below output for Get-StorageNode executed from cluster node “Keshan-VM-4” shows the storage nodes in the local and cluster subsystems.

clip_image024

In the cluster, the pool and the Space are Read-Write on only one of the nodes. To get the owner node, use the StorageNode-to-StoragePool/VirtualDisk association. The screenshot below shows that node “Keshan-VM-4” is the pool owner of the clustered pool “cluspool”. This was run from “Keshan-VM-4”.

clip_image026

The below snippet shows that “Keshan-VM-4” is also the Space owner node for “clusVD”.

clip_image028

Cluster aggregated view in Windows Server 2012 R2 Storage Provider for an asymmetric storage configuration


Blog written by Sripresanna in the Storage, Network & Print Team.

 

Cluster aggregated view for Windows Storage Provider (Asymmetric storage configuration)

 

This is a continuation of the previous blog, where I explained the cluster aggregated view in the Windows Storage Provider for a symmetric storage configuration. We strongly encourage using a symmetric storage configuration; this blog just illustrates how the cluster aggregated view feature (in Windows Server 2012 R2) works for an asymmetric storage configuration.

Consider the asymmetric storage configuration in the below cluster:

clip_image002

 

The cluster aggregated view for storage objects works the same way for symmetric and asymmetric (storage not fully shared across all cluster nodes) configurations. The two differences for asymmetric storage are:

  1. All SAS-based physical disks are shown in the cluster aggregated view, even though not all of them can be part of the same concrete storage pool, since there isn’t a common node to which they are all connected.
    • For example, if a SAS physical disk PD4 is connected only to cluster node N3 and is not an OS disk (other criteria, such as size and having no partitions, may also apply), the cluster UI will not show this disk because it cannot be added to the cluster (it is not shared with any of the other cluster nodes). However, the aggregated view (Get-PhysicalDisk) will show it.
    • This is because we cannot detect whether the drive is truly intended to be connected to just that node or whether the link from the drive to another cluster node is intermittently down.
  2. The primordial pool of the cluster subsystem contains the aggregated set of drives with “CanPool” equal to “true”, but creating a concrete pool from this set may fail if that is the only criterion used to pick the drives. Note that concrete pools can be formed only with drives that are:
    • Physically connected to a given set of nodes
    • All within the same storage subsystem

The cluster primordial pool will contain these physical disks: {PD1, PD2, PD3, PD4}. To create a concrete pool on the cluster, use the StorageNode-to-PhysicalDisk association to determine the set of disks that are physically attached to a given set of nodes (and have CanPool = true), and then form the concrete pool with those physical disks. In this case, such valid sets would be (sample script in the last section):

N1-N2-N3/N1-N2/N2-N3/N1-N3/N1: {PD1, PD3}

N2: {PD1, PD2, PD3}, {PD1, PD2}, {PD2, PD3}, {PD1, PD3}, {PD1}, {PD2}, {PD3}

N3: {PD1, PD3, PD4}, {PD1, PD3}, {PD3, PD4}, {PD1, PD4}, {PD1}, {PD3}, {PD4}  

Here’s the asymmetric storage cluster which will be used to walkthrough the below workflow.

clip_image004

 

Two Storage Subsystems

 

This also has the local and cluster storage subsystems

clip_image006

clip_image008

PowerShell snippet to show the 2 Storage subsystem on each node

 

a. Enumerate physical disks in each subsystem

 

clip_image010

PowerShell snippet showing the Physical Disk in each Storage subsystem on node 1, 2, 3 and 4

 

b. Creating a Storage pool

 

The two PowerShell snippets below enumerate the primordial pools in the local and cluster storage subsystems. You will notice that the primordial pool in the cluster storage subsystem shows the same output from all nodes.

clip_image012

clip_image014

clip_image016

PowerShell snippet to create a pool on local storage system of node1 & enumerate

clip_image018

PowerShell snippet to create a pool on cluster subsystem from node1 & enumerate

 

A few things to note:

  • A pool can be created only using drives that belong to the same subsystem
  • A pool can be created using drives that are physically connected to a given set of nodes.
  • A pool created on a cluster subsystem is by default clustered (added to the cluster). To disable auto-clustering of a pool, the “EnableAutomaticClustering” flag needs to be set to “false” on the cluster subsystem

The below PowerShell snippet illustrates this.

clip_image020

Set-StorageSubsystem -EnableAutomaticClustering $false on the clustered subsystem

clip_image022

From then on, to add the pool to the cluster, the Add-ClusterResource cmdlet should be used.

 

c. Creating virtual disk

 

clip_image024

PowerShell snippet to enumerate pool in local Storage System, create Virtual Disk and enumerate it

clip_image026

PowerShell snippet to enumerate the pool in the cluster storage subsystem, create a virtual disk, and enumerate it

clip_image028

 

Script to generate sets

 

$count = Read-Host -Prompt "Enter number of nodes"
[String[]] $nodesarray = @()
For ($i=0; $i -lt $count; $i++)
{
  $nodesarray += Read-Host -Prompt "Enter node"
}
# Start with the unique IDs of the physical disks visible from the first node
$temp = Get-StorageNode -Name $nodesarray[0] | Get-PhysicalDisk | Select-Object UniqueId
$PD = $temp | Select-Object -ExpandProperty UniqueId -Unique
$NPD = $PD
# Intersect with the disks visible from each remaining node, leaving only the disks
# that are physically connected to every node that was entered
foreach ($element in $nodesarray)
{
  $temp = Get-StorageNode -Name $element | Get-PhysicalDisk | Select-Object UniqueId
  $PD = $temp | Select-Object -ExpandProperty UniqueId -Unique
  $NPD = $NPD | Where-Object { $PD -contains $_ }
}
# Resolve the surviving unique IDs back to PhysicalDisk objects
$PD = @()
foreach ($element in $NPD)
{
  Write-Host $element
  $PD += Get-PhysicalDisk -UniqueId $element
}
$PD
# Create a concrete pool on the cluster subsystem from the common set of disks
Get-StorageSubSystem -FriendlyName "clu*" | New-StoragePool -FriendlyName "testpool" -PhysicalDisks $PD


Sizing Volumes for Data Deduplication in Windows Server


Introduction

One of the most common questions the Data Deduplication team gets seems deceptively simple: "How big can I make my deduplicated volumes?"

The short answer is: It depends on your hardware and workload.

A slightly longer answer is: It depends primarily on how much and how frequently the data on the volume changes, and on the data access throughput rates of the disk storage subsystem.

The Data Deduplication feature in Windows Server performs a number of IO- and compute-intensive operations. In most deployments, deduplication operates in the background or on a daily schedule, processing that day's new or modified data (i.e., data "churn"). As long as deduplication is able to optimize all of the data churn on a daily basis, the volume size will work for deduplication. On the other hand, we've seen customers create a 64 TB volume, enable deduplication, and then notice low optimization rates. This is simply due to deduplication not being able to keep up with the incoming churn from a dataset that is too large for a single volume. Deduplication jobs in Windows Server 2012 and 2012 R2 are scoped at the volume level and are single threaded (one core per volume). Therefore, to exploit the additional compute power of a machine with deduplication-enabled volumes, the dataset should be distributed over more volumes instead of being placed on a single large volume.

Checking Your Current Configuration

If you have an existing system with deduplication enabled on one or more volumes, you can do a quick check to see if your existing volume sizes are adequate.

The following script can help quickly answer whether your current deduplication volume size is appropriate for the workload churn happening on the storage, or whether optimization is regularly falling behind.

  • Run your workload normally on the volume as intended (store data there and use it as you would in production)
  • Run this script and note the result:

$ddpVol = Get-DedupStatus <volume>

switch ($ddpVol.LastOptimizationResult) { 

   0 { write-host "Volume size is appropriate for server." }

   2153141053 { write-host "The volume could not be optimized in the time available. If this persists over time, this volume may be too large for deduplication to keep up on this server." }

   Default { write-host "The last optimization job for this volume failed with an error. See the Deduplication event log for more details." }

}

If the result is that the volume size is appropriate for your server, then you can stop here (and work on your other tasks!)

If the result from the above script is that the volume cannot be optimized in the time available, administrators should determine the appropriate size of the volume for the given time window to complete optimization.

Estimating Deduplication Volume Size

Let's start with some basic principles:

  • Deduplication optimization needs to be able to keep up with the daily data churn
  • The total amount of churn scales with the size of the volume
  • The speed of deduplication optimization depends significantly on the data access throughput rates of the disk storage subsystem.

Therefore, to estimate the maximum size for a deduplicated volume, you need to understand the size of the daily data churn and the speed of optimization processing.

The following sections provide guidance on how to determine maximum volume size using two different methods to determine data churn and deduplication processing speed:

  • Method 1: Use reference data from our internal testing to estimate the values for your system
  • Method 2: Perform measurements directly on your system based on representative samples of your data

Scripts are provided to then calculate the maximum volume size using these values.

  

Estimating Deduplication Volume Size - Method 1 (Easier but less accurate)

From internal testing, we have measured deduplication processing, or throughput, rates that vary depending on the combination of the underlying hardware as well as the types of workloads being deduplicated. These measured rates can be used as reference points for estimating the rates for your target configuration. The assumption is that you can scale these values according to your estimate of your system and data workload.

For roughly estimating deduplication throughput, we have broken data workloads into two broad types.

  • General-Purpose File Server– Characterized by existing data that is relatively static or with few/infrequent changes and new data that is generally created as new files
  • Hyper-V (VDI and virtualized backup)– Characterized by Virtual Machine data which is stored in VHD files. These files are typically held open for long periods of time with new data in the form of frequent updates to the VHD file.

Notice that the form the data churn takes is very different between the general-purpose file server and the Hyper-V workloads. With the general-purpose file server, data churn usually takes the form of new files. With Hyper-V, data churn takes the form of modifications to the VHD file.

Because of this difference, for the general-purpose file server we normally talk about deduplication throughput in terms of time to optimize the amount of new file data added and for Hyper-V we normally talk about this in terms of time to re-optimize an entire VHD file with a percentage of changed data. The two sections below show how to do the volume size estimate for these two workloads for Method 1.

Notes on the script examples

The script examples given in this section make two important assumptions:

  • All storage size numbers assume binary prefixes. Throughout this article, binary prefixes are implied when measuring storage sizes. This means that when the term "megabyte", or "MB", is used, this means (1024)*(1024) bytes, and when the term "gigabyte", or "GB", is used this means (1024)*(#MBs), and so on. All calculations and displayed numbers for storage sizes in Windows Server 2012 and Windows Server 2012 R2 follow this same convention.
  • Throughput processing time estimates are rounded up. Queuing theory states that a system's processing (service) rate must be greater than (and not just equal to) the incoming job (generation) rate or eventually the queue will always grow to infinity. The scripts round up the ratio of the optimization time required to the optimization window length. However, it is recommended to be conservative when specifying your daily optimization window and in general to use a lower number than the maximum time expected. If your environment is expected to have a high level of variability in data churn, further scale down your estimated optimization window length accordingly.

General Purpose File Server – Method 1

As noted above, the deduplication of general-purpose file server workloads is primarily characterized by the optimization throughput of new data files. We have taken measurements of this throughput rate for two different hardware configurations running both Windows Server 2012 and Windows Server 2012 R2. The details of the system configurations are listed below. Since the throughput rate is primarily dependent on the overall performance of the storage subsystem, you can scale these rates according to your estimate of your system's performance compared to these reference configurations. Scale up the throughput for higher performance storage and scale down the throughput for lower performance storage.

The table below lists the typical optimization throughput rates for new file data for General Purpose File Server workloads on the two tested reference systems.

Deduplication throughput rates for new file data (general-purpose file server workload)

                                      System 1                          System 2
Drive types                           SATA (7.2K RPM)                   SAS (15K RPM)
Raw disk speed (read/write)           129 MBps / 109 MBps               204 MBps / 202 MBps
Drive configuration                   3 drives, spanned (RAID 0)        4 drives, spanned (RAID 0)
                                      into a single volume              into a single volume
Memory                                12 GB                             16 GB
Processor                             2.13 GHz, 1 x L5630, quad core    2.13 GHz, 1 x L5630, quad core
                                      with Hyper-Threading              with Hyper-Threading
Throughput (Windows Server 2012)      ~22 MB/s                          ~26 MB/s
Throughput (Windows Server 2012 R2)   ~23-31 MB/s                       ~45-50 MB/s

  

Two points to note from the measured throughput rates in the table:

  • Throughput increases from System 1 to System 2 as expected given the increase in drive performance and number of drives used
  • Throughput increases overall in the Windows Server 2012 R2 release, and more for the SAS configuration. This is due to overall efficiency enhancements as well as the use of read-ahead which leverages the queueing capabilities of SAS drives.

As a rough guideline, typical churn rates for General Purpose File Servers fall in the 1% to 6% range. For the examples below, a conservative estimate of 5% is used.

Given the typical optimization throughput values from the table and using an estimate of the churn rates of the files, administrators can estimate if deduplication can keep up with their needs by using the following script to calculate a volume size recommendation.

# General Purpose File Server (GPFS) workload volume size estimation
#
# TotalVolumeSizeGB              = total size in GB of all volumes that host data to be deduplicated
# DailyChurnPercentage           = percentage of data churned (new data or modified data) daily
# OptimizationThroughputMB       = measured/estimated optimization throughput in MB/s
# DailyOptimizationWindowHours   = 24 hours for background mode deduplication, or daily schedule length for throughput optimization
# DeduplicationSavingsPercentage = measured/estimated deduplication savings percentage (0.00 - 1.00)
# FreeSpacePercentage            = it is recommended to always leave some free space on the volumes, such as 10% or twice the expected churn

write-host "GPFS workload volume size estimation"

$TotalVolumeSizeGB = Read-Host 'Total Volume Size (in GB)'
$DailyChurnPercentage = Read-Host 'Percentage data churn (example 5 for 5%)'
$OptimizationThroughputMB = Read-Host 'Optimization Throughput (in MB/s)'
$DailyOptimizationWindowHours = Read-Host 'Daily Optimization Window (in hours)'
$DeduplicationSavingsPercentage = Read-Host 'Deduplication Savings percentage (example 70 for 70%)'
$FreeSpacePercentage = Read-Host 'Percentage allocated free space on volume (example 10 for 10%)'

# Convert the percentage inputs to fractions
$DailyChurnPercentage = $DailyChurnPercentage/100
$DeduplicationSavingsPercentage = $DeduplicationSavingsPercentage/100
$FreeSpacePercentage = $FreeSpacePercentage/100

# Total logical data size
$DataLogicalSizeGB = $TotalVolumeSizeGB * (1 - $FreeSpacePercentage) / (1 - $DeduplicationSavingsPercentage)

# Data to optimize daily
$DataToOptimizeGB = $DailyChurnPercentage * $DataLogicalSizeGB

# Time required to optimize that data (GB to MB via *1024, seconds to hours via /3600)
$OptimizationTimeHours = ($DataToOptimizeGB / $OptimizationThroughputMB) * 1024 / 3600

# Number of volumes required to fit within the optimization window
$VolumeCount = [System.Math]::Ceiling($OptimizationTimeHours / $DailyOptimizationWindowHours)

# Recommended volume size
$VolumeSize = $TotalVolumeSizeGB / $VolumeCount

write-host
write-host "Data to optimize daily: $DataToOptimizeGB GB"
write-host "Hours required to optimize data: $OptimizationTimeHours"
write-host "$VolumeCount volume(s) of size $VolumeSize GB is recommended"
write-host

Example 1:

Assume a general-purpose file server with 8 TB of SAS storage is running Windows Server 2012 R2 with deduplication enabled, and that deduplication is scheduled to operate in throughput mode at night for 12 hours. From the Server Manager UI or the Get-DedupVolume cmdlet, the admin sees deduplication is reporting 70% savings.

Using the table above, we get the typical optimization throughput for SAS (45 MB/s) and assume 5% file churn for the General Purpose File Server.

After plugging the scenario's input values into the script:

PS C:\deduptest> .\calculate-gpfs.ps1

GPFS workload volume size estimation

Total Volume Size (in GB): 8192

Percentage data churn (example 5 for 5%): 5

Optimization Throughput (in MB/s): 45

Daily Optimization Window (in hours): 12

Deduplication Savings percentage (example 70 for 70%): 70

Percentage allocated free space on volume (example 10 for 10%): 10

the calculation script outputs:

Data to optimize daily: 1365.33 GB

Hours required to optimize data: 8.630

1 volume(s) of size 8192 GB is recommended.

So, we can expect a server with a single 8 TB volume and 5% churn to be able to process the ~1.4 TB of changes in 8.6 hours. The deduplication server should be able to complete the optimization work within the scheduled 12-hour night window.

Example 2:

We can also see that if the same server were using SATA instead of SAS ($OptimizationThroughputMB = 23 MB/s), the script would recommend 2 volumes to complete the optimization work within the same 12-hour window.

Data to optimize daily: 1365.33 GB

Hours required to optimize data: 16.885

2 volume(s) of size 4096 GB is recommended.

If a 17-hour optimization window were available for the same SATA hardware, only a single 8 TB volume would be needed.

Data to optimize daily: 1365.33 GB

Hours required to optimize data: 16.885

1 volume(s) of size 8192 GB is recommended.

  

Hyper–V (VDI and Virtualized Backup) – Method 1

As noted above, the deduplication of Hyper-V workloads is primarily characterized by the re-optimization throughput of existing VHD files. We have taken measurements of this throughput rate for a VDI reference hardware deployment running Windows Server 2012 R2.

The table below lists the measured re-optimization deduplication throughput rates for Hyper-V VDI workloads running on the VDI reference system.

Deduplication throughput rates for VHD files (Hyper-V VDI workload) 2

Storage Spaces configuration (SSD + HDD) 1, Hyper-V on Windows Server 2012 R2:
  Re-optimization (background mode) of VHD file:   ~200 MB/s
  Re-optimization (throughput mode) of VHD file:   ~300 MB/s

1 Using a VDI reference hardware deployment (with JBODs) detailed here

2 Note that these rates are much larger than those listed for processing new file data for the general-purpose file server scenario in the previous section. This is not because the actual deduplication operation is faster, but rather because the full size of the file is counted when calculating the rates, and for VHD files in this scenario only a small percentage of the data is new.

Note from the table that the throughput rates will typically differ depending on the scheduling mode chosen for deduplication. When the "BackgroundModeOptimization" job schedule is chosen, the optimization jobs are run at low priority with a smaller memory allocation. When the "ThroughputModeOptimization" job schedule is chosen, the optimization jobs are run at normal priority with a larger memory allocation. (For more information on configuring deduplication, refer to Install and Configure Data Deduplication on Microsoft TechNet.)
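
As an illustration of the throughput-mode approach (a sketch only; the schedule name, start time, and resource settings below are assumptions, not part of any reference configuration), a nightly 12-hour optimization window could be created like this:

# Create a nightly 12-hour optimization window that runs at normal priority with a larger memory allocation
New-DedupSchedule -Name "NightlyThroughputOptimization" -Type Optimization -Start 21:00 -DurationHours 12 -Days Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday -Memory 50 -Priority Normal

Server Manager's deduplication schedule settings also expose an equivalent throughput optimization option if you prefer to configure this in the UI.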

Typical churn rates for Hyper-V VDI workloads are roughly 5-10%, which is reflected in the deduplication throughput rates listed. If you expect more or less churn, you can scale these values accordingly to estimate the impact on recommended volume size (the processing rate increases with less churn and decreases with more churn).

Given the typical optimization throughput in the table, administrators can estimate if deduplication can keep up with their needs by using the following script to calculate a volume size recommendation.

# Hyper-V VDI workload volume size estimation
#
# TotalVolumeSizeGB              = total size in GB of all volumes that host data to be deduplicated
# VHDOptimizationThroughputMB    = measured/estimated optimization throughput for VHD files in MB/s
# DailyOptimizationWindowHours   = 24 hours for background mode deduplication, or daily schedule length for throughput optimization
# DeduplicationSavingsPercentage = measured/estimated deduplication savings percentage (0.00 - 1.00)
# FreeSpacePercentage            = it is recommended to always leave some free space on the volumes, such as 10% or twice the expected churn

write-host "Hyper-V workload volume size estimation"

$TotalVolumeSizeGB = Read-Host 'Total Volume Size (in GB)'
$VHDOptimizationThroughputMB = Read-Host 'Optimization Throughput of VHD file (in MB/s)'
$DailyOptimizationWindowHours = Read-Host 'Daily Optimization Window (in hours)'
$DeduplicationSavingsPercentage = Read-Host 'Deduplication Savings percentage (example 70 for 70%)'
$FreeSpacePercentage = Read-Host 'Percentage allocated free space on volume (example 10 for 10%)'

# Convert the percentage inputs to fractions
$DeduplicationSavingsPercentage = $DeduplicationSavingsPercentage/100
$FreeSpacePercentage = $FreeSpacePercentage/100

# Total logical data size (for VHDs the entire logical data set is re-optimized)
$DataLogicalSizeGB = $TotalVolumeSizeGB * (1 - $FreeSpacePercentage) / (1 - $DeduplicationSavingsPercentage)

# Time required to re-optimize that data (GB to MB via *1024, seconds to hours via /3600)
$OptimizationTimeHours = ($DataLogicalSizeGB / $VHDOptimizationThroughputMB) * 1024 / 3600

# Number of volumes required to fit within the optimization window
$VolumeCount = [System.Math]::Ceiling($OptimizationTimeHours / $DailyOptimizationWindowHours)

# Recommended volume size
$VolumeSize = $TotalVolumeSizeGB / $VolumeCount

write-host
write-host "Data to optimize daily: $DataLogicalSizeGB GB"
write-host "Hours required to optimize data: $OptimizationTimeHours"
write-host "$VolumeCount volume(s) of size $VolumeSize GB is recommended"
write-host

Example 3:

Assume a Hyper-V VDI configuration with an 8 TB Storage Spaces volume is running Windows Server 2012 R2 with deduplication enabled, and that deduplication optimization is scheduled to run in throughput mode at night for 12 hours. From the Server Manager UI or the Get-DedupVolume cmdlet, the admin sees deduplication is reporting 70% savings.

Using the table above, we get the typical optimization throughput for VHD files of 300 MB/s.

After plugging the scenario's known values into the script:

PS C:\deduptest> .\calculate-vdi.ps1

Hyper-V workload volume size estimation

Total Volume Size (in GB): 8192

Optimization Throughput (in MB/s): 300

Daily Optimization Window (in hours): 12

Deduplication Savings percentage (example 70 for 70%): 70

Percentage allocated free space on volume (example 10 for 10%): 10

the calculation script outputs:

Data to optimize daily: 27306.67 GB

Hours required to optimize data: 25.89

3 volume(s) of size 2730.67 GB is recommended

So, we can expect a server with a single 8 TB volume to need about 25.9 hours to process the ~27 TB of VHD files, which does not fit the window. Using 3 smaller 2.73 TB volumes instead, the deduplication server should be able to complete the optimization work within the scheduled 12-hour night window.

  

Estimating Deduplication Volume Size - Method 2 (More accurate but more work)

General-Purpose File Server Workloads – Method 2

Administrators can get an even better estimate of proper volume sizing for the deduplication server by measuring the specific server's actual optimization throughput rate. A slightly different method is detailed later for the VDI workload.

This can be done by determining how long it takes to optimize a typical piece of data for a general-purpose file server workload:

  • Copy a reasonable chunk of your actual data (say a folder with ~10 GB of files) onto a deduplication-enabled test volume with similar performance characteristics to the production volume and determine its size.

    $dataSize = (Get-ChildItem -recurse | Measure-Object -property length -sum).sum / 1MB

  • Set the Minimum File Age Days for deduplication to 0 in order to immediately optimize the sample workload data for the test.

    Set-DedupVolume <volume> -MinimumFileAgeDays 0

  • Start a timed optimization job in the appropriate mode
    • "Background Mode" when optimization will run continuously in parallel with other activities on the server

      $totalTime = Measure-Command {Start-DedupJob -Type Optimization -Volume <volume> -InputOutputThrottleLevel Low -Wait}

    • "Throughput Mode" when optimization will happen during a dedicated window where it can consume more resources (like during non-business hours at night)

      $totalTime = Measure-Command {Start-DedupJob -Type Optimization -Volume <volume> -Wait}

  • Calculate the optimization throughput achieved:

    $OptimizationThroughputMB = $dataSize/$totalTime.TotalSeconds

  • Use this more accurate value for $OptimizationThroughputMB with the previous calculation script in "General Purpose File Server – Method 1" above. (A consolidated sketch of these steps appears after the note below.)

Note that this process effectively estimates how fast the specific server can optimize the dedup workload data for general-purpose file server workloads where the first time optimization rate is the same as the re-optimization rate. However, administrators will still want to monitor the deduplication savings to ensure that the expected estimates match the actual results.
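
Strung together, the steps above look roughly like this (a sketch that assumes E: is the deduplication-enabled test volume and E:\SampleData holds the copied sample; substitute your own volume and path):

# Size of the sample data, in MB
$dataSize = (Get-ChildItem E:\SampleData -Recurse | Measure-Object -Property Length -Sum).Sum / 1MB

# Allow the sample to be optimized immediately instead of waiting out the file-age policy
Set-DedupVolume E: -MinimumFileAgeDays 0

# Time a throughput-mode optimization job (add -InputOutputThrottleLevel Low for the background-mode variant)
$totalTime = Measure-Command {Start-DedupJob -Type Optimization -Volume E: -Wait}

# Measured optimization throughput in MB/s, ready to plug into the Method 1 script
$OptimizationThroughputMB = $dataSize / $totalTime.TotalSeconds
$OptimizationThroughputMB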

Hyper-V (VDI) Workloads – Method 2

For VDI workloads, we cannot use the previous general-purpose file server method of copying a chunk of data and timing how long it takes to optimize. Instead, we need to concentrate on finding the recurring re-optimization rate (instead of the one-time initial optimization rate) for the VHD files. When deduplication is configured for "VDI mode", re-optimization of VHDs skips certain VM files and treats changed parts of the VHD differently than when running in general-purpose file server mode.

The optimization throughput rate can be obtained from the application and service logs as follows:

  1. Prior to taking measurements, allow all VMs to be optimized for the first time and to run normally for a few days in steady state.
  2. From the event logs, under Applications and Services Logs > Microsoft > Windows > Deduplication > Operational, get the optimization throughput rate from event 6153, "Job Throughput" (see the query sketch after this list).
  3. Use this value as a more accurate parameter value for $VHDOptimizationThroughputMB in the script for "Hyper–V (VDI and virtualized backup) – Method 1" above.
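
A minimal query along these lines pulls those events with PowerShell (this assumes the operational channel name shown below and that event 6153 carries the throughput figure in its message text; adjust to what you see in Event Viewer):

# Show recent "Job Throughput" (event 6153) entries from the Deduplication operational log
Get-WinEvent -FilterHashtable @{LogName='Microsoft-Windows-Deduplication/Operational'; Id=6153} -MaxEvents 10 | Select-Object TimeCreated, Message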

Conclusion

The question "How big can I make my deduplicated volumes?" is deceptively simple as it requires understanding how much and how frequently the data on the volume changes and on the data access throughput rates of the disk storage subsystem. After either checking your existing deduplication configuration or when planning for a new system, you can use the methods described above to determine appropriate values for your expected data churn and storage subsystem performance and to then calculate the estimated maximum volume sizes.

Microsoft IFS PlugFest 27 - Calling Filter Driver Developers


We are pleased to inform you that IFS Plugfest 27 has been scheduled. Here are some preliminary event details:

 

When

Monday, March 30th to Friday, April 3rd, 2015. The event begins at 9am and ends at 6pm each day, except for Friday when it ends at 3pm.

 

Where

Building 37, rooms 1701-1717, Microsoft Campus, Redmond, Washington.

 

Audience

ISVs (Independent Software Vendors) and developers writing file system filter drivers and/or network filter drivers for Windows.

  

Cost

Free – there is no cost for this event.

 

Goal

  • Compatibility testing with Windows 10 and with other file system and network filter drivers
  • Ensuring smooth upgrades from Windows 7 and later to Windows 10

 

Benefits

  • The opportunity to test products extensively for interoperability with other vendors' products and with Microsoft products. This has traditionally been a great way to understand interoperability scenarios and flush out any interoperability-related bugs.
  • Talks and informative sessions organized by the File System Filters & Network Filter team about topics that affect the filter driver community.
  • Opportunities to meet with the file system, network, and cluster teams at Microsoft and get answers to technical questions.

 

Registration

To register, fill in the Registration Form by January 30th, 2015. We will follow up through email to confirm the registration. Due to constraints in space and resources at this Plugfest, ISVs are required to limit their participation to a maximum of two persons representing a product to be tested for interoperability issues. There will be no exceptions to this rule, so please plan for the event accordingly. Please look for messages from fsfcomm@microsoft.com and msrevents@microsoft.com for registration confirmation.

 

More information on Filter Drivers can be found here.

Microsoft File System Filter team.

------

Blog Author – Bhanu Prakash
Team - Windows File Systems and Filters Team

Work Folders for iOS – iPad App Release


 

We are happy to announce that an iPad app for Work Folders has been released into the Apple AppStore® and is available as a free download.


Overview

Work Folders is a Windows Server feature that allows individual employees to access their files securely from inside and outside the corporate environment. This app connects to it and enables file access on an Apple iPad. Work Folders enables this while allowing the organization’s IT department to fully secure that data.

This app for iPad features an intuitive UI, selective sync, end-to-end encryption and in-app file viewing.

image 1 - file browser

 

Work Folders App on iOS - Features

  • Pin files for offline viewing.

  • Saves iPad storage space by showing all available files while locally storing and keeping in sync only the files you care about.

  • Files are encrypted at all times, both on the wire and at rest on the device.

  • Access to the app is protected by an app passcode – keeping others out even if the iPad is left unlocked and unattended.

  • Allows for DIGEST and Active Directory Federation Services (ADFS) authentication mechanisms, including multi-factor authentication.

  • View your files right inside the app with a built-in viewer for many file types.

  • Open files in other apps that might be specialized to work with a certain file type.


 

image 2 - integrated file viewer

 

image 3 - settings

 

image 4 - some file browser features

 

 

Blogs and Links

If you’re interested in learning more about Work Folders, here are some great resources:

Nir Ben Zvi introduced Work Folders on Windows Server 2012 R2.

Work Folders for iPad help

-      Work Folders for Windows 7 SP1: Check out this post by Jian Yan on the File Cabinet blog.

-      Roiy Zysman posted a great list of Work Folders resources in this blog.

-      See this Q&A With Fabian Uhse, Program Manager for Work Folders in Windows Server 2012 R2

-      Also, check out these posts about how to setup a Work Folders test lab, certificate management, and tips on running Work Folders on Windows Failover Clusters.

-      Using Work Folders with IIS websites or the Windows Server Essentials Role (Resolving Port Conflicts)

   

 

All the goods

 

Introduction and Getting Started

-       Introducing Work Folders on Windows Server 2012 R2

-       Work Folders Overview on TechNet

-       Designing a Work Folders Implementation on TechNet

-       Deploying Work Folders on TechNet

-       Work folders FAQ (Targeted for Work Folders end users)

-       Work Folders Q&A

-       Work Folders Powershell Cmdlets

-       Work Folders Test Lab Deployment

-       Windows Storage Server 2012 R2 — Work Folders

-       Work Folders for Windows 7

 

 

Advanced Work Folders Deployment and Management

-       Work Folders interoperability with other file server technologies

-       Performance Considerations for Work Folders Deployments

-       Windows Server 2012 R2 – Resolving Port Conflict with IIS Websites and Work Folders

-       A new user attribute for Work Folders server Url

-       Work Folders Certificate Management

-       Work Folders on Clusters

-       Monitoring Windows Server 2012 R2 Work Folders Deployments.

-       Deploying Work Folders with AD FS and Web Application Proxy (WAP)

-       Deploying Windows Server 2012 R2 Work Folders in a Virtual Machine in Windows Azure

 

  

Videos

-       Windows Server Work Folders Overview: My Corporate Data on All of My Devices

-       Windows Server Work Folders – a Deep Dive into the New Windows Server Data Sync Solution

-       Work Folders on Channel 9

-       Work Folders iPad reveal – TechEd Europe 2014 (* in German)

Tell us what you REALLY think of Storage


Hi folks, Ned here again to lead with my chin:

Are you interested in shaping or influencing the future of storage solutions? The Windows Server team is looking for feedback to help define the next generation of storage. You do not have to use Microsoft storage solutions to respond to this survey or participate in feedback activities. Feedback activities may include surveys, phone interviews or focus groups. If you'd like to participate, please complete this brief 12-question survey.

Pow, right in the kissah!

- Ned "your punching bag "Pyle

Microsoft Ignite Storage and HA sessions: Come for the swag, stay for the sessions


Hi folks, Ned here again. If you have not heard, Microsoft Ignite is coming. If you’re thinking about joining us or already planning your must-see breakouts, here’s a list of the Windows Server vNext Storage and High Availability sessions presented by Microsoft. Truthbombs will be dropped, smackdown will be laid, surprises will be sprung, and announcements will be… eh… announced.

I’m not kidding about the loot, either. Some lucky individuals attending Storage Replica sessions will get limited edition “Kalamity Jake the Disaster Ape” stickers. Find me on the expo floor for my limited supply of impossible-to-get Ninja Cat Unicorn vinyl. And Jose has some of those ever-popular, full-size Windows Server 2012 R2 Private Cloud Storage and Virtualization posters.

image
Give your colleagues sticker envy with one of these on your laptop.

But enough about freebies. To the boothmobile!

Enabling New On-premises Scale-Out File Server with Direct-Attached Storage
Claus Joergensen and Michael Gray

Have you ever wanted to build a Scale-Out File Server using shared nothing Direct Attached Storage (DAS) hardware like SATA or NVMe disks? We cover advances in Microsoft Software Defined Storage that enable service providers to build Scale-Out File Servers using Storage Spaces with shared nothing DAS hardware.

Hyper-V Storage Performance with Storage Quality of Service
Jose Barreto and Senthil Rajaram

Windows Server vNext allows you to centrally monitor and manage performance for Hyper-V workloads using Scale-Out File Servers. Learn how to monitor storage performance from a customer, Hyper-V, and storage admin’s viewpoint, then author effective policies to deliver the performance your customers need.

Stretching Failover Clusters and Using Storage Replica in Windows Server vNext
Elden Christensen and Ned Pyle

In this session we discuss the deployment considerations of taking a Windows Server Failover Cluster and stretching across sites to achieve disaster recovery. This session discusses the networking, storage, and quorum model considerations. This session also discusses new enhancements coming in vNext to enable multi-site clusters.

Exploring Storage Replica in Windows Server vNext
Ned Pyle

Delivering business continuity involves more than just high availability, it means disaster preparedness. In this session, we discuss the new Storage Replica feature, including scenarios, architecture, requirements, and demos. Along with our new stretch cluster option, it also covers use of Storage Replica in cluster-to-cluster and non-clustered scenarios. And we have swag!

Deploying Highly Scalable Clusters with Dell Servers and Microsoft Storage
Claus Joergensen, Shai Ofek and Syama Poluri

In this session, learn how to deploy and manage highly scalable storage clusters using Microsoft next generation software defined Storage technology and Dell PowerEdge Servers/PowerVault JBODs. Review sizing guidelines, best practice recommendations, with exceptional $/IOPS and $/GB cost efficiency for best-in-class application performance.

Spaces-Based, Software-Defined Storage: Design and Configuration Best Practices
Allen Stewart and Joshua Adams

Going well beyond a feature walkthrough, this session delves into the nuances and complexities of the spaces-based SDS design. Starting with the hardware selection and continuing up the stack, this session empowers you to successfully design, deploy, and configure a storage solution based completely on Windows Server 2012 R2 and proven best practices. Examples galore!

Upgrading your private cloud to Windows Server 2012 R2 and beyond!
Ben Armstrong and Rob Hindman

We are moving fast, and want to help you to keep on top of the latest technology! This session covers the features and capabilities that will enable you to upgrade to Windows Server 2012 R2 and to Windows Server vNext with the least disruption. Understand cluster role migration, cross version live migration, rolling upgrades, and more.

Platform Vision & Strategy (4 of 7): Storage Overview
Jose Barreto

This is the fourth in a series of 5 datacenter platform overview sessions. We will walk through the Microsoft Software Defined Storage journey – what customers are telling us and how cloud cost/scale inflection points have impacted our investment decisions. We'll also explore how you can take advantage of consistent Azure scenarios for your storage options.

Managing and Securing the Fabric with Microsoft System Center Virtual Machine Manager
John Messec and John Patterson

Learn how to deploy, manage, upgrade, and secure large numbers of Hyper-V hosts and workloads using Virtual Machine Manager. Learn how to use mixed-mode clustering for updates, how to create a guarded fabric and run secure workloads on it, and more. We share operational information, best practices, and step-by-step guidance for some of the new scenarios.

Overview of the Microsoft Cloud Platform System
Vijay Tewari and Wassim Fayed

With the Microsoft Cloud Platform System, we are sharing our cloud design learnings from Azure datacenters, so customers can deploy and operate a cloud solution with Windows Server, Microsoft System Center and the Windows Azure Pack. This solution provides Infrastructure-as-a-Service and Platform-as-a-Service solutions for enterprises and service providers.

Architectural Deep Dive into the Microsoft Cloud Platform System
James Pinkerton and Spencer Shepler

The Microsoft Cloud Platform System has an automated framework that keeps the entire stamp current, from software to firmware to drivers, across all Windows Server, Microsoft System Center, Windows Azure Pack, SQL Server and OEM/IHV components, and prevents disruptions to tenant and management workloads. This session covers the complete architecture of CPS and deployment in your datacenter.

Operating the Microsoft Cloud Platform System
Efi Emmanouil and Justin Incarnato

Come learn about how customers will operate the Microsoft Cloud Platform System (CPS) – the Azure-consistent private cloud solution for enterprise or service provider environments. In this session, we show you the investments we have made to dramatically reduce the total cost of ownership of running the Microsoft Cloud Platform System (CPS).

Cloud Integrated Backup with Microsoft System Center and Azure Backup
Shreesh Dubey

This session will cover 4 topics: leveraging Azure for branch office backup and eliminating tapes for LTR, backup for PaaS and IaaS workloads running in Azure, heterogeneous support in System Center Data Protection Manager (such as protecting Oracle on Linux and SQL on VMware), and the latest features in SCDPM and Azure Backup.

[Ned: this includes Deduplication in Windows Server vNext.]

For all Ignite sessions, visit http://ignite.microsoft.com/Sessions

See you in Chicago!

- Ned “coat your laptop” Pyle

Work Folders for iOS - iPhone Release


 

We are happy to announce that an iPhone app for Work Folders has been released into the Apple AppStore® and is available as a free download.

- There is also a Work Folders app for iPad.

 

Overview

Work Folders is a Windows Server feature that allows individual employees to access their files securely from inside and outside the corporate environment. This app connects to it and enables file access on an Apple iPhone and iPad. Work Folders enables this while allowing the organization’s IT department to fully secure that data.

This app for iOS features an intuitive UI, selective sync, end-to-end encryption, search and in-app file viewing.
It also integrates well with Windows Intune to fully complete the most important mobile device management scenarios around corporate data on mobile devices.

 

 

image 1 - file browser 

 

Work Folders App on iOS - Features

  • Pin files for offline viewing.

  • Saves iPhone storage space by showing all available files while locally storing and keeping in sync only the files you care about.

  • Files are encrypted at all times, both on the wire and at rest on the device.

  • Access to the app is protected by an app passcode – keeping others out even if the iPhone is left unlocked and unattended.

  • Allows for DIGEST and Active Directory Federation Services (ADFS) authentication mechanisms, including multi-factor authentication.

  • Search for files and folders

  • View your files right inside the app with a built-in viewer for many file types.
  • Open files in other apps that might be specialized to work with a certain file type.

  • Further integration with Windows Intune allows for a pleasant user experience when a managed app is required to have an App-Pin:
    • IT admins can use Windows Intune to lock down this ability or the use of copy/paste outside of managed apps.
    • In this case, the Work Folders App-Pin experience is substituted with a consistent App-Pin across all managed apps on the device.

  • Remote wipe can now target the Work Folders file content without having to remove the app itself or even reset the device. This drastically improves the user experience when such a command is issued from the Intune portal for this user/device.

 

 

 
image 2 - integrated file viewer

 

 

 
image 3 - settings 

 


image 4 - search 

 

Blogs and Links

If you’re interested in learning more about Work Folders, here are some great resources:

-      Nir Ben Zvi introduced Work Folders on Windows Server 2012 R2.

-      Work Folders for iOS help

-      Work Folders for Windows 7 SP1: Check out this post by Jian Yan on the File Cabinet blog.

-      Roiy Zysman posted a great list of Work Folders resources in this blog.

-      See this Q&A With Fabian Uhse, Program Manager for Work Folders in Windows Server 2012 R2

-      Also, check out these posts about how to setup a Work Folders test lab, certificate management,
       and tips on running Work Folders on Windows Failover Clusters.

-      Using Work Folders with IIS websites or the Windows Server Essentials Role (Resolving Port Conflicts)

   

 

All the goods

 

Introduction and Getting Started

-       Introducing Work Folders on Windows Server 2012 R2

-       Work Folders Overview on TechNet

-       Designing a Work Folders Implementation on TechNet

-       Deploying Work Folders on TechNet

-       Work folders FAQ (Targeted for Work Folders end users)

-       Work Folders Q&A

-       Work Folders Powershell Cmdlets

-       Work Folders Test Lab Deployment

-       Windows Storage Server 2012 R2 — Work Folders

-       Work Folders for Windows 7

 

 

Advanced Work Folders Deployment and Management

-       Work Folders interoperability with other file server technologies

-       Performance Considerations for Work Folders Deployments

-       Windows Server 2012 R2 – Resolving Port Conflict with IIS Websites and Work Folders

-       A new user attribute for Work Folders server Url

-       Work Folders Certificate Management

-       Work Folders on Clusters

-       Monitoring Windows Server 2012 R2 Work Folders Deployments.

-       Deploying Work Folders with AD FS and Web Application Proxy (WAP)

-       Deploying Windows Server 2012 R2 Work Folders in a Virtual Machine in Windows Azure

 

  

Videos

-       Windows Server Work Folders Overview: My Corporate Data on All of My Devices

-       Windows Server Work Folders – a Deep Dive into the New Windows Server Data Sync Solution

-       Work Folders on Channel 9

-       Work Folders iPad reveal – TechEd Europe 2014 (in German)

-    LATEST:  Work Folders on the "Edge Show"  (iPad + iPhone video, English)

Finally, a Progress Bar for DFSR Cloning


Heya folks, Ned here again. I came across a great post at Briantist.com today: a script to create a progress bar for DFSR Database Cloning. We never added a proper GUI for this operation and just relied on people reading the event log or waiting patiently – but Brian had other ideas:

Get Progress on DFS Replication Database Cloning Import

Nifty

Slick and simple, it grovels the DFSR event log progress to build a nice visualization. It even works remotely from your client. Obviously, if cloning works as well as I claim, you won’t need a progress bar – it will be over too fast! Riiiiiiighht.
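
If you just want a rough peek at what the script is reading, you can query the same log yourself (DC01 below is only a placeholder for one of your DFSR members; parsing the cloning events into a progress bar is the part Brian's script does for you):

# Show the most recent DFS Replication events from a remote server
Get-WinEvent -ComputerName DC01 -LogName "DFS Replication" -MaxEvents 25 | Select-Object TimeCreated, Id, Message | Format-Table -Wrap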

Great work, Brian. Check out more of his work or follow him on The Twitters.

- Ned “now that’s progress” Pyle


Data Deduplication in Windows Server Technical Preview 2


What’s New in Data Deduplication?

If I had to pick two words to sum up the major changes for Data Deduplication coming in the next version of Windows Server, they would be “scale” and “performance”. In this posting, I’ll explain what these changes are and provide some recommendations of what to evaluate in Windows Server Technical Preview 2.

In Windows Server 2016, we are making major investments to enable Data Deduplication (or “dedup” for short) to more effectively scale to handle larger amounts of data. For example, customers have been telling us that they are using dedup for scenarios such as backing up all the tenant VMs for hosting businesses, with data sets ranging from hundreds of terabytes to petabytes. For these cases, they want to use larger volumes and files while still getting the great space savings results they are currently getting from Windows Server.

Dedup Improvement #1: Use the volume size you need, up to 64TB

Dedup in Windows Server 2012 R2 optimizes data using a single-threaded job and I/O queue for each volume. It works great, but you do have to be careful not to make the volumes so big that the dedup processing can’t keep up with the rate of data changes, or “churn”. In a previous blog posting (Sizing Volumes for Data Deduplication in Windows Server), we explained in detail how to determine the right volume size for your workload, and typically we have recommended keeping volume sizes under 10 TB.

That all changes in Windows Server 2016 with a full redesign of dedup optimization processing. We now run multiple threads in parallel using multiple I/O queues on a single volume, resulting in performance that was previously only possible by dividing your data into multiple, smaller volumes.

The result is that our volume guidance changes to a very simple statement: Use the volume size you need, up to 64TB.

Dedup Improvement #2: File sizes up to 1TB are good for dedup

While the current version of Windows Server supports the use of file sizes up to 1TB, files “approaching” this size are noted as “not good candidates” for dedup. The reasons have to do with how the current algorithms scale, where, for example, things like scanning for and inserting changes can slow down as the total data set increases. This has all been redesigned for Windows Server 2016 with the use of new stream map structures and improved partial file optimization, with the results being that you can go ahead and dedup files up to 1TB without worrying about them not being good candidates. These changes also improve overall optimization performance by the way, adding to the “performance” part of the story for Windows Server 2016.

Dedup Improvement #3: Virtualized backup is a new usage type

We announced support for the use of dedup with virtualized backup applications using Windows Server 2012 R2 at TechEd last November, and there has been a lot of customer interest in this scenario since then. We also published a TechNet article with the DPM Team (see Deduplicating DPM Storage) with a reference configuration that lists the specific dedup configuration settings to make the scenario optimal.

With a new release we can do more interesting things to simplify these kinds of deployments, and in Windows Server 2016 we have combined all of these dedup configuration settings into a new usage type called, as you might expect, “Backup”. This both simplifies the deployment and helps to “future proof” your configuration, since any future setting changes can be applied automatically to volumes configured with this usage type.

Suggestions for What to Check Out in Windows Server TP2

What should you try out in Windows Server TP2? Of course, we encourage you to evaluate the new version of dedup overall on your own workloads and datasets (and this applies to any deployment you may be using or interested in evaluating for dedup, including volumes for general file shares or for supporting a VDI deployment, as described in our previous blog article on Large Scale VDI Deployment).

But specifically for the new features, here are a couple of areas we think it would be great for you to try.

Volume Sizes

Try larger volume sizes, up to 64TB. This is especially interesting if you have wanted to use larger volumes in the past but were limited by the requirements for smaller volume sizes to keep up with optimization processing.

Basically the guidance for this evaluation is to only follow the first section of our previous blog article Sizing Volumes for Data Deduplication in Windows Server, “Checking Your Current Configuration”, which describes how to verify that dedup optimization is completing successfully on your volume. Use the volume size that works best for your overall storage configuration and verify that dedup is scaling as expected.
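
A quick way to keep an eye on that verification during your evaluation is a check like the following (a sketch using the standard dedup cmdlets):

# Confirm savings and that the last optimization pass completed on each volume
Get-DedupVolume | Select-Object Volume, SavingsRate, SavedSpace
Get-DedupStatus | Select-Object Volume, LastOptimizationTime, LastOptimizationResult

# See whether an optimization job is still running or queued
Get-DedupJob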

Virtualized Backup

In the TechNet article I mentioned above, Deduplicating DPM Storage, there are two changes you can make to the configuration guidance.

Change #1: Use the new “Backup” usage type to configure dedup

In the section “Plan and set up deduplicated volumes” and in the following section “Plan and set up the Windows File Server cluster”, replace all the dedup configuration commands with the single command to set the new “Backup” usage type.

Specifically, replace all these commands in the article:

# For each volume
Enable-DedupVolume -Volume <volume> -UsageType HyperV
Set-DedupVolume -Volume <volume> -MinimumFileAgeDays 0 -OptimizePartialFiles:$false

# For each cluster node
Set-ItemProperty -Path HKLM:\Cluster\Dedup -Name DeepGCInterval -Value 0xFFFFFFFF
Set-ItemProperty -Path HKLM:\Cluster\Dedup -Name HashIndexFullKeyReservationPercent -Value 70
Set-ItemProperty -Path HKLM:\Cluster\Dedup -Name EnablePriorityOptimization -Value 1

…with this one new command:

# For each volume

Enable-DedupVolume -Volume <volume> -UsageType Backup
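
To sanity-check that the new usage type took effect and applied the settings you expect (a sketch; verify the property names against the output on your build):

# Confirm the volume now reports the Backup usage type
Get-DedupVolume -Volume <volume> | Format-List Volume, UsageType, MinimumFileAgeDays, OptimizePartialFiles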

Change #2: Use the volume size you need for the DPM backup data

In the article section “Plan and set up deduplicated volumes”, a volume size of 7.2TB is specified for the volumes containing the deduplicated VHDX files containing the DPM backup data. For evaluating Windows Server TP2, the guidance is to use the volume size you need, up to 64TB. Note that you still need to follow the other configuration guidance, e.g., for configuring Storage Spaces and NTFS. But go ahead and use larger volumes as needed, up to 64TB.

Conclusion

We think that these improvements to Data Deduplication coming in Windows Server 2016 and available for you to try out in Windows Server Technical Preview 2 will give you great results as you scale up your data sizes and deploy dedup with virtualized backup solutions.

And we would love to hear your feedback and results. Please send email to dedupfeedback@microsoft.com and let us know how your evaluation goes and, of course, any questions you may have.

Thanks!

 

 

 


Have a Windows Server itch? Come scratch it!


Heya folks, Ned here again. Do you have an idea for Windows Server? Do you want to vote on future product features and scenarios floated by your peers and Microsoft? Do you like free awesomeness? Good. Go here now, tomorrow, and forever:

http://windowsserver.uservoice.com

Every piece of feedback and every vote goes directly to the Windows Server product engineering teams, without varnish or middlemen. The good, bad, and the ugly. For those interested in Storage feedback - i.e. the kind of people who visit FileCab - a shortcut:

http://windowsserver.uservoice.com/forums/295056-storage

For the ground rules, check out this brief FAQ. For more info on how voting works in UserVoice, check out this explanation.

Now go exercise your franchise!

- Ned "founding father" Pyle

 

Get your free Storage Replica Kalamity Jake desktop and phone wallpapers


Heya folks, Ned here again. It's been a week since Microsoft Ignite 2015 and those Storage Replica stickers went fast. Hundreds of them, but that wasn't enough when you're surrounded by 23,000 IT professionals hungry for laptop ornamentation.

So if you were turned away or couldn't make the show this year, I put together some Kalamity Jake wallpapers. These come in desktop and phone orientations, in a variety of resolutions and color schemes, both with and without logos. Redistribute to anyone you like. If someone tries to charge you for one of these, laugh in their face - they are free. If you want other colors or resolutions, drop me a line (and wait!). And be sure to visit us next year at Ignite, when I will have a whole new series. Collect them all! Or something.

Download all wallpapers in one convenient ZIP

Here are a few small scale samples.

  

  

The quality is exactly what you'd expect at this price from a Windows Server PM. ;-)

Have a great weekend,

- Ned "maybe I should cut off one ear?" Pyle

Patching and servicing of Windows and Linux - survey and email contact


Hi folks, Ned here again. We are studying customer patching pain points and behaviors within Linux and Windows Server environments, across operating systems and applications. If you are a stakeholder in the patching/updating process for your company and would like to share your thoughts and feedback, please take a few minutes to fill out the following survey:

https://www.surveymonkey.com/r/YYZKBS3

If you want to give us direct and deep feedback, please email us at:

patchfeed@microsoft.com

Again, we are interested in feedback and experiences from both Windows Server administrators as well as Linux sysadmins.

We look forward to hearing from you,

- Ned "here comes the spam mail" Pyle

The Storage Replica Video Series: Stretch Cluster Automatic Site Failover in Windows Server 2016 Technical Preview 2


Hi folks, Ned here again. Storage Replica has many improvements in Windows Server 2016 TP2, including new scenarios like cluster-to-cluster replication, an improved cluster wizard, and vastly superior performance. You haven’t tried SR since October? That’s alright, not everyone has time to download ISOs, build labs, and deploy eval code, just to decide if they even want to explore a new scenario. To help, I’ve created a series of SR scenario videos. They are each a few minutes long and hit the high points so you can decide your evaluation plans (and kick someone else to the curb).

To start, I will demo a disaster striking your Hyper-V Stretch Cluster running a live VM workload. An SR-enabled stretch cluster behaves much like a normal single-site cluster; you still get automatic and planned failover, administration through the graphical cluster snap-in, and most cluster role support. The big change is your cluster nodes now live in different datacenters, providing disaster protection to your high availability solution through synchronous replication. Zap a datacenter and everything automatically moves to the alternate site with zero data loss. Sweet.

So with this cumbersome blog title and its unwieldy server name firmly in hand, let’s see some video. These babies are native 1080P, so blow them up to a full window, crank the volume, and watch the magic.


Guess what I hate? The sound of my own voice…

There you go, a cluster surviving the onslaught of Jake. I mean, surviving datacenter power loss. That’s just what the government wants you to think!

I have five more videos ready to go and plenty in the pipeline – keep dropping in or watch twitter for series updates. It’s true, I finally joined social media after a twenty-year refusal. I feel greasy.

As always, sling mud at us via srfeed@microsoft.com. If you want the world on your side, muster the proletariat on the Windows Server UserVoice storage page.

Tune in tomorrow for our new test tool!

Until next time,

-    Ned “Spielberg has nothing to fear” Pyle
