Microsoft iSCSI Software Target 3.3 for Windows Server 2008 R2 available for public download
Today, Microsoft made the Microsoft iSCSI Software Target 3.3 publicly available to all users of Windows Server 2008 R2. The Microsoft iSCSI Software Target has been available for production use as part of Windows Storage Server since early 2007, and for development and test use by MSDN and TechNet subscribers since May 2009. Until now, however, there was no way to use the Microsoft iSCSI Software Target in production on a regular server running Windows Server 2008 R2. This new download offers exactly that. Get all the details, including download instructions and an FAQ, at http://blogs.technet.com/b/josebda/archive/2011/04/04/microsoft-iscsi-software-target-3-3-for-windows-server-2008-r2-available-for-public-download.aspx
FSCT test results detail the performance of Windows Server 2008 R2 File Server configurations - 23,000 users with 192 spindles
The File Server Capacity Tool (FSCT) is a free download from Microsoft that helps you determine the capacity of a specific file server configuration (running Windows or any operating system that implements the SMB or SMB2 protocols). It simulates a specific set of operations (the “Home Folders” workload) being executed by a large number of users against the file server, confirming the ability of that file server to perform the specified operations in a timely fashion. It makes it possible to verify, for instance, whether a specific file server configuration can handle 10,000 users. In case you’re not familiar with FSCT’s “Home Folders” workload, it simulates a standard user’s workload based on Microsoft Office, Windows Explorer, and command-line usage when the file server is the location of the user’s home directory. We frequently use FSCT internally at Microsoft. In fact, before being released publicly, the tool was used to verify whether a specific change to the Windows code had any significant performance impact in a file server scenario. We continue to use FSCT for that purpose today.
Recently, the File Server Team released a document with results from a series of FSCT tests. These tests were performed to quantify the file server performance difference between Windows Storage Server 2008 (based on Windows Server 2008) and Windows Server 2008 R2. It was also an exercise in analyzing the capacity (in terms of FSCT “Home Folders” users) of some common file server configurations using between 24 and 192 disks. The 192-spindle configuration was able to handle 23,000 FSCT users running the Home Folders workload. Check the blog post at http://blogs.technet.com/b/josebda/archive/2011/04/08/fsct-test-results-detail-the-performance-of-windows-server-2008-r2-file-server-configurations-23-000-users-with-192-spindles.aspx for further details and a link to the document in the Microsoft Download Center.
Using 4K sector and Advanced Format drives in Windows - Hotfix and support info for Windows Server 2008 R2 and Windows 7
If you work with storage, you have probably already heard about “4K sector drives”, “Advanced Format drives” and “512e drives”. These new 4K sector drives abandon the traditional use of 512 bytes per sector in favor of a new structure that uses 4096 bytes. The migration to the new format is eased by the use of 4K drives that emulate the old format, known as “512 Emulation Drives”, “512e Drives” or “Advanced Format Drives”.
Native 4K sector drives are currently not supported with Windows. However, 512e drives (or Advanced Format Drives) are supported with recent versions of Windows, provided that you follow the guidance in the following support article: http://support.microsoft.com/kb/2510009. There are specific requirements to be met and specific details for different Microsoft applications like Hyper-V, SQL Server and Exchange Server.
For Windows 7 and Windows Server 2008 R2, the KB article above mentions the requirement to install a specific hotfix described at http://support.microsoft.com/kb/982018. Please note that most of this fix is part of Windows 7 Service Pack 1 (SP1) or Windows Server 2008 R2 SP1, except for updates to the FSUTIL tool.
For you developers, head on over to MSDN to read the nitty-gritty details of this storage transition and how it may impact your applications. Details are published at http://msdn.microsoft.com/en-us/library/hh182553.aspx.
If you’re interested in these new 4K sector drives, you might also want to look at these other links:
- http://blogs.msdn.com/b/psssql/archive/2011/01/13/sql-server-new-drives-use-4k-sector-size.aspx
- http://www.zdnet.com/blog/storage/are-you-ready-for-4k-sector-drives/731
- http://en.wikipedia.org/wiki/Advanced_Format
Note: The updated version of FSUTIL is available as a download from the support KB page and, since 4/26/2011, via Windows Update, labeled as "Update for Windows 7 (KB982018)".
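As a quick sanity check, the updated FSUTIL can report a drive's sector sizes. A minimal sketch (the drive letter is just an example):

# Reports NTFS details, including "Bytes Per Sector" and "Bytes Per Physical Sector"
fsutil fsinfo ntfsinfo C:

On a 512e (Advanced Format) drive you should see 512 bytes per logical sector and 4096 bytes per physical sector; on a system without the update, the physical sector size may not be reported correctly.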
File Server Team sessions at TechEd 2011 this week
If you're attending TechEd 2011 this week, here are sessions from the File Server team:
WSV313 - Microsoft iSCSI Software Target 3.3 for Application Storage, Diskless Boot, and More!
Speaker: Jian (Jane) Yan
Tuesday, May 17 at 5:00 PM, room: B101
http://northamerica.msteched.com/topic/details/WSV313
WSV317 - Windows Server 2008 R2 File Services Consolidation: Technology Update
Speaker: Jose Barreto
Wednesday, May 18 at 10:15 AM, room: Georgia Ballroom 3
http://northamerica.msteched.com/topic/details/WSV317
WSV317-R - Windows Server 2008 R2 File Services Consolidation: Technology Update (repeat)
Speaker: Jose Barreto
Thursday, May 19 at 10:15 AM, room: Georgia Ballroom 2
http://northamerica.msteched.com/topic/details/WSV317-R
WSV318 - Windows Storage Server 2008 R2 Technical Overview
Speakers: Joel Garcia, Scott M. Johnson
Wednesday, May 18 at 3:15 PM, room: B309
http://northamerica.msteched.com/topic/details/WSV318
WSV323 - Information Governance for Unstructured Data Using the Data Classification Toolkit for Windows Server 2008 R2
Speakers: Gunjan Jain, Nir Ben Zvi
Wednesday, May 18 at 10:15 AM, room: C206
http://northamerica.msteched.com/topic/details/WSV323
Also make sure to visit the Windows Server booth for File Services (WSV 13).
TechEd 2011 demo install step-by-step (Hyper-V, AD, DNS, iSCSI Target, File Server Cluster, SQL Server over SMB2)
We have a new blog post out that explains the demo setup used in the TechEd 2011 presentation about Windows Server 2008 R2 File Services Consolidation.
In the post, you get step-by-step instructions on how to reproduce the environment used in the presentation's demo. This is a great way to experiment with a fairly large set of Microsoft technologies, including:
- Windows Server 2008 R2
- Hyper-V
- Networking
- Domain Name Services (DNS)
- Active Directory Domain Services (AD-DS)
- iSCSI Software Target 3.3
- iSCSI Initiator
- File Server (SMB2)
- Failover Clustering (WSFC)
- SQL Server 2008 R2
This is a long post with dozens of screenshots and it packs a lot of information. Check it out at
Microsoft IT Uses File Classification Infrastructure to Help Secure Personally Identifiable Information
I'd like to bring to your attention a recently published Technical Case Study that showcases how Microsoft IT is using FCI.
Learn how Microsoft Information Technology (IT) used File Classification Infrastructure (FCI) to create a solution to automatically classify, manage, and protect sensitive data, including personally identifiable information and financial information. Using the new FCI-based solution, Microsoft IT can obtain file-level details about content sensitivity while reducing misclassification of personally identifiable information.
http://www.microsoft.com/downloads/en/details.aspx?FamilyID=bee97542-c6c6-45b9-88c4-3abfdbb92e38
Data Classification Toolkit for Windows Server 2008 R2 Beta Now Available
Consistently identify, classify, and protect data across all file servers in your organization
The Solution Accelerators team is pleased to announce the Data Classification Toolkit for Windows Server 2008 R2 Beta.
Join the Beta Program (Windows Live ID required)
Designed to help reduce the cost and complexity of data compliance, the Data Classification Toolkit for Windows Server 2008 R2 helps organizations consistently identify, classify, and protect data across multiple file servers. Using out-of-the-box classification knowledge, this tool gives organizations visibility into how data is distributed across their file servers to help them apply the right policies, protect critical data, and identify potential data storage efficiencies.
Want to learn more about IT Governance and Compliance? Visit the IT GRC home page, and check out all the free content the IT GRC team has to offer!
Data Classification Toolkit for Windows Server 2008 R2 - Now Available
Identify, classify, and protect data across targeted file servers in your organization
The Solution Accelerators team is pleased to announce that the Data Classification Toolkit for Windows Server 2008 R2 is now available for download.
The Data Classification Toolkit for Windows Server 2008 R2 is designed to help enable an organization to identify, classify, and protect data on their file servers. The out-of-the-box classification and rule examples help organizations build and deploy their policies to protect critical information in a cost-effective manner.
New post on Windows Server “8” Beta Scale-Out File Server for SQL Server 2012 (with Step-by-step instructions)
We have a new post that covers Windows Server “8” Beta and File Servers. It provides step-by-step instructions for deploying an evaluation version of SQL Server 2012 (just released last week) using a Windows Server “8” Beta Scale-Out File Server cluster. The goal of this post is to help you test or learn the product, so it uses only VMs on a single computer with 8GB of RAM. It covers installing everything from scratch, including Hyper-V, DNS, AD, iSCSI, File Server, Cluster and SQL Server, and provides both PowerShell and GUI instructions for most steps, along with quite a few screenshots.
Here’s an outline of the post, which is many pages long:
1. Introduction
1.1. Overview
1.2. Hardware
1.3. Software
1.4. Notes and disclaimers
2. Install Windows Server “8” Beta
2.1. Preparations
2.2. Install the OS
2.3. Rename the computer
2.4. Enable Remote Desktop
3. Configure the Hyper-V host
3.1. Install the Hyper-V role to the server
3.2. Create the VM switches
3.3. Rename the network adapters
3.4. Assign static IP addresses for the Hyper-V host
4. Create the base VM
4.1. Preparations
4.2. Create a Base VM
4.3. Install Windows Server “8” Beta on the VM
4.4. Sysprep the VM
4.5. Remove the base VM
5. Configure the 5 VMs
5.1. Create 5 new differencing VHDs using the Base VHD
5.2. Create 5 similarly configured VMs
5.3. Start the 5 VMs
5.4. Complete the mini-setup for the 5 VMs
5.5. Change the computer name for each VM
5.6. For each VM, configure the networks
5.7. Review VM name and network configuration
6. Configure DNS and Active Directory
6.1. Install DNS and Active Directory Domain Services
6.2. Configure Active Directory
6.3. Join the other VMs to the domain
6.4. Create the SQL Service account
7. Configure iSCSI
7.1. Add the iSCSI Software Target
7.2. Create the LUNs and Target
7.3. Configure the iSCSI Initiators
7.4. Configure the disks
8. Configure the File Server
8.1. Install the required roles and features
8.2. Validate the Failover Cluster Configuration
8.3. Create a Failover Cluster
8.4. Configure the Cluster Networks
8.5. Add data disks to Cluster Shared Volumes (CSV)
8.6. Create the Scale-Out File Server
8.7. Create the folders and shares
9. Configure the SQL Server
9.1. Mount the SQL Server ISO file
9.2. Run SQL Server Setup
9.3. Create a database using the clustered file share
10. Verify SMB features
10.1. Verify that SMB Multichannel is working
10.2. Query the SMB Sessions and Open Files
10.3. Transparently move SQL Client between file server nodes
10.4. Survive the loss of a client NIC
11. Shutdown, startup, and final install notes
12. Conclusion
You can read the entire post at http://blogs.technet.com/b/josebda/archive/2012/03/15/windows-server-8-beta-scale-out-file-server-for-sql-server-2012-step-by-step-installation.aspx
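If you just want a taste of the file server portion (steps 8.3 through 8.7), the corresponding Windows PowerShell cmdlets look roughly like this. This is a minimal sketch, not the full procedure from the post, and the cluster, node, folder and account names are hypothetical:

# Create the failover cluster from two file server nodes
New-Cluster -Name FSCluster -Node FS1, FS2

# Add a data disk to Cluster Shared Volumes
Add-ClusterSharedVolume -Name "Cluster Disk 1"

# Create the Scale-Out File Server role and a share for the SQL Server databases
Add-ClusterScaleOutFileServerRole -Name FSSO
New-Item -Path C:\ClusterStorage\Volume1\Data -ItemType Directory
New-SmbShare -Name Data -Path C:\ClusterStorage\Volume1\Data -FullAccess CONTOSO\SQLService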
New post on Windows Server Blog - Windows Server “8” – Taking Server Application Storage to Windows File Shares
Windows Server "8" Beta just became available and it includes a lot of new features in File and Storage Services. The Windows Server blog just released a post detailing a lot of the new File Server capabilities and scenarios included in the new release.
Here's an outline of the post:
- Background
- Enabling server application storage on file shares
- Continuous availability
- Performance
- Scalability
- Data protection
- Features overview
- SMB Transparent Failover
- SMB Multichannel
- SMB Direct
- SMB Scale-Out
- VSS for SMB File Shares
- SMB-specific Windows PowerShell cmdlets
- SMB Performance Counters
- Performance
- Deployment modes
- Single-node File Server
- Dual-node File Server
- Multi-node File Server
- Conclusion
We encourage you to visit the Windows Server Blog right now and read all the details. The post is at
http://blogs.technet.com/b/windowsserver/archive/2012/03/15/windows-server-8-taking-server-application-storage-to-windows-file-shares.aspx
Analyzing Storage Performance using the Windows Performance Analysis ToolKit (WPT)
Robert Smith (Platforms Field Engineer) has recently published a new blog post that covers one important area of Windows performance that few people understand well. In his post, he talks about how to use the Windows Performance Toolkit (WPT) in general and the XPerf tool in particular. Xperf.exe is the command-line tool used to start, stop, and manage traces. He then goes on to outline how to use the toolkit to look into storage, and where to start looking when you hit a performance issue.
Here's the outline of his post:
- Introduction
- Obtaining the WPT Tools
- More About the WPT
- Getting Started: Capturing Storage Performance Data
- Scenarios
- Considerations for Starting a Trace
- Stopping a Trace
- Trace Analysis
- How to perform Trace Analysis
- What to Look For
- What are We Doing Here?
- High Disk Service Times
- Storport Tracing (For Storport storage devices)
- High IO Times
- Conclusion
If you're interested in performance (and especially storage performance) and you have never used the WPT or XPerf.exe before, this is a must-read.
Configuring Primary Computers for Folder Redirection and Roaming Profiles in Windows Server “8” Beta
1 Introduction
1.1 Overview of Primary Computer feature
In Windows Server “8” Beta, administrators can designate a set of computers, known as primary computers, for each domain user; this designation controls which computers use Folder Redirection, Roaming User Profiles, or both. Designating primary computers is a simple and powerful method to associate user data and settings with particular computers or devices, simplify administrator oversight, improve data security, and help protect user profiles from corruption.
There are four major benefits to designating primary computers for users:
- The administrator can specify which computers users can use to access their redirected data and settings. For example, the administrator can choose to roam user data and settings between a user’s desktop and laptop, and to not roam the information when that user logs on to any other computer, such as a conference room computer.
- Designating primary computers reduces the security and privacy risk of leaving residual personal or corporate data on computers where the user has logged on. For example, a general manager who logs on to an employee’s computer for temporary access does not leave behind any personal or corporate data.
- Primary computers enable the administrator to mitigate the risk of an improperly configured or otherwise corrupt profile, which could result from roaming between differently configured systems, such as between x86-based and x64-based computers.
- A user’s first sign-in on a non-primary computer is faster because the user’s roaming user profile and/or redirected folders are not downloaded. Sign-out times for roaming user profile users on non-primary computers are also reduced, because changes to the user profile do not need to be uploaded to the file share.
1.2 Overview of this document
This post describes the steps I took to set up a user with Folder Redirection and assign primary computers, so that you can experiment with this new technology yourself. The post does not include details on how to set up a domain controller or a domain. The audience of this document is expected to have an existing file server, domain controller, and clients setup or be able to set these up independently.
2 Installation Steps
2.1 Prerequisites
You need only a single computer (the specs are provided below) and the ISO files for the Windows Server “8” Beta and Windows 8 Consumer Preview, both of which are available as free downloads.
- Windows Server “8” Beta ISO file
Download from http://technet.microsoft.com/en-us/evalcenter/hh670538.aspx
- Windows 8 Consumer Preview ISO file
Download from http://windows.microsoft.com/en-US/windows-8/iso
You will need a computer that meets the following requirements:
- Meets the minimum system requirements for Windows Server “8” Beta and Hyper-V
- Has at least 4 GB of RAM
In my case, I am using a Lenovo W520 Laptop with 8GB of RAM and an Intel Core i7.
You need to provision virtual machines for:
- Domain Controller (Windows Server “8” Beta)
- File Server (Windows Server “8” Beta)
- Primary Client (Windows 8 Consumer Preview)
- Other (non-primary) client (Windows 8 Consumer Preview)
In my demo setup, I provisioned three virtual machines:
- One domain controller that also functions as a file server. I named this server PMDemo and named the domain dPMDemo.
- Two clients, which I named PMClient1 and PMClient2. Both clients are joined to the dPMDemo domain. PMClient1 will be designated as the demo user’s primary computer.
- I assigned 1.5GB of RAM to each of the VMs. If you have less memory on your host computer, I would recommend enabling Dynamic Memory with a Startup RAM value of at least 1GB for the domain controller / file server and 1GB for each of the two clients. (A scripted alternative for provisioning the VMs is sketched after this list.)
- All VMs are connected to the ‘External network’ virtual network switch that is connected to the physical network interface card (NIC) of the computer.
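If you would rather script the VM provisioning than click through Hyper-V Manager, here is a minimal sketch using the Hyper-V PowerShell module (the physical NIC name and VHD paths are assumptions; memory sizing follows the notes above):

# Create the external virtual switch bound to the physical NIC
New-VMSwitch -Name "External network" -NetAdapterName "Ethernet"

# Create the three demo VMs, each with 1.5GB of RAM and a new 60GB VHDX
foreach ($name in "PMDemo", "PMClient1", "PMClient2") {
    New-VM -Name $name -MemoryStartupBytes 1536MB -SwitchName "External network" -NewVHDPath "C:\VMs\$name.vhdx" -NewVHDSizeBytes 60GB
}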
2.2 Setting up Folder Redirection
2.2.1 Create a file share for user data
To create a file share for user data, use the following procedure on the domain controller/file server.
- Create a folder named C:\Share.
- Right-click the folder you created, point to Share with and then click Specific people.
- Type Everyone, click Add, and then click Share.
Alternatively, you can add Authenticated Users or any security group with all users to which the Folder Redirection policy will apply as long as the users have Read/Write access to the file share.
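If you prefer PowerShell over the Explorer UI for this step, the equivalent is roughly the following (a minimal sketch; granting Everyone full access is for this demo only):

# Create the folder and share it with full access for everyone
New-Item -Path C:\Share -ItemType Directory
New-SmbShare -Name Share -Path C:\Share -FullAccess Everyone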
2.2.2 Create a new user
To create a new user, use the following procedure on the domain controller.
- Open the Active Directory Users and Computers MMC snap-in.
- In the console tree, right-click Users, point to New and then click User.
- In the New Object – User dialog box, create a new user named Bob Smith.
- Assign a password, clear the User must change password at next logon check box, and then select the Password never expires check box.
2.2.3 Create a new group policy object
To create a new GPO for Folder Redirection and primary computer support, use the following procedure on the domain controller.
- Open the Group Policy Management MMC snap-in.
- In the console tree, right-click Group Policy Objects. Click New to create a new group policy object.
- In the Name box, type Folder Redirection and Primary Computer and click OK.
- In the Security Filtering section, remove Authenticated Users and target the GPO to user Bob Smith.
2.2.4 Configure Folder Redirection
To set up Folder Redirection for Bob Smith, use the following procedure.
- Right-click the Folder Redirection and Primary Computer GPO and then click Edit.
The Group Policy Management Editor opens.
- In the console tree, expand User Configuration, then Policies, Windows Settings, and then Folder Redirection.
- Right-click Documents, and then click Properties.
- Choose Basic – Redirect everyone’s folder to the same location from the Setting list.
- In the Root Path box, specify the path to the file share created in step 2.2.1, and then click OK. In my demo, the share is \\PMDemo\Share.
2.2.5 Link the GPO to your domain
To link the GPO to your domain, use the following procedure on your domain controller.
- In the Group Policy Management console, right-click the domain created for this demo (in my case, dPMDemo), and then click Link an Existing GPO.
- Click Folder Redirection and Primary Computer and then click OK.
2.2.6 Test the Folder Redirection setup
At this point, the Folder Redirection setup is complete. If you’d like to test it out, sign in as Bob Smith on PMClient1. Ensure that Folder Redirection successfully applies for Bob Smith, as shown in step 2.4.1 below.
It is possible that you may have to reapply group policy on the client computer in order for Folder Redirection to apply. To do so, sign in as Bob Smith, open a command prompt window and then type Gpupdate /force. After signing out and then signing back in, the Folder Redirection policy should apply.
2.3 Setting up primary computers
2.3.1 Designate a Primary Computer in Active Directory
2.3.1.1 Designate a primary computer by using Active Directory Administrative Center
To designate a primary computer in Active Directory Domain Services (AD DS), use the following procedure.
- Open Active Directory Administrative Center.
- In the console tree, under the domain name node (dPMDemo in my case), click Computers.
- To designate PMClient1 as Bob Smith’s primary computer, double click PMClient1, and then in the Extensions section, click the Attribute Editor tab.
- Double-click the distinguishedName attribute, right-click the value and then click Copy.
- In Active Directory Administrative Center, click Users, and then double-click Bob Smith. In the Extensions section, click the Attribute Editor tab.
- Double-click the msDS-Primary Computer attribute, paste the distinguished name of PMClient1 into the Value to Add box, and then click Add.
You can specify a list of computer names in the Value to Add box; each listed computer will be designated as a primary computer for the user.
- Click OK in the Multi-valued String Editor dialog and again in the Bob Smith window.
PMClient1 is now configured in AD DS as a primary computer for Bob Smith.
2.3.1.2 Designate a primary computer by using Windows PowerShell
To use Windows PowerShell to designate a primary computer in AD DS, use the following procedure.
- Open a Windows PowerShell window on the domain controller.
- To retrieve the computer properties, including the distinguished name, of the primary computer, type the following command:
PS C:\Users\Administrator> $computer=Get-ADComputer PMClient1
- To set up the user-primary computer partnership for user Bob Smith, type the following command:
PS C:\Users\Administrator> Set-ADUser bobsmith -Add @{'msDS-PrimaryComputer'="$computer"}
- To check if the partnership was correctly set up, type the following command:
PS C:\Users\Administrator> Get-ADUser bobsmith -Properties msDS-PrimaryComputer
During the setup, if you’d like to remove the user-primary computer partnership for user Bob Smith, type the following command:
PS C:\Users\Administrator> Set-ADUser bobsmith -Remove @{'msDS-PrimaryComputer'="$computer"}
2.3.2 Configure Folder Redirection policy to apply to primary computers
To enable primary computer support for Folder redirection, use the following procedure on the domain controller.
- In the Group Policy Management console, right-click Folder Redirection and Primary Computer and then click Edit.
Group Policy Management Editor appears.
- In the console tree, expand User Configuration, then Policies, Administrative Templates, System, and then Folder Redirection.
- Double-click Redirect folders on primary computers only, click Enabled, and then click OK.
At this point, all steps to configure primary computers for the user are complete.
2.4 Testing primary computers
2.4.1 Sign on to a primary computer using the Bob Smith account
To test the experience of using a primary computer, use the following procedure on the PMClient1 computer.
- Use the Bob Smith account to sign on to PMClient1, which has been designated as Bob Smith’s primary computer.
- Open Windows Explorer, and under Libraries, expand Documents to show both My Documents and Public Documents.
- Click My Documents, and then click the Address Bar to show the path to the redirected folder. Also notice the State field in the Status bar, which indicates that the folder is enabled for Offline Files and that Bob Smith successfully got his Documents folder redirected and subsequently cached on his primary computer.
2.4.2 Sign on to a non-primary computer using the Bob Smith account
To test the experience of using a non-primary computer, use the following procedure on the PMClient2 computer.
- Use the Bob Smith account to sign on to PMClient2, which has not been designated as Bob Smith’s primary computer.
- Open Windows Explorer, and under Libraries, expand Documents to show both My Documents and Public Documents.
- Click My Documents, and then click the Address Bar to show the local path to the Documents folder. Also notice the State field in the Status bar is not present, indicating that the folder is not enabled for Offline Files, and that Bob Smith has successfully logged on to a non-primary computer and received a local profile.
Data Classification Toolkit for Windows Server "8" Beta - Now Available (Beta)
The Solution Accelerators team is pleased to announce the Data Classification Toolkit for Windows Server "8" Beta.
This powerful toolkit will help you address compliance concerns with new features in Windows Server "8" Beta.
File Server team sessions at TechEd 2012 North America
If you're planning to attend the TechEd 2012 North America event in June, here are the scheduled sessions with speakers from the File Server team:
| Code | Title | Speaker(s) | Day | Time | Track | Level |
| --- | --- | --- | --- | --- | --- | --- |
| WSV328 | The Path to Continuous Availability with Windows Server 2012 | Gene Chellis, Jim Pinkerton | Mon | 4:45 PM | Windows Server | 300 |
| SIA207 | Windows Server 2012 Dynamic Access Control Overview | Gunjan Jain, Nir Ben Zvi | Mon | 4:45 PM | Security & Identity | 200 |
| WSV308 | Standards Support and Interoperability in Windows Server 2012: Storage, Networking, and Management | Gene Chellis, Jeffrey Snover, See-Mong Tan, Wojtek Kozaczynski | Tue | 3:15 PM | Windows Server | 300 |
| VIR306 | Hyper-V over SMB: Remote File Storage Support in Windows Server 2012 Hyper-V | Jose Barreto | Tue | 3:15 PM | Virtualization | 300 |
| WSV314 | Windows Server 2012 NIC Teaming and SMB Multichannel Solutions | Don Stanwyck, Jose Barreto | Tue | 5:00 PM | Windows Server | 300 |
| WSV334 | Windows Server 2012 File and Storage Services Management | Fabian Uhse, Mathew Dickson | Wed | 10:15 AM | Windows Server | 300 |
| SIA316 | Windows Server 2012 Dynamic Access Control Best Practices and Case Study Deployments in Microsoft IT | Brian Puhl, Matthias Wollnik | Wed | 10:15 AM | Security & Identity | 300 |
| WSV322 | Update Management in Windows Server 2012: Revealing Cluster-Aware Updating and the New Generation of WSUS | Erin Chapple, Mallikarjun Chadalapaka | Wed | 1:30 PM | Windows Server | 300 |
| WSV303 | Windows Server 2012 High-Performance, Highly-Available Storage Using SMB | Claus Joergensen, Gene Chellis | Wed | 3:15 PM | Windows Server | 300 |
| WSV330 | How to Increase SQL Availability and Performance Using Windows Server 2012 SMB 3.0 Solutions | Claus Joergensen, Gunter Zink | Thu | 8:30 AM | Windows Server | 300 |
| WSV410 | Continuously Available File Server: Under the Hood | Claus Joergensen | Thu | 10:15 AM | Windows Server | 400 |
| WSV310 | Windows Server 2012: Cluster-in-a-Box, RDMA, and More | John Loveall, Spencer Shepler | Thu | 1:00 PM | Windows Server | 300 |
If you’re not registered, there’s still time. Visit the registration page for details.
If you’re already registered, get this list of sessions on the North America TechEd 2012 page, which includes the ability to add the sessions to your schedule with one click.
SMB 3 Security Enhancements in Windows Server 2012
Everything here also applies to Windows 8. These features were first available in the Windows Server “8” Beta and Windows 8 Consumer Preview releases. See this document for protocol-level details of these features.
1 Encryption
SMB 3 in Windows Server 2012 adds the capability to make data transfers secure by encrypting data in-flight, to protect against tampering and eavesdropping attacks. The biggest benefit of using SMB Encryption over more general solutions (such as IPSec) is that there are no deployment requirements or costs beyond changing the SMB Server settings.
The encryption algorithm used is AES-CCM, which also provides data integrity validation (signing). See section 5 below for full details about SMB Encryption.
2 Signing
SMB 3 uses a newer algorithm for signing – AES-CMAC instead of the HMAC-SHA256 used by SMB 2.
Both AES-CCM and AES-CMAC can be dramatically accelerated on most modern CPUs with AES instruction support (see this link for general non-Microsoft information on CPU AES acceleration).
3 Secure Dialect Negotiation
SMB 3 includes a new capability to detect “man in the middle” attempts to downgrade the SMB 2/3 protocol “dialect” or capabilities that the client and server negotiate. When such manipulation is detected by either client or server, the connection will be disconnected and Event ID 1005 will be logged in the Microsoft-Windows-SmbServer/Operational event log. Secure Negotiate cannot detect/prevent downgrades from SMB 2 / 3 to SMB 1, and for this reason we strongly encourage users to disable the SMB 1 server if possible (see section 4 below). This is especially important for certain SMB Encryption deployment scenarios, as discussed in section 5 below.
4 If you don’t need SMB 1, turn it off!
Although the Windows CIFS/SMB 1 Server is a very mature codebase, we would still encourage users who have no need for it to turn it off on their servers, thereby reducing attack surface (SMB 1 supports over 100 protocol commands). The newer and separate SMB 2 server component supports SMB protocol versions 2 and higher, including SMB 3.
If all of your clients are Windows Vista and later, you probably don’t need SMB 1 enabled on your servers.
SMB 2 was first supported in Windows Vista & Windows Server 2008. Older clients such as Windows 98/ME, Windows 2000, Windows XP and Windows 2003 do not support SMB 2, and will not be able to access file or print shares if the SMB 1 server is disabled. Some non-Microsoft SMB clients may also be incapable of speaking SMB 2 (e.g. printers with “scan to share” functionality). Computer Browser functionality also requires SMB 1, but the Browser service is disabled by default on Windows Server 2008 R2 and later anyway.
You can discover whether any SMB clients are currently connected to your server using SMB 1 with the following PowerShell command:
Get-SmbSession | Select Dialect,ClientComputerName,ClientUserName | Where-Object {$_.Dialect -lt 2.00}
The SMB 1 server can be disabled with this command:
Set-SmbServerConfiguration -EnableSMB1Protocol $false
If a client connection is rejected because the SMB 1 server is disabled, Event ID 1001 will be logged in the Microsoft-Windows-SmbServer/Operational event log. The rejected client name/IP can be found in the event details.
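To review those rejections later, you can query that event log from PowerShell; a minimal sketch:

# List SMB 1 connection rejections (Event ID 1001) with time and details
Get-WinEvent -LogName Microsoft-Windows-SmbServer/Operational | Where-Object { $_.Id -eq 1001 } | Format-List TimeCreated, Message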
5 Encryption Details
5.1 Configuration
Using Windows Server 2012, an administrator can enable SMB Encryption for the entire server or just for specific shares. Since there are no other deployment requirements for SMB Encryption, it is an extremely cost-effective way to protect data from snooping and tampering attacks. Administrators can very simply turn it on using either File Server Manager or PowerShell, as shown below.
Using PowerShell
To enable encryption for an individual share, run the following on the server:
Set-SmbShare -Name <sharename> -EncryptData $true
To create a new share with Encryption turned on:
New-SmbShare -Name <sharename> -Path <pathname> -EncryptData $true
To enable encryption for the entire server, run the following on the server:
Set-SmbServerConfiguration -EncryptData $true
Using File Server Manager
1. Open Server Manager and navigate to File and Storage Services
2. Click on Shares
3. Select the share you want to turn encryption on for, and right-click
4. Select “Properties” from the context menu.
5. In the Share Properties window, select “Settings”, and enable “Encrypt data access” (see graphic below)
6. Click OK to close
5.2 Deployment Considerations
By default, once SMB Encryption is turned on for a share or server, only SMB 3 clients will be allowed to access the affected shares. The reason for this restriction is to ensure that the administrator’s intent of safeguarding the data is maintained for all accesses. However, there might be situations (for example, a transition period where mixed client OS versions will be in use) where an admin may want to allow unencrypted access for clients that do not support SMB 3. To enable that scenario, run the following PowerShell command:
Set-SmbServerConfiguration -RejectUnencryptedAccess $false
The Secure Negotiate capability described in section 3 does prevent a “man in the middle” from downgrading a connection from SMB 3 to SMB 2 (which would use unencrypted access); however, it does not prevent downgrades to SMB 1, which would also result in unencrypted access.
For this reason, in order to guarantee that SMB 3 capable clients will always use encryption to access encrypted shares, the SMB 1 server must be disabled.
If the -RejectUnencryptedAccess setting is left at its default value of $true, there is no concern, because only encryption-capable SMB 3 clients will be allowed to access the encrypted shares (SMB 1 clients will also be rejected).
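To double-check where a server stands on these settings, a minimal sketch:

# Review the server-wide encryption, unencrypted-access and SMB 1 settings
Get-SmbServerConfiguration | Select-Object EncryptData, RejectUnencryptedAccess, EnableSMB1Protocol

# Review which shares have encryption turned on
Get-SmbShare | Select-Object Name, EncryptData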
5.3 Implementation Details
A few important details to note from an implementation perspective:
1. SMB Encryption uses the AES-CCM algorithm to encrypt/decrypt the data. Details about key generation can be found in sections 3.1.4.2 and 3.2.5.3 of the protocol document linked at the top of this post. AES-CCM also provides data integrity validation (signing) for encrypted shares regardless of the SMB Signing settings. Of course, if you want to enable just signing without encryption, you can continue doing so as before. See a great blog about SMB Signing here for more details.
2. In a default configuration (no unencrypted access allowed to encrypted shares), if clients not supporting SMB 3 attempt to access an encrypted share, Event ID 1003 will be logged to the Microsoft-Windows-SmbServer/Operational event log and the client will receive an “access denied” error.
3. SMB Encryption and NTFS EFS are completely unrelated, and SMB Encryption does not require or depend on using EFS.
5.4 Deployment Scenarios
SMB Encryption should be considered for any scenario in which sensitive data needs to be protected from man-in-the-middle attacks. Here are a couple of relevant examples:
1. In a traditional information worker scenario, a lot of sensitive data is moved over the SMB protocol. SMB Encryption offers an end-to-end privacy and integrity guarantee between the file server and the client, irrespective of the networks traversed (e.g. WAN links maintained by third-party providers).
2. The new capabilities offered by SMB 3 in Windows Server 2012 enable file servers to provide continuously available storage for server applications (such as SQL or Hyper-V). This also offers an opportunity to protect that information from snooping attacks by simply enabling SMB Encryption. Compare this with the dedicated hardware solutions required for most SAN networks.
Windows Server 2012 Beta with SMB 3.0 – Demo at Interop shows SMB Direct at 5.8 Gbytes/sec over Mellanox ConnectX-3 network adapters
The Interop conference is happening this week in Las Vegas (see http://www.interop.com/lasvegas) and Mellanox is showcasing their high-speed ConnectX-3 network adapters during the event. They are demonstrating an interesting setup with Windows Server 2012 Beta and SMB 3.0 that achieves amazing remote file performance using SMB Direct (SMB over RDMA). The short story? 5.8 gigabytes per second from a single network port. Yes, that’s gigabytes, not gigabits. Roughly one DVD per second. Crazy, right?
Get all the details in the link below, including a comparison between Windows Server 2012 Beta with SMB 3.0 running over traditional non-RDMA Ethernet at 10Gbps, InfiniBand QDR at 32Gbps and InfiniBand FDR at 54Gbps:
http://blogs.technet.com/b/josebda/archive/2012/05/06/windows-server-2012-beta-with-smb-3-0-demo-at-interop-shows-smb-direct-at-5-8-gbytes-sec-over-mellanox-connectx-3-network-adapters.aspx
Starting with Cluster-Aware Updating: Self-Updating
In previous releases of Windows, the server updating tools (e.g. WSUS) did not account for the fact that a group of servers could be members of a highly-available cluster. As failover clusters are all about high availability of services hosted on the cluster, one would almost never patch all cluster nodes at the same time. So patching a failover cluster usually meant a fair number of manual steps, scripting/tools, and juggling administrator attention across a number of clusters to successfully update them all during a short monthly maintenance window. Addressing this was always the #1 ask for Windows Server 2012 during all the discussions we had with customers in the early days of release planning. Cluster-Aware Updating (CAU) is an exciting new feature in Windows Server 2012 that addresses precisely this gap. Once you have decided to try the CAU feature in Windows Server 2012 to update your failover cluster, you will very quickly appreciate its simplicity and power.
CAU allows you to update clustered servers with little or no loss in availability during the update process. During an Updating Run, CAU transparently puts each node of the cluster into node maintenance mode, temporarily fails over the “clustered roles” it hosts to other nodes, installs the updates and any dependent updates on that node, performs a restart if necessary, brings the node back out of maintenance mode, fails the original clustered roles back onto the node, and then proceeds to update the next node. CAU is cluster workload-agnostic, and it works great with Hyper-V and a number of File Server workloads.
CAU can work in one of two modes:
1. Self-Updating: Once configured by you, CAU can run on a cluster node that it is meant to update. You simply configure the updating schedule and let CAU update the cluster.
2. Remote-Updating: CAU can run on a standalone Windows 8/Windows Server 2012 computer that is not a cluster node. You can then have CAU “connect” to any failover cluster using appropriate administration credentials, and update the cluster on demand.
For an overview of the scenario, check out the CAU Scenario Overview.
For this blog post, I will focus on the first mode above: Self-Updating. The beauty of self-updating is that it lets you put your failover cluster on “autopilot” in terms of patching: once set up, the cluster updates itself on the schedule you have defined, with either no impact to service availability or the least impact possible, depending on the types of workloads (e.g. updating a Hyper-V cluster is truly continuously available thanks to Live Migration, with zero downtime for Hyper-V VM users).
At a high level, there are three things you need to do to get the end-to-end scenario working and operating seamlessly with your existing patching infrastructure such as Windows Server Update Services (WSUS), on a Windows Server 2012 failover cluster:
1. Install CAU tools on the Windows Server 2012 (or Windows 8 Client) computer that you want to run it from.
2. Configure self-updating on the desired failover cluster.
3. Confirm that the scenario works by previewing the updates and kicking off a Self-Updating Run.
Then you simply sit back and enjoy watching cluster self-updating in action!
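As a quick peek ahead at that third step, the preview and the on-demand run are driven by two CAU cmdlets. A minimal sketch, reusing the cluster name from the example later in this post:

# Preview the updates that would be applied to each node
Invoke-CauScan -ClusterName CAUClu8330-29 | Format-List

# Kick off an Updating Run right away instead of waiting for the schedule
Invoke-CauRun -ClusterName CAUClu8330-29 -CauPluginName Microsoft.WindowsUpdatePlugin -Force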
Let’s start with the first step.
1. Install CAU tools
Installation of CAU is very easy: the CAU tools are a part of Failover Clustering Tools. As Failover Clustering Tools are auto-selected for installation by default when the Failover Clustering feature is installed on a cluster node, your cluster nodes most likely already have the CAU tools pre-installed. The net is that you can safely skip this step if you are running CAU from one of the cluster nodes. Of course, if you explicitly chose not to install Failover Clustering Tools on the cluster nodes, you will need to install them now with this step.
If you plan to run CAU from a computer different from the cluster nodes, that is still easy! Look at the following screen shot for how to install it from Server Manager on that computer:
Alternatively, you can also install the CAU tools via a PS cmdlet. See the screen shot below:
As you can see, I am installing the “RSAT-Clustering” Windows feature, which installs the “ClusterAwareUpdating” PS module. I then sanity-check that the module was properly installed by listing all the CAU cmdlets.
The same cmdlets are included below in text for convenience:
Add-WindowsFeature RSAT-Clustering -Verbose
Get-Command -Module ClusterAwareUpdating
To make sure that the CAU GUI application was properly installed along with the CAU PS cmdlets, you can sanity-check that Server Manager is ready to launch CAU, as in the next screen shot. See the first item under the “Tools” menu in the screen shot.
2. Configure self-updating on the desired failover cluster
You can configure self-updating for a cluster either via the CAU GUI application or through the CAU PS cmdlets. Let us look at the GUI approach first; the cmdlet equivalent appears later in the blog post.
You always start off by connecting to a cluster as in the following screen shot:
Look at the two areas highlighted in red in the above screen shot: you can type in the desired cluster name and hit the “Connect” button right next to it. You can of course also click the down-arrow right next to the cluster name text box, which will show all the available cluster names known to the local AD. This list is auto-populated for you every time the GUI starts up.
Then look at the Cluster Action circled in green in the previous screen shot – “Configure cluster self-updating options”. This is the action you choose to start the self-updating configuration process for your failover cluster. Once you click on this action, it pops up the “Getting Started” wizard page as shown in the next screen shot:
As the Getting Started text says, this wizard is smart enough to figure out that you do not have self-updating currently configured on the connected cluster, and it automatically offers you the option to add it. If you had self-updating already configured for your cluster, the wizard would instead give you three different options: to disable, re-enable, or remove the self-updating functionality. So in effect, this wizard is your “one-stop shop” for all things related to self-updating configuration.
When you click the “Next” button, you see the screen as in the following screen shot:
Note that you must choose the red-circled “Add the CAU clustered role...” action in order to get to the next wizard screen. The CAU clustered role is the Self-Updating functionality for the cluster. Just in case you are wondering, the reason we call it the “CAU clustered role” on this screen is to make sure that you notice that it is a clustered role (aka “resource group”) that can fail over and fail back to other cluster nodes. The nice aspect of this design is that CAU self-updating is a clustered service in its own right, and one that is highly available. It means that if the node acting as the Update Coordinator for a Self-Updating Run fails in the middle of the Run, the CAU Update Coordinator simply fails over to a new cluster node and continues from where it left off.
Once you hit “Next” on this screen, you will see the next wizard screen as in the screen shot below:
Most enterprises tend to have a monthly updating schedule that generally tracks Microsoft’s monthly “Patch Tuesday” schedule. But the wizard also allows you to define either a daily or a weekly updating schedule if you prefer. In the screen shot shown, I am selecting 3 AM on the third Tuesday of every month as my updating schedule, which is roughly a week after the monthly security bulletin notification from Microsoft. The next “Advanced Options” wizard screen shot is shown below:
You most likely do not have to bother changing the options in the screen shot, as the defaults are fairly conservative and benign. However, you may change them if your data center processes need it. Don’t forget to read what the options mean first, though! There’s a static help file hyperlinked from this page, or you can check out the Add-CauClusterRole cmdlet parameters for more current guidance; their names are intentionally identical to the advanced option names on this screen.
If you had selected the Microsoft.WindowsUpdatePlugin as the operational plug-in for the Self-Updating Run (I plan to come back to plug-ins in a future blog post, but if you are really curious, you can check out the public CAU plug-in API specification and the CAU plug-in sample code on MSDN), you will see a wizard screen specific to Windows Update, as in the screen shot below:
In this additional update options screen, I am choosing to have “Recommended” updates automatically installed on all the cluster nodes just like the “Important” updates (BTW, this is the same terminology that Windows Update uses). But you can choose to have CAU install Important updates alone.
That’s it! When you hit the “Next” button on the screen above, you will arrive at the final Confirmation page of the wizard, as in the next screen shot. You simply confirm all your previous selections, and click on the “Apply” button to complete the self-updating configuration process.
Since we have already talked about a number of foundational concepts (e.g. schedules, prestaging, advanced options, update types) in the preceding text, we can jump directly to the Windows PowerShell version of the same. Look at the screen shot below: the Add-CauClusterRole cmdlet does what we need, and the parameter and argument names should all be self-explanatory.
The same cmdlet here again in text:
Add-CauClusterRole -ClusterName CAUClu8330-29 -Force -CauPluginName Microsoft.WindowsUpdatePlugin -MaxRetriesPerNode 3 -CauPluginArguments @{ 'IncludeRecommendedUpdates' = 'True' } -StartDate "5/7/2012 3:00:00 AM" -DaysOfWeek 4 -WeeksOfMonth @(3) -Verbose
In fact, I admit I cheated a bit here: I simply copied and pasted the PS cmdlet string shown in the wizard confirmation screen and added the -Verbose parameter to it. Hopefully that also illustrates how well the CAU GUI and the PS cmdlets are integrated; transitioning from one to the other is super easy.
In the next blog post, I will talk about the third and final step of confirming that the end-to-end scenario works, and the things to watch out for. Stay tuned!
Introduction to Data Deduplication in Windows Server 2012
Hi, this is Scott Johnson and I’m a Program Manager on the Windows File Server team. I’ve been at Microsoft for 17 years and I’ve seen a lot of cool technology in that time. In Windows Server 2012 we have included a pretty cool new feature called Data Deduplication that enables you to efficiently store, transfer and back up less data.
This feature is the result of an extensive collaboration with Microsoft Research; after two years of development and testing, we now have state-of-the-art deduplication that uses variable chunking and compression and can be applied to your primary data. The feature is designed for industry-standard hardware and can run on a very small server with as little as a single CPU, one SATA drive and 4GB of memory. Data Deduplication will scale nicely as you add multiple cores and additional memory. This team has some of the smartest people I have worked with at Microsoft and we are all very excited about this release.
Does Deduplication Matter?
Hard disk drives are getting bigger and cheaper every year, so why would you need deduplication? Well, the problem is growth. Growth in data is exploding so much that IT departments everywhere will have some serious challenges meeting the demand. Check out the chart below, where IDC forecasts that we are beginning to experience massive storage growth. Can you imagine a world that consumes 90 million terabytes in one year? We are about 18 months away!
Source: IDC Worldwide File-Based Storage 2011-2015 Forecast:
Foundation Solutions for Content Delivery, Archiving and Big Data, doc #231910, December 2011
Welcome to Windows Server 2012!
This new Data Deduplication feature is a fresh approach. We just submitted a Large Scale Study and System Design paper on Primary Data Deduplication to USENIX to be discussed at the upcoming Annual Technical Conference in June.
Typical Savings:
We analyzed many terabytes of real data inside Microsoft to get estimates of the savings you should expect if you turned on deduplication for different types of data. We focused on the core deployment scenarios that we support, including libraries, deployment shares, file shares and user/group shares. The Data Analysis table below shows the typical savings we were able to get from each type:
Microsoft IT has been deploying Windows Server with deduplication for the last year and has reported some actual savings numbers. These numbers validate that our analysis of typical data is pretty accurate. The Live Deployments table below covers three very popular server workloads at Microsoft:
- A build lab server: These are servers that build a new version of Windows every day so that we can test it. The debug symbols they collect allow developers to investigate the exact line of code that corresponds to the machine code that a system is running. A lot of duplicates are created, since we only change a small amount of code on a given day; when teams release the same group of files under a new folder every day, there are a lot of similarities from day to day.
- Product release shares: There are internal servers at Microsoft that hold every product we’ve ever shipped, in every language. As you might expect, when you slice it up, 70% of the data is redundant and can be distilled down nicely.
- Group Shares: Group shares include regular file shares that a team might use for storing data and includes environments that use Folder Redirection to seamlessly redirect the path of a folder (like a Documents folder) to a central location.
Below is a screenshot from the new Server Manager ‘Volumes’ interface on one of the build lab servers; notice how much data we are saving on these 2TB volumes. The lab is saving over 6TB on each of these 2TB volumes and they’ve still got about 400GB free on each drive. These are some pretty fun numbers.
There is a clear return on investment that can be measured in dollars when using deduplication. The space savings are dramatic, and the dollars saved can be calculated pretty easily when you pay by the gigabyte. I’ve had many people say that they want Windows Server 2012 just for this feature, and that it could enable them to delay purchases of new storage arrays.
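If you would rather pull those savings numbers from PowerShell than from Server Manager, a minimal sketch (the volume letter is an example):

# Report deduplication savings for a volume
Get-DedupVolume -Volume E: | Format-List Volume, Capacity, FreeSpace, SavedSpace, SavingsRate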
Data Deduplication Characteristics:
1) Transparent and easy to use: Deduplication can be easily installed and enabled on selected data volumes in a few seconds. Applications and end users will not know that the data has been transformed on the disk and when a user requests a file, it will be transparently served up right away. The file system as a whole supports all of the NTFS semantics that you would expect. Some files are not processed by deduplication, such as files encrypted using the Encrypted File System (EFS), files that are smaller than 32KB or those that have Extended Attributes (EAs). In these cases, the interaction with the files is entirely through NTFS and the deduplication filter driver does not get involved. If a file has an alternate data stream, only the primary data stream will be deduplicated and the alternate stream will be left on the disk.
2) Designed for Primary Data: The feature can be installed on your primary data volumes without interfering with the server’s primary objective. Hot data (files that are being written to) will be passed over by deduplication until the file reaches a certain age. This way you can get optimal performance for active files and great savings on the rest of the files. Files that meet the deduplication criteria are referred to as “in-policy” files.
a. Post Processing: Deduplication is not in the write-path when new files come along. New files write directly to the NTFS volume and the files are evaluated by a file groveler on a regular schedule. The background processing mode checks for files that are eligible for deduplication every hour and you can add additional schedules if you need them.
b. File Age: Deduplication has a setting called MinimumFileAgeDays that controls how old a file should be before it is processed. The default setting is 5 days. This setting is configurable by the user and can be set to “0” to process files regardless of how old they are.
c. File Type and File Location Exclusions: You can tell the system not to process files of a specific type, like PNG files that already have great compression or compressed CAB files that may not benefit from deduplication. You can also tell the system not to process a certain folder. (A PowerShell sketch of these settings appears after this list.)
3) Portability: A volume that is under deduplication control is an atomic unit. You can back up the volume and restore it to another server. You can rip it out of one Windows Server 2012 server and move it to another. Everything that is required to access your data is located on the drive. All of the deduplication settings are maintained on the volume and will be picked up by the deduplication filter when the volume is mounted. The only things not retained on the volume are the schedule settings, which are part of the task-scheduler engine. If you move the volume to a server that is not running the Data Deduplication feature, you will only be able to access the files that have not been deduplicated.
4) Focused on using low resources: The feature was built to automatically yield system resources to the primary server’s workload and back off until resources are available again. Most people agree that their servers have a job to do and the storage is just facilitating their data requirements.
a. The chunk store’s hash index is designed to use low resources and reduce the read/write disk IOPS so that it can scale to large datasets and deliver high insert/lookup performance. The index footprint is extremely low at about 6 bytes of RAM per chunk, and it uses temporary partitioning to support very high scale.
c. Deduplication jobs will verify that there is enough memory to do the work; if not, they will stop and try again at the next scheduled interval.
d. Administrators can schedule and run any of the deduplication jobs during off-peak hours or during idle time.
5) Sub-file chunking: Deduplication segments files into variable-size chunks (32-128 KB) using a new algorithm developed in conjunction with Microsoft Research. The chunking module splits a file into a sequence of chunks in a content-dependent manner. The system uses a Rabin fingerprint-based sliding window hash on the data stream to identify chunk boundaries. The chunks have an average size of 64KB and they are compressed and placed into a chunk store located in a hidden folder at the root of the volume called the System Volume Information, or “SVI folder”. The normal file is replaced by a small reparse point, which has a pointer to a map of all the data streams and chunks required to “rehydrate” the file and serve it up when it is requested.
Imagine that you have a file that looks something like this to NTFS:
And you also have another file that has some of the same chunks:
After being processed, the files are now reparse points with metadata and links that point to where the file data is located in the chunk-store.
6) BranchCache™: Another benefit for Windows is that the sub-file chunking and indexing engine is shared with the BranchCache feature. When a Windows Server at the home office is running deduplication, the data chunks are already indexed and ready to be quickly sent over the WAN if needed. This saves a ton of WAN traffic to a branch office.
What about the data access impact?
Deduplication creates fragmentation for the files that are on your disk as chunks may end up being spread apart and this causes increases in seek time as the disk heads must move around more to gather all the required data. As each file is processed, the filter driver works to keep the sequence of unique chunks together, preserving on-disk locality, so it isn’t a completely random distribution. Deduplication also has a cache to avoid going to disk for repeat chunks. The file-system has another layer of caching that is leveraged for file access. If multiple users are accessing similar files at the same time, the access pattern will enable deduplication to speed things up for all of the users.
- There are no noticeable differences for opening an Office document. Users will never know that the underlying volume is running deduplication.
- When copying a single large file, we see end-to-end copy times that can be 1.5 times those on a non-deduplicated volume.
- When copying multiple large files at the same time, we have seen caching produce gains that reduce copy time by up to 30%.
- Under our file-server load simulator (the File Server Capacity Tool), set to simulate 5,000 users simultaneously accessing the system, we see only about a 10% reduction in the number of users that can be supported over SMB 3.0.
- Data can be optimized at 20-35 MB/sec within a single job, which comes out to about 100GB/hour for a single 2TB volume using a single CPU core and 1GB of free RAM. Multiple volumes can be processed in parallel if additional CPU, memory and disk resources are available.
Reliability and Risk Mitigations
Even with RAID and redundancy implemented in your system, data corruption risks exist due to various disk anomalies, controller errors, firmware bugs or even environmental factors like radiation or disk vibrations. Deduplication raises the impact of a single chunk corruption, since a popular chunk can be referenced by a large number of files. Imagine that a chunk referenced by 1,000 files is lost due to a sector error; you would instantly suffer a 1,000-file loss.
- Backup Support: We have support for fully-optimized backup using the in-box Windows Server Backup tool and we have several major vendors working on adding support for optimized backup and un-optimized backup. We have a selective file restore API to enable backup applications to pull files out of an optimized backup.
- Reporting and Detection: Any time the deduplication filter notices a corruption, it logs it in the event log so it can be scrubbed. Checksum validation is done on all data and metadata when it is read and written. Deduplication will recognize when data being accessed has been corrupted, reducing silent corruptions.
- Redundancy: Extra copies of critical metadata are created automatically. A very popular data chunk receives an entire duplicate copy for every 100 references. We call this area “the hotspot”, a collection of the most popular chunks.
- Repair: A weekly scrubbing job inspects the event log for logged corruptions and fixes the data chunks from alternate copies if they exist. There is also an optional deep-scrub job that walks through the entire data set, looking for corruptions and trying to fix them. When using a Storage Spaces disk pool that is mirrored, deduplication will reach over to the other side of the mirror and grab the good version. Otherwise, the data will have to be recovered from a backup. Deduplication also continually scans incoming chunks, looking for ones that can be used to fix a corruption.
It slices, it dices, and it cleans your floors!
Well, the Data Deduplication feature doesn’t do everything in this version. It is only available in certain Windows Server 2012 editions and has some limitations. Deduplication was built for NTFS data volumes; it does not support boot or system drives and cannot be used with Cluster Shared Volumes (CSV). We don’t support deduplicating live VMs or running SQL databases. See how to determine which volumes are candidates for deduplication on TechNet.
Try out the Deduplication Data Evaluation Tool
To aid in the evaluation of datasets, we created a portable evaluation tool. When the feature is installed, DDPEval.exe is installed to the \Windows\System32\ directory. This tool can be copied to and run on Windows 7 or later systems to determine the expected savings you would get if deduplication were enabled on a particular volume. DDPEval.exe supports local drives and also mapped or unmapped remote shares. You can run it against a remote share on your Windows NAS, or an EMC or NetApp NAS, and compare the savings.
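As a rough usage sketch (the paths below are hypothetical; run the tool without arguments to see its exact options):
# Estimate savings for a local folder
DDPEval.exe E:\SharedData
# Estimate savings for a remote share
DDPEval.exe \\fileserver\users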
Summary:
I think that this new deduplication feature in Windows Server 2012 will be very popular. It is the kind of technology that people need and I can’t wait to see it in production deployments. I would love to see your reports at the bottom of this blog of how much hard disk space and money you saved. Just copy the output of this PowerShell command: PS> Get-DedupVolume
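For a more compact savings readout, a hedged sketch (SavedSpace and SavingsRate are property names as surfaced by the Windows Server 2012 DedupVolume object; they may differ slightly in your build):
# Summarize deduplication savings per volume
Get-DedupVolume | Format-Table Volume, SavedSpace, SavingsRate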
- 30-90%+ savings can be achieved with deduplication on most types of data. I have a 200GB drive that I keep throwing data at and now it has 1.7TB of data on it. It is easy to forget that it is a 200GB drive.
- Deduplication is easy to install and the default settings won’t let you shoot yourself in the foot.
- Deduplication works hard to detect, report and repair disk corruptions.
- You can experience faster file download times and reduced bandwidth consumption over a WAN through integration with BranchCache.
- Try the evaluation tool to see how much space you would save if you upgrade to Windows Server 2012!
Links:
Online Help: http://technet.microsoft.com/en-us/library/hh831602.aspx
PowerShell Cmdlets: http://technet.microsoft.com/en-us/library/hh848450.aspx
Introduction of iSCSI Target in Windows Server 2012
The iSCSI Target made its debut as a free download for Windows Server 2008 R2 in April 2011; since then, it has been downloaded more than 60,000 times. That was the first step in making it available to everyone. Now in Windows Server 2012, there are no more downloads and separate installations; it comes as a built-in feature. This blog will provide step-by-step instructions to enable and configure the iSCSI Target.
If you are not familiar with the iSCSI Target, it allows your Windows Server to share block storage remotely. iSCSI leverages the Ethernet network and does not require any specialized hardware. In this release we have developed a brand-new UI integrated with Server Manager, along with 20+ cmdlets for easy management. The following references also provide additional examples and use cases:
Six uses for the Microsoft iSCSI Software Target
Using the Microsoft iSCSI Software Target with Hyper-V
Note: the instructions in the above references are for the previous release; they are not applicable to Windows Server 2012. Please use the instructions provided in this blog instead.
Overview
There are two features related to iSCSI Target:
· The iSCSI Target Server is the server component which provides the block storage to initiators.
· The iSCSI Target Storage Provider (VDS and VSS) includes two components:
o VDS provider
o VSS provider
The diagram below shows how they relate to each other:
The providers are for remote Target management. The VDS provider is typically installed on a storage management server and allows users to manage storage in a central location using VDS. The VSS provider is involved when an application running on the initiator takes an application-consistent snapshot. This storage provider works on Windows Server 2012; for the version support matrix, see the FAQ section.
As shown in the diagram, the iSCSI Target and the storage providers are enabled on different servers. This blog focuses on the iSCSI Target Server and will provide instructions for enabling it. The UI flow for enabling the storage providers is similar; just be sure to enable them on the application server.
Terminology
iSCSI: an industry-standard protocol that allows sharing block storage over Ethernet. The server that shares the storage is called the iSCSI Target. The server (machine) that consumes the storage is called the iSCSI initiator. Typically, the iSCSI initiator is an application server. For example, if the iSCSI Target provides storage to a SQL Server machine, the SQL Server machine is the iSCSI initiator in that deployment.
Target: an object which allows the iSCSI initiator to make a connection. The Target keeps track of the initiators which are allowed to connect to it, as well as the iSCSI virtual disks which are associated with it. Once the initiator establishes a connection to the Target, all the iSCSI virtual disks associated with the Target are accessible by the initiator.
iSCSI Target Server: the server that runs the iSCSI Target. It is also the iSCSI Target role name in Windows Server 2012.
iSCSI virtual disk: also referred to as an iSCSI LUN. It is the object which can be mounted by the iSCSI initiator. The iSCSI virtual disk is backed by a VHD file. For VHD compatibility, refer to the FAQ section below.
iSCSI connection: the iSCSI initiator makes a connection to the iSCSI Target by logging on to a Target. There can be multiple Targets on the iSCSI Target Server, and each Target can be accessed by a defined list of initiators. Multiple initiators can make connections to the same Target; however, this type of configuration is only supported with clustering. When multiple initiators connect to the same Target, all of them can read and write the same set of iSCSI virtual disks, and if there is no clustering (or equivalent process) to govern the disk access, corruption will occur. With clustering, only one machine is allowed to access an iSCSI virtual disk at a time.
IQN: a unique identifier of a Target or initiator. The Target IQN is shown when it is created on the server. The initiator IQN can be found by typing a simple “iscsicli” command in a command window.
Loopback: there are cases where you want to run the initiator and the Target on the same machine; this is referred to as “loopback”. In Windows Server 2012 it is a supported configuration. In a loopback configuration, you can provide the local machine name to the initiator for discovery, and it will list all the Targets which the initiator can connect to. Once connected, the iSCSI virtual disk is presented to the local machine as a newly mounted disk. There is a performance impact on I/O, since it travels through the iSCSI initiator and Target software stacks, compared to other local I/O. One use case of this configuration is to have initiators write data to the iSCSI virtual disk, then mount those disks on the Target server (using loopback) to check the data in read mode.
iSCSI Target management overview
Using Server Manager
iSCSI Target can be managed through the Server Manager UI or through cmdlets. In Server Manager, a new iSCSI page is displayed, as follows:
All iSCSI virtual disk and Target management can be done through this page.
Note: iSCSI initiator UI management is done through the initiator control panel, which can be launched from Server Manager:
Using cmdlets
Cmdlets are grouped in modules. To get all the cmdlets in a module, you can type
Get-Command -Module <modulename>
iSCSI Target cmdlets: -module iSCSITarget
iSCSI initiator cmdlets: -module iSCSI
Volume, partition, disk, Storage pool and related cmdlets: -module storage
To use iSCSI Target end to end, cmdlets from all three modules will be used as illustrated in the examples below.
Enable iSCSI Target
Using Server Manager (UI)
iSCSI Target can be enabled using Add roles and features in the Server Manager:
1. Choose the Role-based or feature-based installation option
2. Select the server on which you want to enable the iSCSI Target
3. Select the iSCSI Target Role:
To enable the iSCSI Target feature, select “iSCSI Target Server”.
4. Confirm the installation
Using cmdlets
Open a PowerShell window and run the following cmdlet:
Add-WindowsFeature FS-iSCSITarget-Server
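If you want to confirm that the role service installed, a quick check (a minimal sketch using the ServerManager module):
# Verify the install state of the iSCSI Target Server role service
Get-WindowsFeature FS-iSCSITarget-Server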
Configuration
Create iSCSI LUN
To share storage, the first step is to create an iSCSI LUN (also known as an iSCSI virtual disk). The iSCSI virtual disk is backed by a VHD file.
Using Server Manager
Once the iSCSI Target role is enabled, Server Manager will have an iSCSI page:
The first wizard link is to create iSCSI Virtual Disk.
Since Server Manager allows for multi-machine management, the UI is built to support that. If you have multiple servers in the management pool, you can create an iSCSI virtual disk on any server with iSCSI Target enabled, all from one management UI.
The UI also pre-populates the path to “iSCSIVirtualDisks” by default. If you want to use a different one, go to the previous page and select “Type a custom path”. If the path doesn’t exist, it will be created.
Specify the iSCSI virtual disk size.
Now the wizard will guide you through assigning the virtual disk to an iSCSI Target.
Give the Target a name. This name will be discovered by the iSCSI initiator and used for the connection.
This page allows you to specify the initiators which can access the virtual disk, by allowing the Target to be discovered only by a defined list of initiators. Clustering: you can configure multiple initiators to access the same virtual disk by adding more initiators to the list. To add initiators, click the Add button.
The wizard is designed to simplify the assignment by using the server name. By default, it is recommended to use the IQN. The IQN is typically long, so the wizard can resolve a computer name to its IQN if the computer is running Windows Server 2012. If the initiator is running a previous Windows OS, you can find the IQN as described in the Terminology section.
CHAP is an authentication mechanism defined by the iSCSI standard to secure access to the Target. It allows the initiator to authenticate to the Target and, in reverse, allows the Target to authenticate to the initiator. Note: you cannot retrieve the CHAP information once it is set. If you lose the CHAP information, it will need to be set again.
Last, the confirmation page.
Once the wizard is completed, the iSCSI Virtual Disk will be shown on the iSCSI Page.
If you want to find all the iSCSI virtual disks hosted on a volume, one simple way is to go to the Volumes page and select the volume. All the iSCSI virtual disks on that volume will be shown on the page:
Using cmdlets
The same configuration can also be automated using cmdlets.
1. LUN creation: New-IscsiVirtualDisk c:\test\1.vhd -Size 1GB
The first parameter is the VHD file path. The file must not already exist. If you want to load an existing VHD file, use the Import-IscsiVirtualDisk cmdlet. The -Size parameter specifies the size of the VHD file.
2. Target creation: New-IscsiServerTarget TestTarget2 -InitiatorIds “IQN:iqn.1991-05.com.Microsoft:VM1.contoso.com”
The first parameter is the Target name, and -InitiatorIds stores the initiators which are allowed to connect to the Target.
3. Assign VHD to Target: Add-IscsiVirtualDiskTargetMapping TestTarget2 c:\test\1.vhd
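Putting the three steps together, a minimal sketch that reuses the illustrative path, Target name and initiator IQN from above:
# Create the backing VHD, the Target, and the mapping in one pass
$vhdPath = "c:\test\1.vhd"
New-IscsiVirtualDisk $vhdPath -Size 1GB
New-IscsiServerTarget TestTarget2 -InitiatorIds "IQN:iqn.1991-05.com.microsoft:vm1.contoso.com"
Add-IscsiVirtualDiskTargetMapping TestTarget2 $vhdPath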
Configure the iSCSI initiator to log on to the Target
Once the iSCSI virtual disk is created and assigned, it is ready for the initiator to log on.
Typically, the iSCSI initiator and iSCSI Target are on different machines (physical or virtual). You will need to provide the iSCSI Target Server IP address or host name to the initiator, and the initiator will then be able to discover the iSCSI Target. All the Targets which can be accessed will be presented to the initiator. If you cannot find the Target name, check:
1. The Target Server IP address or host name given to the initiator
2. The initiator IQN assigned to the Target object. It is very common to have a typo in this field. One trick to verify this is to assign the Target “IQN:*”, which means any initiator can access the Target. This is not a recommended practice, but it is a good troubleshooting technique (see the sketch after this list).
3. Network connectivity between the initiator and the Target machine.
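As a minimal troubleshooting sketch for item 2, reusing the illustrative TestTarget2 name from earlier; remember to restore the intended initiator list once connectivity is confirmed:
# Troubleshooting only: temporarily allow any initiator to discover this Target
Set-IscsiServerTarget -TargetName TestTarget2 -InitiatorIds "IQN:*"
# Restore the real initiator list afterwards
Set-IscsiServerTarget -TargetName TestTarget2 -InitiatorIds "IQN:iqn.1991-05.com.microsoft:vm1.contoso.com"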
Using the UI
Launch the iSCSI initiator Properties from Server Manager -> Tools
Go to the Discovery tab and click on Discover Portal. Add the IP address of the iSCSI Target Server.
After discovery, all the Targets from the server will be listed in the “Discovered Targets” box. Select the one you want to connect to, and click Connect. This allows the initiator to connect to the Target and access the associated disks.
The Connect button launches a “Connect to Target” dialog box. If the Target is not configured with CHAP, you can simply click “OK” to connect. To specify CHAP information, click the Advanced button, check the “Enable CHAP log on” box, and provide the CHAP information. Advanced configuration, specifying IPs for the iSCSI connection: if you want to dedicate iSCSI traffic to a specific set of NICs, you can specify that under “Connect using”. By default, any IP can be used for the iSCSI connection. Note that if a specific IP is configured and the IP address later changes due to DHCP, the iSCSI initiator will not be able to reconnect after a reboot. You will need to change the IP on this page, then connect.
Connection established.
Using cmdlets
By default, the iSCSI initiator service is not started, and the cmdlets will not work. If you launch the iSCSI initiator from the control panel, it will prompt you to start the service, and will also set the service to start automatically.
For the equivalent using cmdlets, you need to run:
Start-Service msiscsi
Set-Service msiscsi -StartupType "Automatic"
1. Specify the iSCSI Target Server name:
New-IscsiTargetPortal -TargetPortalAddress Netboot-1
This is similar to the discovery in the UI.
2. Get the available Targets (this is optional):
Get-IscsiTarget
3. Connect:
Connect-IscsiTarget -NodeAddress “iqn.1991-05.com.microsoft:netboot-1-nettarget-target”
If you want to connect all the Targets, you can also type:
Get-IscsiTarget | Connect-IscsiTarget
4. Register the Target as a favorite Target, so that it will reconnect when the initiator machine reboots.
Register-IscsiSession -SessionIdentifier "fffffa8004146020-4000013700000007"
You can get the SessionIdentifier from the output of Connect-IscsiTarget or Get-IscsiSession.
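To avoid copying session identifiers by hand, two hedged alternatives (the -IsPersistent parameter and the pipeline binding are per the iSCSI initiator module in Windows Server 2012):
# Make the connection persistent at connect time instead of registering it afterwards
Connect-IscsiTarget -NodeAddress "iqn.1991-05.com.microsoft:netboot-1-nettarget-target" -IsPersistent $true
# Or register every current session as a favorite
Get-IscsiSession | Register-IscsiSession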
Create new volume
Once the connection is established, the iSCSI virtual disk will be presented to the initiator as a disk. By default, this disk will be offline. For typical usage, you want to create a volume, format it, and assign a drive letter so it can be used just like a local hard disk.
Using Server Manager
You can right-click the disk to bring it online, but it is not necessary. If you run the New Volume wizard, the disk will be brought online automatically.
From the Server Manager -> File and Storage Services -> Volumes -> Disks page, check that disk 2 is in offline mode:
Launch the New Volume Wizard:
Select the disk. Disk 2 is the offline iSCSI virtual disk.
The UI will bring the disk online and initialize it to GPT. GPT is preferred; for more information, see here. If you have specific reasons to create an MBR partition, you will need to use the cmdlets.
Specify the volume size.
Assign a drive letter.
You can create either an NTFS or a ReFS volume. For more information about ReFS, please see the link.
Finally, the confirmation page to create the new volume.
Using cmdlets
The following cmdlets are provided by the Storage module:
1. Check if the initiator can see the disk: Get-Disk
2. Bring disk 3 online:
Set-Disk -Number 3 -IsOffline 0
3. Make disk 3 writable:
Set-Disk -Number 3 -IsReadOnly 0
4. Initialize disk 3:
Initialize-Disk -Number 3 -PartitionStyle MBR
5. Create a partition on disk 3 (to avoid a format-volume popup in Windows Explorer, we will not assign a drive letter at this time; we will do that after the volume is formatted):
New-Partition -DiskNumber 3 -UseMaximumSize -AssignDriveLetter:$False
6. Format the volume:
Get-Partition -DiskNumber 3 | Format-Volume
7. Assign the drive letter now:
Get-Partition -DiskNumber 3 | Add-PartitionAccessPath -AssignDriveLetter:$true
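For scripting, the same initiator-side steps can be compressed into one pipeline; a hedged sketch assuming disk number 3 and the GPT partition style preferred by the UI:
# Bring the disk online and writable, then initialize, partition, and format in one pass
Set-Disk -Number 3 -IsOffline $false
Set-Disk -Number 3 -IsReadOnly $false
Get-Disk -Number 3 |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS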
FAQs
If you have used a previous release of the iSCSI Target, the most noticeable change in Windows Server 2012 is the user experience. Some common questions are:
1. Installing the web download of iSCSI Target for Windows Server 2008 R2 on Windows Server 2012.
The installation might succeed, but you won’t be able to configure it. You need to uninstall the download, and enable the inbox iSCSI Target.
2. Trying to manage the iSCSI Target with the MMC snap-in
The new UI is integrated with Server Manager. Once the feature is enabled, you can manage the iSCSI Target from the “iSCSI” tab page. Server Manager\File and Storage Services\iSCSI
3. How to get all the cmdlets for the iSCSI Target?
Type “Get-Command -Module iSCSITarget”. The list shows all the cmdlets to manage the iSCSI Target.
4. Running the cmdlet scripts developed using the previous release
Although most of the cmdlets work, there are changes to parameters which may not be compatible. If you run into issues with cmdlet scripts developed against the previous release, please run Get-Help cmdletname to verify the parameter settings.
5. Running the WMI scripts developed using the previous release
Although most of the WMI classes are unchanged, some changes are not backward compatible. If you run into issues with WMI scripts developed against the previous release, please check the WMI classes and their parameters.
6. SMI-S support
The iSCSI Target does not have SMI-S support in Windows Server 2012.
7. VHD compatibility
The iSCSI virtual disk stores data in a VHD file. This VHD file is compatible with Hyper-V, i.e. you can load the VHD file using either the iSCSI Target or Hyper-V. Hyper-V in Windows Server 2012 has introduced a new virtual hard disk format, VHDX, which is not supported by the iSCSI Target. Refer to the table below for more details:
8. Cmdlet help
If you have any questions about a cmdlet, type “Get-Help cmdletname” to learn the usage. Before you can use Get-Help, you need to run “Update-Help -Module iSCSITarget”; this downloads the help content to your machine. Of course, this also implies you will need Internet connectivity to get the content. This is a new publishing model for help content, which allows it to be updated dynamically.
9. Storage Provider and iSCSI Target Version interop matrix
The iSCSI Target has had a few releases in the past; below are the version numbers and the supported OS each release runs on.
iSCSI Target 3.2 <-> Windows Storage Server 2008
iSCSI Target 3.3 <-> Windows Storage Server 2008 R2 and Windows Server 2008 R2
iSCSI Target (built-in) <-> Windows Server 2012
For each Target release, there is a corresponding storage provider package which allows remote management. The table below shows the interop matrix.
Note:
1: Storage provider 3.3 on Server 2012 can manage iSCSI Target 3.2. This has been tested.
2: the 2012 down-level storage provider is a web download, which is planned to be released at RTM.
10. Does the iSCSI Target support Storage Spaces?
Storage Spaces is a new feature in Windows 8 and Windows Server 2012 which provides storage availability and resiliency with commodity hardware. You can find more information about the Storage Spaces feature here. Hosting iSCSI virtual disks on Storage Spaces is supported. Using iSCSI LUNs in a Storage Spaces pool is not supported. Below is a topology diagram illustrating the two scenarios:
Supported setup
Not supported setup
Conclusion
I hope this helps you get started with the iSCSI Target in Windows Server 2012, or makes for a smoother transition from the previous user experience. This blog only covers the most basic configurations. If you have questions that are not covered, please raise them in the comments, so I can address them in upcoming postings.
Dynamic Access Control intro on Windows Server blog
Hello all, we just published a new post for Dynamic Access Control on the Windows Server Blog:
Here’s an excerpt:
These focus areas were then translated to a set of Windows capabilities that enable data compliance in partner and Windows-based solutions.
- Add the ability to configure Central Access and Audit Policies in Active Directory. These policies are based on conditional expressions that take into account the following so that organizations can translate business requirements to efficient policy enforcement and considerably reduce the number of security groups needed for access control:
- Who the user is
- What device they are using, and
- What data is being accessed
- Integrate claims into Windows authentication (Kerberos) so that users and devices can be described not only by the security groups they belong to, but also by claims such as: “User is from the Finance department” and “User’s security clearance is High”
- Enhance the File Classification Infrastructure to allow business owners and users to identify (tag) their data so that IT administrators are able to target policies based on this tagging. This ability works in parallel with the ability of the File Classification Infrastructure to automatically classify files based on content or any other characteristics
- Integrate Rights Management Services to automatically protect (encrypt) sensitive information on servers so that even when the information leaves the server, it is still protected.
If you are looking for more depth and “how it works”, check out the whitepaper published by Mike Stephens: Understand and Troubleshoot Dynamic Access Control in Windows Server "8" Beta
Nir Ben-Zvi