Storage at Microsoft

Windows Server 2016 NTFS sparse file/Data Deduplication users: please install KB4025334


Hi folks,

KB4025334 prevents a critical data corruption issue with NTFS sparse files in Windows Server 2016. This helps avoid data corruptions that may occur when using Data Deduplication in Windows Server 2016, although all applications and Windows components that use sparse files on NTFS benefit from applying this update. Installation of this KB helps avoid any new or further corruptions for Data Deduplication users on Windows Server 2016. This does not help recover existing corruptions that may have already happened. This is because NTFS incorrectly removes in-use clusters from the file and there is no ability to identify what clusters were incorrectly removed after the fact. Although KB4025334 is an optional update, we strongly recommend that all NTFS users, especially those using Data Deduplication, install this update as soon as possible. This fix will become mandatory in the “Patch Tuesday” release for August 2017.

For Data Deduplication users, this data corruption is particularly hard to notice as it is a so-called “silent” corruption – it cannot be detected by the weekly Dedup integrity scrubbing job. Therefore, KB4025334 also includes an update to chkdsk to help identify which files are corrupted. Affected files can be identified using chkdsk with the following steps:

  1. Install KB4025334 on your server from the Microsoft Update Catalog and reboot. If you are running a Failover Cluster, this patch will need to be applied to all nodes in the cluster.
  2. Run chkdsk in read-only mode (this is the default mode for chkdsk).
  3. For potentially corrupted files, chkdsk will report something like the following
    The total allocated size in attribute record (128, "") of file 20000000000f3 is incorrect.

    where 20000000000f3 is the file id. Note all affected file ids.
  4. Use fsutil to look up the name of the file by its file id. This should look like the following:

    E:\myfolder> fsutil file queryfilenamebyid e:\ 0x20000000000f3
    A random link name to this file is \\?\E:\myfolder\TEST.0

    where E:\myfolder\TEST.0 is the affected file.
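
If chkdsk reports many file ids, a small loop can resolve them all. Here's a minimal sketch; the volume letter and file id below are the examples from the steps above, so substitute your own:

$Volume  = "E:\"
$FileIds = @("0x20000000000f3")    # add every file id noted in step 3

foreach ($Id in $FileIds) {
    # Prints a random link name (a \\?\ path) for each affected file.
    fsutil file queryfilenamebyid $Volume $Id
}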

We’re very sorry for the inconvenience this issue has caused. Please don’t hesitate to reach out in the comment section below if you have any additional questions about KB4025334, and we’ll be happy to answer.


Understanding SSD endurance: drive writes per day (DWPD), terabytes written (TBW), and the minimum recommended for Storage Spaces Direct


Hi! I’m Cosmos. Follow me on Twitter @cosmosdarwin.

Background

Storage Spaces Direct in Windows Server 2016 features a built-in, persistent, read and write cache to maximize storage performance. You can read all about it at Understanding the cache in Storage Spaces Direct. In all-flash deployments, NVMe drives typically cache for SATA/SAS SSDs; in hybrid deployments, NVMe or SATA/SAS SSDs cache for HDDs.

The built-in, persistent, read and write cache in Storage Spaces Direct.

In any case, the cache drives will serve the overwhelming majority of IO, including 100% of writes. This is essential to delivering the unrivaled performance of Storage Spaces Direct, whether you measure that in millions of IOPS, Tb/s of IO throughput, or consistent sub-millisecond latency.

But nothing is free: these cache drives are liable to wear out quickly.

Review: What is flash wear

Solid-state drives today almost universally consist of NAND flash, which wears out with use. Each flash memory cell can only be written so many times before it becomes unreliable. (There are numerous great write-ups online that cover all the gory details – including on Wikipedia.)

You can watch this happen in Windows by looking at the Wear reliability counter in PowerShell:

PS C:\> Get-PhysicalDisk | Get-StorageReliabilityCounter | Select Wear

Here’s the output from my laptop – my SSD is about 5% worn out after two years.

Screenshot showing Wear of an SSD in Windows PowerShell.

Note: Not all drives accurately report this value to Windows. In some cases, the counter may be blank. Check with your manufacturer to see if they have proprietary tooling you can use to retrieve this value.

Generally, reads do not wear out NAND flash.

Quantifying flash endurance

Measuring wear is one thing, but how can we predict the longevity of an SSD?

Flash “endurance” is commonly measured in two ways:

  • Drive Writes Per Day (DWPD)
  • Terabytes Written (TBW)

Both approaches are based on the manufacturer’s warranty period for the drive, its so-called “lifetime”.

Drive Writes Per Day (DWPD)

Drive Writes Per Day (DWPD) measures how many times you could overwrite the drive’s entire size each day of its life. For example, suppose your drive is 200 GB and its warranty period is 5 years. If its DWPD is 1, that means you can write 200 GB (its size, one time) into it every single day for the next five years.

If you multiply that out, that’s 200 GB per day × 365 days/year × 5 years = 365 TB of cumulative writes before you may need to replace it.

If its DWPD was 10 instead of 1, that would mean you can write 10 × 200 GB = 2 TB (its size, ten times) into it every day. Correspondingly, that’s 3,650 TB = 3.65 PB of cumulative writes over 5 years.

Terabytes Written (TBW)

Terabytes Written (TBW) directly measures how much you can write cumulatively into the drive over its lifetime. Essentially, it just includes the multiplication we did above in the measurement itself.

For example, if your drive is rated for 365 TBW, that means you can write 365 TB into it before you may need to replace it.

If its warranty period is 5 years, that works out to 365 TB ÷ (5 years × 365 days/year) = 200 GB of writes per day. If your drive was 200 GB in size, that’s equivalent to 1 DWPD. Correspondingly, if your drive was rated for 3.65 PBW = 3,650 TBW, that works out to 2 TB of writes per day, or 10 DWPD.

As you can see, if you know the drive’s size and warranty period, you can always get from DWPD to TBW or vice-versa with some simple multiplications or divisions. The two measurements are really very similar.
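
If you'd like to script the conversion, here's a minimal PowerShell sketch (the function name and layout are ours, purely illustrative):

<#
Convert-DwpdToTbw
Converts DWPD to cumulative TBW using the relationship above.

:param $Dwpd: [double] Drive writes per day.
:param $SizeGB: [double] Drive size in GB.
:param $Years: [double] Warranty period in years.
:return: [double] Cumulative terabytes written over the warranty period.
#>
Function Convert-DwpdToTbw {
    Param (
        $Dwpd,
        $SizeGB,
        $Years
    )
    # TBW = DWPD × size × 365 days/year × years (GB converted to TB)
    $Dwpd * $SizeGB * 365 * $Years / 1000
}

Convert-DwpdToTbw -Dwpd 1 -SizeGB 200 -Years 5    # 365 (TBW)
Convert-DwpdToTbw -Dwpd 10 -SizeGB 200 -Years 5   # 3,650 (TBW)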

What’s the difference?

The only real difference is that DWPD depends on the drive’s size whereas TBW does not.

For example, consider an SSD which can take 1,000 TB of writes over its 5-year lifetime.

Suppose the SSD is 200 GB:

              1,000 TB ÷ (5 years × 365 days/year × 200 GB) = 2.74 DWPD

Now suppose the SSD is 400 GB:

              1,000 TB ÷ (5 years × 365 days/year × 400 GB) = 1.37 DWPD

The resulting DWPD is different! What does that mean?

On the one hand, the larger 400 GB drive can do the exact same cumulative writes over its lifetime as the smaller 200 GB drive. Looking at TBW, this is very clear – both drives are rated for 1,000 TBW. But looking at DWPD, the larger drive appears to have just half the endurance! You might argue that because under the same workload, it would perform “the same”, using TBW is better.

On the other hand, you might argue that the 400 GB drive can provide storage for more workload because it is larger, and therefore its 1,000 TBW spreads more thinly, and it really does have just half the endurance! By this reasoning, using DWPD is better.

The bottom line

You can use the measurement you prefer. It is almost universal to see both TBW and DWPD appear on drive spec sheets today. Depending on your assumptions, there is a compelling case for either.

Recommendation for Storage Spaces Direct

Our minimum recommendation for Storage Spaces Direct is listed on the Hardware requirements page.

As of mid-2017, for cache drives:

  • If you choose to measure in DWPD, we recommend 3 or more.
  • If you choose to measure in TBW, we recommend 4 TBW per day of lifetime. Spec sheets often provide TBW cumulatively, which you’ll need to divide by its lifetime. For example, if your drive has a warranty period of 5 years, then 4 TB × 365 days/year × 5 years = 7,300 TBW = 7.3 PBW total.

Often, one of these measurements will work out to be slightly less strict than the other.

You may use whichever measurement you prefer.

There is no minimum recommendation for capacity drives.

Addendum: Write amplification

You may be tempted to reason about endurance from IOPS numbers, if you know them. For example, if your workload generates (on average) 100,000 IOPS which are (on average) 4 KiB each of which (on average) 30% are writes, you may think:

              100,000 × 30% × 4 KiB = 120 MB/s of writes

              120 MB/s × 60 secs/min × 60 mins/hour × 24 hours = approx. 10 TBW/day

If you have four servers with two cache drives each, that’s:

              10 TBW/day ÷ (8 total cache drives) = approx. 1.25 TBW/day per drive

Interesting! Less than 4 TBW/day!

Unfortunately, this is flawed math because it does not account for write amplification.

Write amplification is when one write (at the user or application layer) becomes multiple writes (at the physical device layer). Write amplification is inevitable in any storage system that guarantees resiliency and/or crash consistency. The most blatant example in Storage Spaces Direct is three-way mirror: it writes everything three times, to three different drives.
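
As a rough illustration, here's the earlier estimate redone assuming only the three-way mirror's 3x amplification (real deployments add more, as described next):

$HostWrites   = 10    # TBW/day at the top of the stack, from the example above
$MirrorCopies = 3     # three-way mirror writes everything three times
$CacheDrives  = 8     # four servers with two cache drives each

# Per-drive writes per day, counting only mirror amplification.
$HostWrites * $MirrorCopies / $CacheDrives    # 3.75 TBW/day, already near the 4 TBW/day minimum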

There are other sources of write amplification too: repair jobs generate additional IO; data deduplication generates additional IO; the filesystem, and many other components, generate additional IO by persisting their metadata and log structures; etc. In fact, the drive itself generates write amplification from internal activities such as garbage collection! (If you’re interested, check out the JESD218 standard methodology for how to factor this into endurance calculations.)

This is all necessary and good, but it makes it difficult to derive drive-level IO activity at the bottom of the stack from application-level IO activity at the top of the stack in any consistent way. That’s why, based on our experience, we publish the minimum DWPD and TBW recommendation.

Let us know what you think! 😊

Storage Spaces Direct with Cavium FastLinQ® 41000


Hello, Claus here again. I am very excited about how the RDMA networking landscape is evolving. We took RDMA mainstream in Windows Server 2012 when we introduced SMB Direct, and even more so in Windows Server 2016, where Storage Spaces Direct leverages SMB Direct for east-west traffic.

More partners than ever offer RDMA-enabled network adapters. Most partners focus on either iWARP or RoCE. In this post, we are taking a closer look at the Microsoft SDDC-Premium certified Cavium FastLinQ® 41000 RDMA adapter, which comes in 10G, 25G, 40G, and even 50G versions. The FastLinQ® is a unique NIC, in that it supports both iWARP and RoCE, and can do both at the same time. This provides great flexibility for customers: they can deploy the RDMA technology of their choice, or they can connect both Hyper-V hosts with RoCE adapters and Hyper-V hosts with iWARP adapters to the same Storage Spaces Direct cluster equipped with FastLinQ® 41000 NICs.

 

Figure 1 Cavium FastLinQ® 41000

We use a 4-node cluster, each node configured with the following hardware:

  • DellEMC PowerEdge R730XD
  • 2x Intel® Xeon® E5-2697v4 (18 cores @ 2.3 GHz)
  • 128GiB DDR4 DRAM
  • 4x 800GB Dell Express Flash NVMe SM1715
  • 8x 800GB Toshiba PX04SHB080 SSD
  • Cavium FastLinQ® QL41262H 25GbE Adapter (2-Port)
  • BIOS configuration
    • BIOS performance profile
    • C States disabled
    • HT On

We deployed Windows Server 2016 Storage Spaces Direct and VMFleet with:

  • 4x 3-way mirror CSV volumes
  • Cache configured for read/write
  • 18 VMs per node

First, we configured VMFleet for throughput. Each VM runs DISKSPD, with 512KB IO size at 100% read at various queue depths:

512K bytes, 100% read (BW in GB/s, read latency in ms):

Queue Depth |  iWARP BW / latency  |  RoCE BW / latency  |  iWARP and RoCE BW / latency
    1       |     33.0 / 1.1       |     32.2 / 1.2      |          33.2 / 1.1
    2       |     39.7 / 1.9       |     39.4 / 1.9      |          40.1 / 1.9
    4       |     41.0 / 3.7       |     40.6 / 3.7      |          41.0 / 3.7
    8       |     41.4 / 7.4       |     41.1 / 7.4      |          41.6 / 7.4

Aggregate throughput is very close to what’s possible with the cache devices in the system. Also, the aggregate throughput and latency are very consistent whether using iWARP, RoCE, or both at the same time. In these tests, DCB is configured to enable PFC for RoCE, while iWARP runs without any DCB configuration.
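
For reference, a typical PFC setup for RoCE looks something like the following sketch (the priority value and adapter name are illustrative, not the exact lab configuration):

# Tag SMB Direct (TCP port 445) traffic with priority 3 and enable PFC for it.
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
Enable-NetQosFlowControl -Priority 3
# Apply DCB/QoS on the RDMA adapter (name is an example).
Enable-NetAdapterQos -Name "SLOT 2 Port 1"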

Next, we reconfigured VMFleet for IOPS. Each VM runs DISKSPD, with 4KB IO size at 90% read and 10% write at various queue depths:

4K bytes, 90% read / 10% write (read latency in ms):

Queue Depth |  iWARP IOPS / latency  |  RoCE IOPS / latency  |  iWARP and RoCE IOPS / latency
    1       |    272,588 / 0.253     |    268,107 / 0.258    |         271,004 / 0.256
    2       |    484,532 / 0.284     |    481,493 / 0.287    |         482,564 / 0.284
    4       |    748,090 / 0.367     |    729,442 / 0.375    |         740,107 / 0.372
    8       |  1,177,243 / 0.465     |  1,161,534 / 0.474    |       1,164,115 / 0.472

Again, very similar and consistent IOPS rates and latency numbers for iWARP, RoCE or when using both at the same time.

As mentioned in the beginning, more and more partners are offering RDMA network adapters, most focusing on either iWARP or RoCE. The Cavium FastLinQ® 41000 can do both, which means customers can deploy either or both, or even change over time if the need arises. The numbers look very good and consistent regardless of whether it is used with iWARP, RoCE, or both at the same time.

What do you think?

Until next time

Claus

Object Storage Survey for Windows


Hi folks, Ned here again. We have released a new survey for on-premises object storage on Windows. We want to better understand your specific object storage workloads in the datacenter and design these products to meet your needs. The survey will only take a couple of minutes and is completely anonymous.

Click here for survey 

Note: this is different than the previous survey you might have filled out a few weeks ago. We’re iterating. 🙂

Thanks!

Storage Spaces Direct with Samsung Z-SSD™


Hello, Claus here again.

Today we are going to take a look at a new device from Samsung, the SZ985, which is marketed as an ultra-low-latency NVMe SSD based on Samsung Z-NAND flash memory and a new NVMe controller. It offers ~3GB/s throughput and random read operations in 20µs, with capacities up to 3.2TB and endurance of 30 drive writes per day (DWPD).

We added two Z-SSD devices to each server in a 4-node cluster, each node configured with the following hardware:

  • 2x Intel® Xeon® E5-2699v4 (22 cores @ 2.2 GHz)
  • 128GiB DDR4 DRAM
  • 2x 800GB Samsung Z-SSD
  • 20x SATA SSD
  • 1x Mellanox CX-3 Pro 2x40Gb
  • BIOS configuration
    • BIOS performance profile
    • C States disabled
    • Hyper-threading on
    • Speedstep/Turbo on

We deployed Windows Server 2016 Storage Spaces Direct and VMFleet with:

  • 4x 3-way mirror CSV volumes
  • Cache configured for read/write
  • 44 VMs per node, each with
    • DISKSPD v2.0.17
    • 5GB working set (~900GB total)
    • 1 IO thread
    • 8 QD

First, we took a look at the 100% read scenario. The graph shows that observed latency at the top of the storage stack stayed relatively constant as we ramped IOPS. The high point is 200µs @ 200K IOPS, staying between 100-150µs at 400K+ IOPS.

The graph below shows CPU utilization increasing linearly as we ramp up IOPS, which is expected.

Second, we took a look at the 90% read and 10% write scenario, which is more common. Writes have to be performed on multiple nodes to ensure resiliency, which involves network communication and is thus a bit slower than local read operations, but latency stayed under 1ms even at 1M+ IOPS, and reads stayed very close to what was seen with 100% read.

Similar to the 100% read scenario, CPU utilization increases linearly as we increase IOPS pressure on the system in the 90% read and 10% write scenario.

It is good to see the innovation in driving down latency in flash storage to the benefit of relational database servers, like SQL Server, and caches, like the Storage Spaces Direct cache. I look forward to seeing these devices in our Windows Server Software-Defined datacenter solutions.

What do you think?

Until next time

Claus

 

Survey: Storage Replica “Lite”

Hey folks, Ned here again. Are you interested in a reduced-cost but reduced-functionality version of Storage Replica? We are too. Come take a 2-minute survey: https://aka.ms/srlite1

Ned “this survey promises nothing” Pyle

Work Folders on-demand file access feature for Windows 10


We’re excited to announce the on-demand file access feature for Work Folders will be available in the next Windows 10 release! The on-demand file access feature enables you to see and access all of your files. You control which files are stored on your PC and available offline. The rest of your files are always visible and don’t take up any space on your PC, but you need connectivity to the Work Folders server to access them.

Prerequisites

If you’re interested in evaluating the on-demand file access feature prior to the next Windows 10 release, join the Windows Insider program and install Windows 10 build 17063 or later.

How to enable the on-demand file access feature

There are three options to enable the on-demand file access feature:

Option #1: Work Folders setup wizard
When configuring Work Folders on a PC, verify the Enable on-demand file access on this PC setting is selected in the setup wizard.

Option #2: Work Folders control panel applet
If Work Folders is currently configured on a PC, open the Work Folders control panel applet and select the Enable on-demand file access setting.

Option #3: Work Folders group policy setting
Administrators can control the on-demand file access feature on PCs by setting the On-demand file access preference group policy setting.

These options can also be used to disable the on-demand file access feature if you want all files to be available offline on a PC.

File status in File Explorer

After enabling the on-demand file access feature, your files and folders stored in the Work Folders directory will have these statuses in File Explorer:

Available when online

Available when online files don’t take up space on your computer. You see a cloud icon for each Available when online file in File Explorer, but the file doesn’t download to your device until you open it. You can only open Available when online files when your device is connected to the internet. However, your Available when online files will always be visible in File Explorer even if you are offline.

Available on this device

When you open an Available when online file, it downloads to your device and becomes an Available on this device file. You can open an Available on this device file anytime, even without Internet access. If you need more space, you can change the file back to Available when online. Just right-click the file and select Free up space.

Always available on this device

Only files that you mark as Always keep on this device have the green circle with the white check mark. These files will always be available even when you’re offline. They are downloaded to your device and take up space.

Frequently Asked Questions

How do I make a file or folder available for offline use?

  • Right-click a file or folder in the Work Folders directory
  • Select Always keep on this device

How do I make a file or folder available when online?

  • Right-click a file or folder in the Work Folders directory
  • Select Free up space

I upgraded my PC to Windows 10 build 17063 or later, why do I not see the “Always keep on this device” and “Free up space” options in File Explorer when I right-click a file or folder?

  • The on-demand file access feature is disabled by default if Work Folders was configured on the PC prior to upgrading. To enable the on-demand file access feature, select the Enable on-demand file access setting in the Work Folders control panel applet or set the On-demand file access preference group policy setting.

Feedback or Issues

If you have feedback or experience an issue, please post in the File Services and Storage TechNet forum.


Survey: Local Users and Groups on Windows Server in AD domains

Hey folks, Ned here again. We need to understand how, or if, you still use local security principals on Windows Server in Active Directory environments. Come take a 60-second survey: https://aka.ms/LocalSecurity1

Ned “this survey is weird, right?” Pyle

It’s Gone To Plaid: Storage Replica and Chelsio iWARP Performance


Hi folks, Ned here again. A few years ago, I demonstrated using Storage Replica as an extreme data mover, not just as a DR solution; copying blocks is a heck of a lot more efficient than copying files. At the time, having even a single NVMe drive and RDMA networking was gee-whiz. Well, times have changed, and all-flash storage deployments are everywhere. Even better, RDMA networking like iWARP is becoming commonplace. When you combine Windows Server 2012 R2, Windows Server 2016, or the newly announced Windows Server 2019 with ultrafast flash storage and ultrafast networking, you can get amazing speed results.

What sort of speeds are we talking about here?

The Gear

The good folks at Chelsio – makers of the iWARP RDMA networking used by SMB Direct – set up a pair of servers with the following config:

  • OS: Windows 2016
  • System Model: 2x Supermicro X10DRG-Q
  • RAM:128GB per node
  • CPU: Intel(R) Xeon(R) CPU E5-2687W v4 @ 3.00GHz (2 sockets, 24 cores) per node
  • INTEL NVME SSD Model: SSDPECME016T4 (1.6TB) – 5x in source node
  • Micron NVME SSD Model: MTFDHAX2T4MCF-1AN1ZABYY (2.4TB) – 5x in destination node
  • 2x Chelsio T6225-CR 25Gb iWARP RNICs
  • 2x Chelsio T62100-CR 100Gb iWARP RNICs

25 and 100 gigabit networking that offloads transfers from the CPU and does remote direct memory placement with SMB!? Yes please.

The Goal

We wanted to see if Storage Replica block copying with NVMe could fully utilize an iWARP RDMA network, and what the CPU overhead would look like. When using NVMe drives, servers under high data transfer workloads are much more likely to run out of networking than out of storage IOPS or throughput. 10Gb Ethernet and TCP simply cannot keep up, and their need to use the motherboard’s CPU for all the work restricts perf even further.

We already knew that straight file copying would not be able to match the perf of Storage Replica block copy, and would also show significant CPU usage on each node. But where would the bottleneck be now?

The Grades

25Gb

First, I tried the 25Gb RDMA network, configuring Storage Replica to perform initial sync and clone the entire 2TB volume residing on top of the storage pool.

As you can see, this immediately consumed the entire 25Gb network. The NVMe is just too fast, and Storage Replica is a kernel-mode disk filter that pumps data blocks at the line rate of the storage.

The CPU and memory usage are looking very low. This is the advantage that SMB Direct and RDMA offloading bring to the table; the server is left with all its resources to do its real job, not deal with user-mode nonsense.

In the end this is quite a respectable run and the data moved very fast. Copying 2TB in 12 minutes with no real CPU or memory hit is great by any definition.

 

But we can do better 😁.

100Gb

Same test with the same servers, storage, volumes, and Storage Replica – except this time I’m using 100Gb Chelsio iWARP networking.

I like videos. Let’s watch a video this time (turn on CC if you’re less familiar with SR and crank the resolution).

Holy smokes!!! The storage cannot keep up with the networking. Let me restate:

The striped NVME drives cannot keep up with SMB Direct and iWARP.

We just pushed 2 terabytes of data over SMB 3.1.1 and RDMA in under three minutes. That’s ~10 gigabytes a second.

The Rundown

When you combine Windows Server and Chelsio iWARP RDMA, you get ultra-low latency, low-CPU, low-memory, high throughput SMB and workload performance in:

  • Storage Spaces Direct
  • Storage Replica
  • Hyper-V Live Migration
  • Windows Server and Windows 10 Enterprise client SMB operations

You will not be disappointed.

A huge thanks to the good folks at Chelsio for the use of their loaner gear and lab. Y’all rock.

– Ned Pyle


Storage Spaces Direct: 10,000 clusters and counting!


It’s been 18 months since we announced general availability of Windows Server 2016, the first release to include Storage Spaces Direct, software-defined storage for the modern hyper-converged datacenter. Today, we’re pleased to share an update on Storage Spaces Direct adoption.

We’ve reached an exciting milestone: there are now over 10,000 clusters worldwide running Storage Spaces Direct! Organizations of all sizes, from small businesses deploying just two nodes, to large enterprises and governments deploying hundreds of nodes, depend on Windows Server and Storage Spaces Direct for their critical applications and infrastructure.

Hyper-Converged Infrastructure is the fastest-growing segment of the on-premises server industry. By consolidating software-defined compute, storage, and networking into one cluster, customers benefit from the latest x86 hardware innovation and achieve cost-effective, high-performance, and easily-scalable virtualization.

We’re deeply humbled by the trust our customers place in Windows Server, and we’re committed to continuing to deliver new features and improve existing ones based on your feedback. Later this year, Windows Server 2019 will add deduplication and compression, support for persistent memory, improved reliability and scalability, an entirely new management experience, and much more for Storage Spaces Direct.

Looking to get started? We recommend these Windows Server Software-Defined offers from our partners. They are designed, assembled, and validated against our reference architecture to ensure compatibility and reliability, so you get up and running quickly.

To our customers and our partners, thank you.

Here’s to the next 10,000!

Note on methodology: the figure cited is the number of currently active clusters reporting anonymized census-level telemetry, excluding internal Microsoft deployments and those that are obviously not production, such as clusters that exist for less than 7 days (e.g. demo environments) or single-node Azure Stack Development Kits. Clusters which cannot or do not report telemetry are also not included.

Introducing the Windows Server Storage Migration Service


Hi Folks, Ned here again with big news: Windows Server 2019 Preview contains an entirely new feature!

The Storage Migration Service helps you migrate servers and their data without reconfiguring applications or users.

  • Migrates unstructured data from anywhere into Azure & modern Windows Servers
  • It’s fast, consistent, and scalable
  • It takes care of complexity
  • It provides an easily-learned graphical workflow

My team has been working on this critter for some time and today you’ll learn about what it can do now, what it will do at RTM, and what the future holds.

Did I mention that it comes in both Standard and Datacenter editions and has a road map that includes SAN, NAS, and Linux source migrations?

Come with me…

Why did we make this?

You asked us to! No really, I dug through endless data, advisory support cases, surveys, and first-party team conversations to find that the #1 issue keeping customers on older servers was simply that migration is hard, and we don’t provide good tools. Think about what you need to get right if you want to replace an old file server with a new one, and not cause data loss, service interruption, or outright disaster:

  • All data must transfer
  • All shares and their configuration must transfer
  • All share and file system security must transfer
  • All in-use files must transfer
  • All files you, the operator, don’t have access to must transfer
  • All files that changed since the last time you transferred must transfer
  • All use of local groups and users must transfer
  • All data attributes, alternate data streams, encryption, compression, etc. must transfer
  • All network addresses must transfer
  • All forms of computer naming, alternate naming, and other network resolution must transfer
  • Whew!

If you were to wander into this Spiceworks data on market share (a little old but still reasonably valid), you’ll see some lopsided ratios:

(source)

A year and a half later, there are a few million Windows Server 2016 nodes in market that have squeezed this balloon, but there’s even more Windows Server 2012, still plenty of the 2008 family, plus too much wretched, unsupported Windows Server 2003. Did you know that Windows Server 2008 support ends in January of 2020? Just 20 months away from the end of life for WS2008, and we still have all this 2003!

The Storage Migration Service (Updated: April 12, 2018)

Important: This section of the blog post is going to change very often, as with the Windows Insider preview system and Windows Admin Center’s preview extension system, I can give you new builds, features and bug fixes very rapidly. You’ll want to check back here often.

Windows Server 2019 and the Storage Migration Service are not supported in production environments!

In this first version, the feature copies over SMB (any version). Azure File Sync servers, IaaS VMs running in Azure or MAS, and traditional on-prem hardware and VMs are all valid targets.

The feature consists of an orchestrator service and one or more deployed proxy services. Proxies add functionality and performance to the migration process, while the orchestrator manages the migration and stores all results in a database.

The Storage Migration Service operates in three distinct phases:

  1. Inventory – an administrator selects nodes to migrate and the Storage Migration Service orchestrator node interrogates their storage, networking, security, SMB share settings, and data to migrate
  2. Transfer – the administrator creates pairings of source and destinations from that inventory list, then decides what data to transfer and performs one or more transfers
  3. Cutover – (not yet available) – the administrator assigns the source networks to the destinations and the new servers take over the identity of the old servers. The old servers enter a maintenance state where they are unavailable to users and applications for later decommissioning, while the new servers use the subsumed identities to carry on all duties.

Walk through

Requirements

You’ll need the following to start evaluating this feature:

Orchestrator:

Supported source operating systems VM or hardware (to migrate from):

  • Windows Server 2003
  • Windows Server 2008
  • Windows Server 2008 R2
  • Windows Server 2012
  • Windows Server 2012 R2
  • Windows Server 2016
  • Windows Server 2019 Preview

Supported destination operating system VM or hardware (to migrate to):

  • Windows Server 2019 Preview*

* Technically your destination for migration can be any Windows Server OS, but we aren’t testing that currently. And when we release the faster and more efficient target proxy system in a few weeks it will only run on Windows Server 2019.

Security:

  • All computers above must be domain-joined (this will not be required in later releases)
  • You must provide a migration account that is an administrator on selected source computers
  • You must provide a migration account that is an administrator on selected destination computers
  • The following firewall rules must be enabled INBOUND on source and destination computers:
    • “File and Printer Sharing (SMB-In)”
    • “Netlogon Service (NP-In)”
    • “Windows Management Instrumentation (DCOM-In)”
    • “Windows Management Instrumentation (WMI-In)”
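
A minimal PowerShell sketch for enabling these rules on each computer (rule display names can vary by OS language; verify with Get-NetFirewallRule):

$Rules = "File and Printer Sharing (SMB-In)",
         "Netlogon Service (NP-In)",
         "Windows Management Instrumentation (DCOM-In)",
         "Windows Management Instrumentation (WMI-In)"

foreach ($Rule in $Rules) {
    # Enable each required inbound rule.
    Enable-NetFirewallRule -DisplayName $Rule
}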

Install

Use the Windows Admin Center, Server Manager, or PowerShell to install the Storage Migration Service. The Features to install on the orchestrator node are:

  • “Storage Migration Service”
  • “Storage Migration Service Proxy”
  • “Storage Migration Service tools” (under Remote Server Administration Tools, Feature Administration Tools)

Example, using Windows Admin Center
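
If you prefer PowerShell, a sketch like the following should do the same (the feature names here are our assumption for the preview build; confirm with Get-WindowsFeature *SMS*):

# Install the orchestrator service, proxy, and management tools in one shot.
Install-WindowsFeature -Name "SMS", "SMS-Proxy" -IncludeManagementTools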

Run your first test inventory and transfer

Now you’re ready to start migrating.

  1. Log on to your Windows Admin Center instance and connect to the orchestrator node as an administrator.

2.  Ensure the Storage Migration Service extension is in the Tools menu (if not, install the extension) and click it.

3. Observe the landing page. There is a summary that lists all active and completed jobs. Jobs contain one or more source computers to inventory, transfer, and cutover as part of a migration.

Example, with several previous jobs

4. You are about to begin the Inventory phase. Click New Job, enter a job name, then click Next.

5. Enter Source Credentials that are an administrator on your source (to be migrated from) computers and click Next.

6. Click Add Device and add one or more source computers. These must be Windows Server and should contain SMB Shares with test data on them that you want to migrate.

7. Click Start Scan and wait for the inventory to complete.

8. Observe the results. You can open the Details at the bottom of the page with the caret in the lower right. When done reviewing, click Finish Inventory.

9. You are now in the Transfer phase. You are always free to return to the Inventory phase and redefine what it gathers, get rid of the job, create a new job, or proceed forward to data transfer. As you can see, each phase operates in a similar fashion: provide credentials, set rules, define nodes, then run and review the results.

10. Provide credentials, destination computers mapped to the source computers, ensure each server you wish to migrate is set to Included in transfer, review your settings, then proceed with Start Transfer.

Note: The destination computer requires the same volumes exist as the source. In a coming preview you will be able to alter the mappings of volumes. Also, the Transfer Settings page is grayed out currently and you cannot update it.

11. Observe the migration. You will see data transfers occur in near real time (periodically refreshed) as the orchestrator copies data between source and destination nodes. When complete, examine the destination server and you’ll find that Storage Migration Service recreated all shares, folders, and files with matching security, attributes, and characteristics (see Known Issues below for not-yet-released functionality here). Its copy rate is currently comparable to single-threaded robocopy performance, per node. Note the Export option that allows you to save a complete database dump of the transfer operations for auditing purposes.

12. Cutover: we’ll unlock this last step for preview testing soon!

Known issues

Applies to:

  • Windows Server 2019 Insider Preview Build 17639
  • Storage Migration Service Extension Version 0.1.0

The following issues are known:

  • Extension:
    • Various CSS, alignment, typography fit and finish issues in the Windows Admin Center extension
    • Inventory does not validate hostnames
    • Volume mappings currently 1 to 1 and must match by letter
    • Only uses local proxy service on the gateway, not any proxies installed on destination nodes
    • Many transfer options not exposed yet
  • Orchestrator:
    • Cutover not available
    • Change detection when running multiple transfer passes not complete, optional destination data preservation not added yet
    • Transfer source account credentials are added to the destination root path ACL instead of perfectly preserving source security
    • Local principals not recreated on destination
    • No EFS, compressed files, reparse point support yet
    • No Directory attributes or Alternate Data Stream support yet
    • Database size limited by free space on system drive, stored under c:\programdata\storagemigrationservice
    • Newly transferred counts in Transfer SMB Detail can be inaccurate with nested shares
    • Some files may be recopied in second pass due to LMT mismatch
    • Incomplete event logs, messages
    • Not yet performance optimized for transfer (copies are single-threaded, not buffer optimized, etc.).

Roadmap

What does the future hold? We have a large debt of work already accumulated for when we complete the SMB and Windows Server transfer options. Things on the roadmap – not promised, just on the roadmap 😊:

  1. Network range and AD source computer scanning for inventory
  2. Samba, NAS, SAN source support
  3. NFS for Windows and Linux
  4. Block copy transfers instead of file-level
  5. NDMP support
  6. Hot and cold data detection on source to allow draining untouched old files
  7. Azure File Sync and Azure Migrate integration
  8. Server consolidation
  9. More reporting options

Feedback and Contact

Please make sure the issue isn’t already noted above before filing feedback and bugs!

  • Use the Feedback Hub tool included in Windows 10 to file bugs or feedback. When filing, choose Category “Server,” subcategory “” It helps routing if you put Storage Migration Service in the title.
  • You can also provide feature requests through our UserVoice page at https://windowsserver.uservoice.com/forums/295056-storage. Share with colleagues and industry peers in your network to upvote items that are important to you.
  • If you just want to chat with me privately about feedback or a request, send email to smsfeed@microsoft.com. Keep in mind that I may still make you go to feedback hub or UserVoice.

 

Now get to testing and keep an ear out for updates. We plan to send out new builds of Windows Server 2019 and the Storage Migration Service very often until we get close to RTM. I will post announcements here and on Twitter.

Here’s some background music to help

Ned “reel 2 real” Pyle

Manage Storage Spaces Direct in Windows Server 2016 with Windows Admin Center (Preview)


Hi! I’m Cosmos. Follow me @cosmosdarwin on Twitter.

At Microsoft Ignite 2017, we teased the next-generation in-box management experience for Storage Spaces Direct and Hyper-Converged Infrastructure built on Windows Admin Center, known then as ‘Project Honolulu’. Until now, this experience has required an Insider Preview build of Windows Server 2019. The most consistent feedback we’ve received by far has been to add support for Windows Server 2016.

The Hyper-Converged Cluster Dashboard in Windows Admin Center, version 1804.

Support for Windows Server 2016

Today, we’re delighted to announce it’s here! With the April update of Windows Admin Center and the latest Cumulative Update of Windows Server 2016, you can now use Windows Admin Center to manage the Hyper-Converged Infrastructure you already have today:

Windows Admin Center brings together compute, storage, and soon networking within one purpose-built, consistent, and interconnected experience. You can browse your host servers and drives; monitor performance and resource utilization across the whole cluster; enjoy radically simple workflows to provision and manage virtual machines and volumes; and much more.

The over 10,000 clusters worldwide running Storage Spaces Direct can now benefit from these capabilities.

Get started

To get started, download Windows Admin Center, the next-generation in-box management tool for Windows Server. It’s free, takes less than five minutes to install, and can be used without an Internet connection.

Then, install the April 17th 2018-04 Cumulative Update for Windows Server 2016, KB4093120, on every server in your Storage Spaces Direct cluster. The Hyper-Converged Infrastructure experience depends on new management APIs that are added in this update.

For more detailed instructions, read the documentation.

Feedback

Windows Admin Center for Hyper-Converged Infrastructure is being actively developed by Microsoft. Although the Windows Admin Center platform is generally available, the Hyper-Converged Infrastructure experience is still in Preview. It receives frequent updates that improve existing features and add new features.

Please share your feedback – let us know what’s working and what needs to be improved.

6 tutorials in under 6 minutes

If you’re just getting started, here are some quick Storage Spaces Direct tutorials to help you learn how Windows Admin Center for Hyper-Converged Infrastructure is organized and works. These videos were recorded with Windows Admin Center version 1804 and an Insider Preview build of Windows Server 2019.

Create volume, three-way mirror
Create volume, mirror-accelerated parity
Open volume and add files
Turn on deduplication and compression
Expand volume
Delete volume

For more things to try, see the documentation.

Let us know what you think!

Storage Replica Updates in Windows Server 2019 Insider Preview Build 17650


Heya folks, Ned here again. The folks over at Windows Server Insiders just dropped the new album: Build 17650. For those interested in having a job after a disaster strikes, there are three new Storage Replica options available:

  • Storage Replica Standard
  • Storage Replica Log v1.1
  • Storage Replica Test Failover

Storage Replica Standard

SR is now available on Windows Server 2019 Preview Standard Edition, not just on Datacenter Edition. When installed on servers running Standard Edition, SR has the following limitations:

  • SR replicates a single volume instead of an unlimited number of volumes.
  • Volumes can have one partnership instead of an unlimited number of partners.
  • Volumes can have a size of up to 2 TB instead of an unlimited size.

The experience is otherwise unchanged. You can still use standalone servers, cluster to cluster, or stretch clusters. You can still manage it all with Windows Admin Center or the built-in PowerShell. You can still pick between synchronous and asynchronous replication.
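
For instance, creating the single allowed partnership on Standard Edition looks just like it does on Datacenter (computer, group, and volume names below are placeholders):

New-SRPartnership -SourceComputerName "SRV1" -SourceRGName "RG01" `
    -SourceVolumeName "D:" -SourceLogVolumeName "L:" `
    -DestinationComputerName "SRV2" -DestinationRGName "RG02" `
    -DestinationVolumeName "D:" -DestinationLogVolumeName "L:"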

These limits are not final; see below for feedback options. If you ask for all the unlimited Datacenter features in SR, though, I will only nod politely. 🙂

Storage Replica Log v1.1

We made performance improvements to the SR log system, leading to far better replication throughput and latency, especially on all-flash arrays and Storage Spaces Direct (S2D) clusters that replicate between each other. To take advantage of this update, you must upgrade all servers participating in replication to Windows Server 2019. We’re not done here – we have an entirely new log we’ve been working on – but this optimization in the existing CLFS-based system makes big improvements. 

Storage Replica Test Failover

It’s now possible to mount a writable snapshot of replicated destination storage, in order to perform a backup of the destination or simply test your data and failover strategy. By grabbing an unused volume, we can temporarily mount a snapshot of the replicated storage. Replication of the original source continues unabated while you perform your tests; your data is never unprotected, and your snapshot changes will not overwrite it. When you are done, discard the snapshot. For steps on using this, check out https://aka.ms/srfaq under the section “Can I bring a destination volume online for read-only access?”
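
In PowerShell, the flow looks roughly like this (group, computer, and path names are placeholders; see the FAQ above for the authoritative steps):

# Mount a writable snapshot of the replicated destination on an unused volume (T: here).
Mount-SRDestination -Name "RG02" -ComputerName "SRV2" -TemporaryPath "T:\"
# ... run backups or tests against the snapshot ...
# Discard the snapshot when finished; replication continued the whole time.
Dismount-SRDestination -Name "RG02" -ComputerName "SRV2"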

Here’s a quick demo:

Please let us know how things go using https://windowsserver.uservoice.com or the Windows 10 Feedback Hub (Category: Server, Subcategory: Storage). We are very interested in your feedback, the sooner the better.

 

— Ned “Yes, you finally got Standard Edition, Aidan” Pyle

Storage Migration Service preview extension update 17666


Heya folks, Ned here again. We have released a new Windows Admin Center preview extension for Storage Migration Service, now at version 0.1.17666. It includes:

  1. UI look and feel updates with easier usability and workflow.
  2. Some UI bug fixes and associated performance improvements.
  3. A major performance fix that greatly reduces CPU and memory usage on the orchestrator and destination servers.
  4. Ability to directly file feedback & bugs from the extension.

If you are unfamiliar with the new Windows Server Storage Migration Service, it helps you migrate servers and their data without reconfiguring applications or users. It’s available in Windows Server 2019 Insider Preview release for testing and feedback.

  • Migrates unstructured data from anywhere into Azure & modern Windows Servers
  • It’s fast, consistent, and scalable
  • It takes care of complexity
  • It provides an easily-learned graphical workflow

For more info, please review https://aka.ms/stormigser. Feel free to ask me anything on @nerdpyle or email smsfeed@microsoft.com 

 

– Ned “text message” Pyle

Creating remediation actions for System Insights


Quick overview

System Insights enables you to configure custom remediation scripts to automatically address the issues detected by each capability. For each capability, you can set a custom PowerShell script for each prediction status. Once a capability returns a prediction status, System Insights automatically invokes the associated script to help address the reported issue, so that you can take corrective action automatically rather than needing to intervene manually.

You can read more on how to set remediation actions in the System Insights management page. This blog, however, provides concrete examples and PowerShell scripts to help you get started writing your own remediation actions.
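
For example, wiring a remediation script to the Warning status of the volume consumption forecasting capability looks roughly like this (the script path and credential are placeholders):

$Cred = Get-Credential
# Run ExtendVolumes.ps1 whenever this capability returns "Warning".
Set-InsightsCapabilityAction -Name "Volume consumption forecasting" -Type Warning `
    -Action "C:\Scripts\ExtendVolumes.ps1" -ActionCredential $Cred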

Parsing the capability output

The volume consumption forecasting and networking capacity forecasting capabilities report the most severe status across all of your volumes and network adapters, respectively. Before writing any remediation scripts, we need to determine which specific volumes or network adapters reported the result.

Fortunately, System Insights outputs the result of each capability into a JSON file, which contains the specific forecasting results for each volume or network adapter. This is really helpful, as the file format and output schema allow you to programmatically determine the status of specific volumes and network adapters. For the default forecasting capabilities, the JSON file uses the following schema:

  1. Status: Top level status.
  2. Status Description: Top level status description.
  3. Prediction Results: An array of prediction results. For volume and networking forecasting, this contains an entry for each volume or network adapter. For total storage and CPU forecasting, this only contains one entry.
    1. Identifier: The GUID of the instance. (This field is present for all capabilities, but it is only applicable to volumes and network adapters.)
    2. Identifier Friendly Name: The friendly name of the instance.
    3. Status: The status for the instance.
    4. Status Description: The status description for the instance.
    5. Limit: The upper limit for that instance, e.g. the volume size.
    6. Observation Series: An array of historical data that’s inputted into the forecasting capability:
      1. DateTime: The date the data point was recorded.
      2. Value: The observed value, e.g. the used size of the volume.
    7. Prediction: An array of predictions based on the historical data in the Observation Series.
      1. DateTime: The date of the predicted data point.
      2. Value: The value predicted.

Using the schema above, you can now write a script to return all volumes that have a prediction status:

 

<#
Get-Volumes-With-Specified-Status
Retrieves all volumes that have a given prediction status. 

:param $Status: [string] Prediction status to look for.
:return: [array] List of volumes with the relevant status.
#>
Function Get-Volumes-With-Specified-Status {
    Param (
        $Status
    )

    $Volumes = @()
    # Get the JSON result and store it in $Output.
    $Output = Get-Content (Get-InsightsCapabilityResult -Name "Volume consumption forecasting").Output | ConvertFrom-Json
  
    # Loop through volumes, checking if they have the specified status.
    $Output.PredictionResults | ForEach-Object {
        if ($_.Status -eq $Status) {
            # Format Id into expected format.
            $Id = $_.Identifier
            $Volumes += "\\?\Volume{$Id}\"
        }
    }
    $Volumes
}

Extending a volume

Now that you can determine the volumes that have a specific status, you can write more incisive remediation actions. One example is extending a volume when it’s forecasted to exceed the available capacity.
The script below makes a best-effort attempt to extend a volume by a specified fraction beyond its current size.

<#
Extend-Volume
If possible, extend the specified volume by a specified fraction beyond its current size. If it can't extend to that size, this function will extend to the maximum size possible.

:param $VolumeId: [string] ID of the volume to extend.
:param $Percentage: [float] Fraction to extend the volume by (e.g. 0.1 = 10%).
#>
Function Extend-Volume {
    Param (
        $VolumeId,
        $Percentage
    )

    $Volume = Get-Volume -UniqueId $VolumeId

    if ($Volume) {
        # See if the volume can be extended. 
        $Sizes = $Volume | Get-Partition | Get-PartitionSupportedSize

        # Must be able to extend by at least 1Mib
        if ($Sizes.sizeMax - $Volume.Size -le 1048576) {
            Write-Host "This volume can't be extended." -ForegroundColor Red
            return
        }

        $OldSize = $Volume.Size

        # Volume size if extended by the specified fraction beyond its current
        # size (e.g. $Percentage = 0.1 grows the volume by 10%).
        $ExtendedSize = $Volume.Size * (1 + $Percentage)

        # Select minimum of new size and max supported size.
        $NewSize = [math]::Min($ExtendedSize, $Sizes.sizeMax)
     
        try {
            # Extend partition
            $Volume | Get-Partition | Resize-Partition -Size $NewSize
            Write-Host "Successfully extended partition." -ForegroundColor Green
            Write-Host "   Old size: $OldSize."
            Write-Host "   New size: $NewSize."
       
         } catch {
             Write-Host "Failed to extend volume." -ForegroundColor Red
         }
    } 
    else {
        Write-Host "The volume with ID: $VolumeId wasn't found." -ForegroundColor Red
    }
}

Putting these together, you can extend all volumes that have reported a specific status:

# Example values (illustrative): extend every volume at "Warning" by 10%.
$Status = "Warning"
$ResizePercentage = 0.1

Get-Volumes-With-Specified-Status $Status | ForEach-Object {
    # Pass two separate arguments (a comma here would create a single array argument).
    Extend-Volume -VolumeId $_ -Percentage $ResizePercentage
}

Running disk cleanup

For total storage consumption forecasting or volume consumption forecasting, rather than provision more capacity or extending a volume, you can free up space on your machine by deleting unused system files using Disk Cleanup. The script below allows you to configure Disk Cleanup preferences, places those preferences in the registry, and then runs Disk Cleanup across all drives on your machine using the settings in the registry. (Some of these fields only apply to the boot drive, but disk cleanup will automatically determine the appropriate fields to clean on each drive.)

To set your preferences, uncomment the categories listed at the beginning of the script. Once you have uncommented your preferences, specify an ID for this set of preferences and run the script:

$Id = 6
.\DiskCleanupScript.ps1 -UserCleanupId $Id

Warning: The following script is pretty long due to the many different options exposed by Disk Cleanup, but the actual logic that runs Disk Cleanup is pretty straightforward and can be found at the bottom of the script.

 

param(
# Clean up ID must be an integer between 1-9999
[string] $UserCleanupId
)

<#
Create-Cleanup-List
Creates a list of the items Disk Cleanup will try to clean.

:return: [array] An array of the various items you wish to clean.
#>
Function Create-Cleanup-List {

    # Array to store the file types to clean up.
     $ToClean = @()

    <#
    Item: Temporary Setup Files
    Description: These files should no longer be needed. They were originally created by a setup program that is no longer running.
    #>
    # $ToClean += "Active Setup Temp Folders"

    <#
    Item: Old Chkdsk Files
    Description: When Chkdsk checks your disk drive for errors, it might save lost file fragments as files in your disk drive's root folder. These files are unnecessary and can be removed.
    #>
    $ToClean += "Old ChkDsk Files"

    <#
    Item: Setup Log Files
    Description: Files created by Windows.
    #>
    # $ToClean += "Setup Log Files"

    <#
    Item: Windows Update Cleanup
    Description: Windows keeps copies of all installed updates from Windows Update, even after installing newer versions of updates. Windows Update cleanup deletes or compresses older versions of updates that are no longer needed and taking up space. (You might need to restart your computer.)
    #>
    # $ToClean += "Update Cleanup"

    <#
    Item: Windows Defender Antivirus
    Description: Non-critical files used by Windows Defender Antivirus.
    #>
    # $ToClean += "Windows Defender"

    <#
    Item: Windows Upgrade Log Files
    Description: Windows upgrade log files contain information that can help identify and troubleshoot problems that occur during Windows installation, upgrade, or servicing. Deleting these files can make it difficult to troubleshoot installation issues.
    #>
    # $ToClean += "Windows Upgrade Log Files"

    <#
    Item: Downloaded Program Files
    Description: Downloaded Program Files are ActiveX controls and Java applets downloaded automatically from the Internet when you view certain pages. They are temporarily stored in the Downloaded Program Files folder on your hard disk. 
    #>
    # $ToClean += "Downloaded Program Files"

    <#
    Item: Temporary Internet Files
    Description: The Temporary Internet Files folder contains webpages stored on your hard disk for quick viewing. Your personalized settings for webpages will be left intact.
    #>
    # $ToClean += "Internet Cache Files"

    <#
    Item: System Error Memory Dump Files
    Description: Remove system error memory dump files.
    #>
    # $ToClean += "System Error Memory Dump Files"

    <#
    Item: System Error Minidump Files
    Description: Remove system error minidump files.
    #>
    # $ToClean += "System Error Minidump Files"

    <#
    Item: Files discarded by Windows Update
    Description: Files from a previous Windows installation. As a precaution, Windows upgrade keeps a copy of any files that were not moved to the new version of Windows and were not identified as Windows system files. If you are sure that no user's personal files are missing after the upgrade, you can delete these files.
    #>
    # $ToClean += "Upgrade Discarded Files"

    <#
    Item: System created Windows Error Reporting Files
    Description: Files used for error reporting and solution checking.
    #>
    # $ToClean += "Windows Error Reporting Files"

    <#
    Item: Windows ESD Installation Files
    Description: You will need these files to Reset or Refresh your PC.
    #>
    # $ToClean += "Windows ESD Installation Files"

    <#
    Item: BranchCache
    Description: Files created by BranchCache service for caching data.
    #>
    # $ToClean += "BranchCache"

    <#
    Item: DirectX Shader Cache
    Description: Clean up files created by the graphics system which can speed up application load time and improve responsiveness. They will be re-generated as needed.
    #>
    # $ToClean += "D3D Shader Cache"

    <#
    Item: Previous Windows Installation(s)
    Description: Files from a previous Windows installation. Files and folders that may conflict with the installation of Windows have been moved to folders named Windows.old. You can access data from the previous Windows installations in this folder.
    #>
    # $ToClean += "Previous Installations"

    <#
    Item: Recycle Bin
    Description: The Recycle Bin contains files you have deleted from your computer. These files are not permanently removed until you empty the Recycle Bin.
    #>
    # $ToClean += "Recycle Bin"

    <#
    Item: RetailDemo Offline Content
    Description: 
    #>
    # $ToClean += "RetailDemo Offline Content"

    <#
    Item: Update package Backup Files
    Description: Windows saves old versions of files that have been updated by an Update package. If you delete the files, you won't be able to uninstall the Update package later.
    #>
    # $ToClean += "Service Pack Cleanup"

    <#
    Item: Temporary Files
    Description: Programs sometimes store temporary information in a TEMP folder. Before a program closes, it usually deletes this information. You can safely delete temporary files that have not been modified in over a week.
    #>
    # $ToClean += "Temporary Files"

    <#
    Item: Temporary Windows installation files
    Description: Installation files used by Windows setup. These files are left over from the installation process and can be safely deleted.
    #>
    # $ToClean += "Temporary Setup Files"

    <#
    Item: Thumbnail Cache
    Description: Windows keeps a copy of all your picture, video, and document thumbnails, so they can be displayed quickly when you open a folder. If you delete these thumbnails, they will be automatically recreated as needed.
    #>
    # $ToClean += "Thumbnail Cache"

    <#
    Item: User File History
    Description: Windows stores file versions temporarily on this disk before copying them to the designated File History disk. If you delete these files, you will lose some file history.
    #>
    # $ToClean += "User file versions"

    # Return cleaning list
    $ToClean
}

<#
Create-Cleanup-Id
Properly formats the cleanup Id. Cleanup Id must be 4 characters.

:return: [string] Properly formatted cleanup Id.
#>
Function Create-Cleanup-Id {
    # Determine how many zeros need to be inserted.
    $Zeros = 4 - $UserCleanupId.length

    if ($Zeros -lt 0) {
        Write-Host "The cleanup Id exceeds 4 characters. Specify an Id with four characters or less." -ForegroundColor Red
        return
    }

    $ZerosString = ""
    For ($i = 0; $i -lt $Zeros; $i++) {
        $ZerosString = "0$ZerosString"
    }
    "$ZerosString$UserCleanupId"
}

<#
Run-Disk-Cleanup
Runs disk cleanup using the cleanup Id and items specified in Create-Cleanup-Id and Create-Cleanup-List. 
#>
Function Run-Disk-Cleanup {
 
    $CleanupId = Create-Cleanup-Id
    if ($CleanupId) {
        # Must define cleanup preferences in the registry. 
        $RegKeyDirectory = "HKLM:\Software\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches\"
        $RegKeyString = "StateFlags$CleanupId"
 
        Create-Cleanup-List | ForEach-Object {
            $RegistryPath = "$RegKeyDirectory$_"

            # Create regkey to specify which files to clean. 
            if (Test-Path $RegistryPath) {
                # Set the value to 2. Any other value won't trigger a cleanup.
                Set-ItemProperty -Path $RegistryPath -Name $RegKeyString -Value 2
            }
        }
        # Run disk cleanup using the preferences created in the registry.
        $Sagerun = "/SAGERUN:$CleanupId"
        cleanmgr.exe $Sagerun
     }
}

Run-Disk-Cleanup
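
If you want to double-check which handlers were flagged before cleanmgr.exe runs, you can read the values back out of the registry. A minimal sketch, assuming the cleanup Id resolved to 0001 – substitute your own Id:

# List every cleanup handler whose StateFlags0001 value is set to 2 (i.e., selected for cleanup).
$Flag = "StateFlags0001"   # assumes a cleanup Id of 0001
Get-ChildItem "HKLM:\Software\Microsoft\Windows\CurrentVersion\Explorer\VolumeCaches" |
    Where-Object { $_.GetValue($Flag) -eq 2 } |
    ForEach-Object { $_.PSChildName }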

Good luck!

Hopefully, these scripts show some of the possible remediation actions and help get you started writing your own. We hope you enjoy using this feature, and we’d love to hear your feedback and your experiences when creating your own custom scripts!


Here’s what you missed – Five big announcements for Storage Spaces Direct from the Windows Server Summit

This post was authored by Cosmos Darwin, PM on the Windows Server team at Microsoft. Follow him @cosmosdarwin on Twitter.

Yesterday we held the first Windows Server Summit, an online event all about modernizing your infrastructure and applications with Windows Server. If you missed the live event, the recordings are available for on-demand viewing. Here are the five biggest announcements for Storage Spaces Direct and Hyper-Converged Infrastructure (HCI) from yesterday’s event:

#1. Go bigger, up to 4 PB

With Windows Server 2016, you can pool up to 1 PB of drives into a single Storage Spaces Direct cluster. This is an immense amount of storage! But year after year, manufacturers find ways to make ever-larger* drives, and some of you – especially for media, archival, and backup use cases – asked for more. We heard you, and that’s why Storage Spaces Direct in Windows Server 2019 can scale 4x larger!

The new maximum size per storage pool is 4 petabytes (PB), or 4,000 terabytes. All related capacity guidelines and/or limits are increasing as well: for example, Storage Spaces Direct in Windows Server 2019 supports twice as many volumes (64 instead of 32), each twice as large as before (64 TB instead of 32 TB). These are summarized in the table below.

All related capacity guidelines and/or limits are increasing as well:

                                  Windows Server 2016    Windows Server 2019
  Maximum storage pool size       1 PB                   4 PB
  Maximum number of volumes       32                     64
  Maximum volume size             32 TB                  64 TB

* See these new 14 TB drives – whoa! – from our friends at Toshiba, Seagate, and Western Digital.

Our hardware partners are developing and validating SKUs to support this increased scale.

We expect to have more to share at Ignite 2018 in September.

#2. True two-node at the edge

Storage Spaces Direct has proven extremely popular at the edge, in places like branch offices and retail stores. For these deployments, especially when the same gear will be deployed to tens or hundreds of locations, cost is paramount. The simplicity and savings of hyper-converged infrastructure – using the same servers to provide compute and storage – present an attractive solution.

Since release, Storage Spaces Direct has supported scaling down to just two nodes. But any two-node cluster, whether it runs Windows or VMware or Nutanix, needs some tie-breaker mechanism to achieve quorum and guarantee high availability. In Windows Server 2016, you could use a file share (“File Share Witness”) or an Azure blob (“Cloud Witness”) for quorum.

What about remote sites, field installations, or ships and submarines that have no Internet to access the cloud, and no other Windows infrastructure to provide a file share? For these customers, Windows Server 2019 introduces a surprising breakthrough: use a simple USB thumb drive as the witness! This makes Windows Server the first major hyper-converged platform to deliver true two-node clustering, without another server or VM, without Internet, and even without Active Directory.

Windows Server 2019 introduces a surprising breakthrough – the USB witness!

Simply insert the USB thumb drive into the USB port on your router, use the router’s UI to configure the share name, username, and password for access, and then use the new -Credential flag of the Set-ClusterQuorum cmdlet to provide the username and password to Windows for safekeeping.
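
In PowerShell, that last step looks roughly like this – a sketch with a hypothetical router address and share name:

# Hypothetical router address and share name – use whatever you configured on the router.
$Credential = Get-Credential   # the username and password from the router's UI
Set-ClusterQuorum -FileShareWitness "\\192.168.1.1\Witness" -Credential $Credential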

Insert the USB thumb drive into the port on the router, configure the share name, username, and password, and provide them to Windows for safekeeping.

An extremely low-cost quorum solution that works anywhere.

Stay tuned for documentation and reference hardware (routers that Microsoft has verified support this feature, which requires an up-to-date, secure version of SMB file sharing) in the coming months.

#3. Drive latency outlier detection

In response to your feedback, Windows Server 2019 makes it easier to identify and investigate drives with abnormal latency.

Windows now records the outcome (success or failure) and latency (elapsed time) of every read and write to every drive, by default. In an upcoming Insider Preview build, you’ll be able to view and compare these deep IO statistics in Windows Admin Center and with a new PowerShell cmdlet.

Windows now records the outcome (success or failure) and latency (elapsed time) of every read and write.

Moreover, Windows Server 2019 introduces built-in outlier detection for Storage Spaces Direct, inspired by Microsoft Azure’s long-standing and very successful approach. Drives with abnormal behavior, whether it’s their average or 99th percentile latency that stands out, are automatically detected and marked in PowerShell and Windows Admin Center as “Abnormal Latency” status. This gives Storage Spaces Direct administrators the most robust set of defenses against drive latency available on any major hyper-converged infrastructure platform.

Windows Server 2019 introduces built-in outlier detection for Storage Spaces Direct, inspired by Microsoft Azure.

Drives with abnormal behavior are automatically detected and marked in PowerShell and Windows Admin Center.
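
Until the new cmdlet ships, here is a minimal sketch of how you might spot flagged drives, assuming the status surfaces through the existing Get-PhysicalDisk cmdlet's OperationalStatus field:

# Find drives that have been flagged for abnormal latency.
Get-PhysicalDisk |
    Where-Object { $_.OperationalStatus -match "Abnormal Latency" } |
    Select-Object FriendlyName, SerialNumber, MediaType, OperationalStatus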

Watch the Insider Preview release notes to know when this feature becomes available.

#4. Faster mirror-accelerated parity

Mirror-accelerated parity lets you create volumes that are part mirror and part parity. This is like mixing RAID-1 and RAID-6 to get the best of both: fast write performance by deferring the compute-intensive parity calculation, and with better capacity efficiency than mirror alone. (And, it’s easier than you think in Windows Admin Center.)

Mirror-accelerated parity lets you create volumes that are part mirror and part parity.
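
For reference, creating one of these volumes in PowerShell looks like this – a sketch assuming the default Performance (mirror) and Capacity (parity) tier names that Enable-ClusterStorageSpacesDirect creates:

# One volume: 100 GB on the mirror tier plus 900 GB on the parity tier.
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Volume01" -FileSystem CSVFS_ReFS `
    -StorageTierFriendlyNames Performance, Capacity -StorageTierSizes 100GB, 900GB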

In Windows Server 2019, the performance of mirror-accelerated parity has more than doubled relative to Windows Server 2016! Mirror continues to offer the best absolute performance, but these improvements bring mirror-accelerated parity surprisingly close, unlocking the capacity savings of parity for more use cases.

In Windows Server 2019, the performance of mirror-accelerated parity has more than doubled!

These improvements are available in Insider Preview today.

#5. Greater hardware choice

To deploy Storage Spaces Direct in production, Microsoft recommends Windows Server Software-Defined hardware/software offers from our partners, which include deployment tools and procedures. They are designed, assembled, and validated against our reference architecture to ensure compatibility and reliability, so you get up and running quickly.

To deploy in production, Microsoft recommends these Windows Server Software-Defined partners. Welcome Inspur and NEC!

Since Ignite 2017, the number of available hardware SKUs has nearly doubled, to 33. We are happy to welcome Inspur and NEC as our newest Windows Server Software-Defined partners, and to share that many existing partners have extended their validation to more SKUs – for example, Dell-EMC now offers 8 different pre-validated Storage Spaces Direct Ready Node configurations!

Since Ignite 2017, the number of Windows Server Software-Defined (WSSD) certified hardware SKUs and the number of components with the Software Defined Data Center (SDDC) Additional Qualifications in the Windows Server catalog has nearly doubled.

This momentum is great news for Storage Spaces Direct customers. It means more vendor and hardware choices and greater flexibility without the hassle of integrating one-off customizations. Looking to procure hardware? Get started today at Microsoft.com/WSSD.

Looking forward to Ignite 2018

Today’s news builds on announcements we made previously, like deduplication and compression for ReFS, support for persistent memory in Storage Spaces Direct, and our monthly updates to Windows Admin Center for Hyper-Converged Infrastructure. Windows Server 2019 is shaping up to be an incredibly exciting release for Storage Spaces Direct.

Join the Windows Insider program to get started evaluating Windows Server 2019 today.

We look forward to sharing more news, including a few surprises, later this year. Thanks for reading!

– Cosmos and the Storage Spaces Direct engineering team

Feedback on Storage Spaces Direct in smaller environments

Hello, IT Admins!

As a part of our planning process for the next release of Windows Server, we want to get your feedback! We are surveying IT Admins from small and medium businesses to get feedback on the Windows Server Storage Spaces Direct feature.

Survey: https://aka.ms/S2D_in_Smaller_Environments

We want to understand what workloads you are currently virtualizing, whether you have adopted hyper-converged infrastructure, what motivated you to switch from traditional SANs, and, if you haven't adopted it, what blockers have held you back.

If you are the administrator or IT decision maker for an organization with 1-250 employees, or if you are a partner or consultant to these organizations, your feedback will help us understand why you might not be using Storage Spaces Direct.

Thanks,

Adi

 

New Storage Migration Service preview released

Heya, Ned here again. We released a new Windows Server 2019 Insiders Preview of the Storage Migration Service. As always, downloads and details are here:

https://aka.ms/stormigser

Go migrate yer stuff!

– Ned “rooooooo” Pyle

 

Getting started with System Insights in 10 minutes

This post was authored by Garrett Watumull, PM on the Windows Server team at Microsoft. Follow him @GarrettWatumull on Twitter.

In the past couple weeks, we’ve posted a few short videos to help you learn about and use System Insights, a new predictive analytics feature on Windows Server. In less than 10 minutes, you can learn all the information you need to get started and confidently manage System Insights.

1. Get started with System Insights

In this introductory video, learn more about System Insights, hear about the predictive functionality that ships with Windows Server 2019, and watch how you can quickly install System Insights using Windows Admin Center or PowerShell.
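
The PowerShell route, for instance, is a one-liner (a sketch assuming a build where the feature is available):

# Install the System Insights feature along with its PowerShell module.
Install-WindowsFeature -Name System-Insights -IncludeManagementTools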

2. Learn about System Insights capabilities

In this video, learn about System Insights capabilities, which are the machine learning or statistics models that help you proactively manage your deployments. This video walks you through the steps and configuration options for managing these capabilities.
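
As a rough sketch of that PowerShell surface – the capability name below is one of the built-in defaults and may differ on your build:

# Enumerate capabilities, enable one, run it on demand, and read back its latest prediction.
Get-InsightsCapability
Enable-InsightsCapability -Name "CPU capacity forecasting"
Invoke-InsightsCapability -Name "CPU capacity forecasting"
Get-InsightsCapabilityResult -Name "CPU capacity forecasting"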

3. Create your own remediation actions

System Insights enables you to automatically kick off a mitigation script based on the prediction result of a capability. This allows you to spend less time reacting to issues in your deployments, as you can proactively and automatically respond to any prediction results.  In this video, watch us configure a basic action and learn how to create your own.
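
Sketched in PowerShell – the capability name and script path below are illustrative – wiring up an action looks something like this:

# Run a remediation script whenever this capability reports a Warning prediction.
Set-InsightsCapabilityAction -Name "Total storage consumption forecasting" `
    -Type Warning -Action "C:\Scripts\Invoke-DiskCleanup.ps1" `
    -ActionCredential (Get-Credential)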

Hopefully, these videos give you the information you need to get started with System Insights. We’re excited to hear your feedback, and we look forward to announcing some new functionality in the coming weeks.

You can join the Windows Insider program to start evaluating Windows Server 2019 today.

For any other questions, visit our documentation or submit feedback using Feedback Hub or by emailing system-insights-feed@microsoft.com.

Thanks for watching!

Windows 10 and Storage Sense

What’s new in Storage Sense?

Starting with Windows 10, Storage Sense has embarked on a path to keep your storage optimized. We're making continuous improvements in every update. In the next Windows 10 feature update (build 17720 and later), we're adding a new capability and making a few changes to Storage Sense's behavior.

Before we dive in, it's important to note that we design Storage Sense to be a silent assistant that works on your behalf without the need to configure it. Sometimes we'll ask for your permission before we make changes to your storage. We believe in being transparent about how Storage Sense optimizes your storage for you. The content below is intended to serve as a reference.

Files On-Demand and Storage Sense

OneDrive Files On-Demand gives you easy access to your OneDrive files without taking up storage space. If you have a large amount of OneDrive content that you've viewed and edited, you may find yourself in a situation where those files are available locally and the cached content takes up disk space. You may no longer need those files to be locally available.

Storage Sense now has the capability to automatically free up disk space by making older, unused, locally available files online-only. Your files will be safe in OneDrive and represented as placeholders on your device. We call this process "dehydration". Your online-only files will still be visible on your device. When connected to the internet, you'll be able to use online-only files just like any other file.

To enable dehydration, navigate to the Settings app from the Start menu. Then select System and, finally, Storage.

Image that shows where to find Storage in the Windows 10 Settings app.

Turn on Storage Sense in Storage Settings

Here, you can turn Storage Sense on by clicking the toggle button. Any files that you have not used in the last 30 days will be eligible for dehydration when your device runs low on free space. Storage Sense will only dehydrate files until there's enough space freed for Windows to run smoothly. We do this so that we can keep your files available locally as much as possible.

If you'd like to change this behavior, you can make Storage Sense run periodically instead of running only when the device is low on storage. To do this, you'll first have to click on "Change how we free up space automatically". Next, you can change the value in "Run Storage Sense".

Storage Sense run cadence

Choose how frequently you want Storage Sense to run

If you'd like Storage Sense to dehydrate more aggressively, the "Locally available cloud content" section on the same page has a dropdown to change the default value. For example, if you choose to run Storage Sense every week and select a 14-day window for Files On-Demand, Storage Sense will run once a week, identify files that you haven't used in the past 14 days, and make those files available online-only.

Locally available cloud content selector

How long should Storage Sense wait before locally available cloud content becomes online only

Files that you have marked to be always available are not affected and will continue to be available offline.
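
You can also dehydrate a file yourself from the command line: recent Windows 10 builds teach the attrib command about pin state, where +U marks a file online-only and +P pins it locally. A minimal sketch, with a hypothetical file path:

# Make a single OneDrive file online-only (hypothetical path).
attrib +U "C:\Users\Me\OneDrive\report.docx"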

Self-activate on Low Storage

Storage Sense can now turn itself on when your device is low on storage space. Once activated, Storage Sense will intelligently run whenever your device runs low on storage space and clear temporary files that your device and applications no longer need.

Storage Sense looks for and will remove the following types of files:

  • Temporary setup files
  • Old indexed content
  • System cache files
  • Internet cache files
  • Device driver packages
  • System downloaded program files
  • Dated system log files
  • System error memory dump files
  • System error minidump files
  • Temporary system files
  • Dated Windows update temporary files
  • …and more.

If you’d like to clear even more space on your device, you can enable the removal of old content in the Downloads folder. Downloads folder cleanup is not turned on by default.

Download folder cleanup

Clean up old files in your Downloads folder

Manual Cleanup

If you’d like to manually invoke a clean-up operation, you can click on the “Free up space now” link (shown in the red box below) on the Storage page in Settings.

Free up space now

Manually forcing a storage clean up

Storage Sense will scan your device for files that are safe to clean and give you an estimate of space that can be freed. Files are not removed until you click the “Remove Files” button.

Remove temporary files

You can choose the type of content that is cleared by Storage Sense. Note that some of this content isn't automatically cleaned by Storage Sense. These cleanup actions may temporarily decrease the performance of your system. For example, clearing thumbnails will free up space, but when you navigate to a folder with pictures in it, thumbnail previews will be recreated and may not be available for a few moments.

Disk Cleanup Deprecation

The Disk Cleanup experience (“cleanmgr.exe”) is being deprecated. We’re retaining the Disk Cleanup tool for compatibility reasons. There’s no need to worry since Storage Sense’s functionality is a superset of what the legacy Disk Cleanup provides!

 

Jesse Rajwan contributed to this post.
Follow Aniket on LinkedIn