Channel: Storage at Microsoft

Announcing Public Preview of Azure Blob Storage on IoT Edge


This post was authored by Arpita Duppala, PM on the Core Operating System and Intelligent Edge team. Follow her on Twitter @arnuwish.

Today, we are excited to announce the public preview of Azure Blob Storage on IoT Edge, which is deployed as a module to IoT devices via Azure IoT Edge. It is useful for data that needs to be processed in real time on the device, or stored and accessed locally as a blob and sent to the cloud later.

Azure Blob Storage on IoT Edge uses the Azure Storage SDK to store data in a local blob store. An application written to store and access data in the Azure public cloud can store and access data locally with a simple change of connection string in code.
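To illustrate the connection-string swap, here is a minimal sketch in Python. The account name, key, and endpoint below are illustrative placeholders (check your own deployment for the port the module is mapped to); the commented SDK call shows where the otherwise-unchanged application code would go.

```python
# Sketch: only the connection string differs between cloud and edge storage.
# Account name, key, and local endpoint are illustrative placeholders.

def build_connection_string(account, key, local_endpoint=None):
    """Build an Azure Storage connection string; pass local_endpoint to
    target a local blob module instead of the public cloud."""
    parts = [
        "DefaultEndpointsProtocol=" + ("http" if local_endpoint else "https"),
        "AccountName=" + account,
        "AccountKey=" + key,
    ]
    if local_endpoint:
        # e.g. the blob module listening on the IoT Edge device itself
        parts.append("BlobEndpoint=%s/%s" % (local_endpoint, account))
    return ";".join(parts)

cloud_conn = build_connection_string("myaccount", "bXlrZXk=")
edge_conn = build_connection_string("myaccount", "bXlrZXk=",
                                    local_endpoint="http://localhost:11002")

# The rest of the application is unchanged; with the Azure Storage SDK,
# something like:
#   client = BlockBlobService(connection_string=edge_conn)
print(edge_conn)
```

The point is that the storage-facing code path is identical; only the string handed to the SDK changes.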

Below is the list of supported container host operating systems and architectures; the module is available for both Windows and Linux containers.

Container Host Operating System | Architecture
Raspbian Stretch | ARM32
Ubuntu Server 18.04 | AMD64
Ubuntu Server 16.04 | AMD64
Windows 10 IoT Core (October Update) | AMD64
Windows 10 IoT Enterprise (October Update) | AMD64
Windows Server 2019 | AMD64

 

Azure Blob Storage on IoT Edge – Version 0.5 (September 24, 2018)

In the diagram below, we have an edge device pre-installed with the Azure IoT Edge runtime. It runs a custom module that processes data collected from a sensor and saves it to the local blob store. Because the local store is Azure-consistent, the custom module can be developed using the Azure Storage SDK to make calls to the local blob storage.

This scenario is useful when there is a lot of data to process. For example, a farm has cameras (the sensors) whose images are processed or filtered at the edge by a custom module, so that only notable events are captured, like a cow running wild on the farm. It is efficient to do this processing locally because a large volume of image data is continuously being captured. The Azure Blob Storage on IoT Edge module allows you to store and access such data efficiently.

Current Functionality:

With the current public preview module, users can:

  • Store data locally and access the local blob store using the Azure Storage SDK.
  • Reuse the same business logic of an app written to store/access data on Azure.
  • Deploy in an IoT Edge device.
  • Use any Azure IoT Edge Tier 1 host operating system.

Roadmap:

What does the future hold?

  • Automatically copy the data to Azure from IoT edge device
  • Automatically copy the data from Azure to IoT edge device
  • Additional API support
  • Early adopter deployment scenarios

More Information:

Find more information about this module at https://aka.ms/AzureBlobStorage-IotModule

Feedback:

Please share your feedback with us and let us know how we can improve. You can also let us know if you find that we are missing some major API support which is required in your scenario.

You can reach out to us at absiotfeedback@microsoft.com.

 


Hyper-converged infrastructure in Windows Server 2019 – the countdown clock starts now!


This post was written by Cosmos Darwin, Sr PM on the Core OS team at Microsoft. Follow him @cosmosdarwin on Twitter.

Today is an exciting day: Windows Server 2019 is now generally available! Windows Server 2019 is the second major release of Hyper-Converged Infrastructure (HCI) from Microsoft, and the biggest update to Storage Spaces Direct since its launch in Windows Server 2016.

The momentum continues

In the two years since Storage Spaces Direct first launched, we’ve been overwhelmed by the positive feedback and accelerating adoption. Organizations around the world, in every industry and every geography, are moving to Storage Spaces Direct to modernize their infrastructure. In fact, I’m delighted to share that worldwide adoption of Storage Spaces Direct has increased by 50% in just the six months since we announced 10,000 clusters in March.

To our customers and our partners, thank you! Our growing team is working hard to deliver new features and improve existing ones based on your feedback. To learn more about new features for Storage Spaces Direct, including deduplication and compression, native support for persistent memory, nested resiliency for two-node clusters, increased performance and scale, and more, check out the What’s new in Windows Server 2019 docs published today.

Timeline for hardware availability

Windows Server 2019 is the first version to skip the classic Release To Manufacturing (RTM) milestone and go directly to General Availability (GA). This change is motivated by the increasing popularity of virtual machines, containers, and deploying in the cloud. But it also means the hardware ecosystem hasn’t had the chance to validate and certify systems or components before the release; instead, they start doing so today.

As before, to ensure our customers are successful and have the smoothest experience, Microsoft recommends deploying Storage Spaces Direct on hardware validated by the Windows Server Software-Defined (WSSD) program. The first wave of WSSD offers for Windows Server 2019 will launch in mid-January 2019, in about three months. We’ll share more details about the WSSD launch event soon.

Until the first wave of hardware is available, attempting to use features like Storage Spaces Direct or Software-Defined Networking (SDN) displays an advisory message and requires an extra step to configure. This is normal and expected – see KB4464776. Microsoft will remove the message for everyone immediately after the WSSD launch event in January, via Windows Update.

How to get Storage Spaces Direct in Windows Server 2019

Just like Windows Server 2016, Storage Spaces Direct is included in the Windows Server 2019 Datacenter edition license, meaning for most Hyper-V customers, it is effectively no additional cost. And just like Windows Server 2016, Microsoft supports two ways to procure and deploy hardware for Storage Spaces Direct:

  1. Build-your-own with components from the Windows Server catalog (supported)
  2. Purchase validated and ready-to-go WSSD offers from our partners (recommended)

See the answers to frequently asked questions below for more details.

Windows Server 2019 is the biggest update to Storage Spaces Direct since Windows Server 2016, and we can’t wait to see what you’ll do with it. Whether you’re deploying multiple petabytes in your core datacenter, or just two nodes in your branch office, Storage Spaces Direct gets better for everyone in Windows Server 2019.

Like you, we eagerly await the WSSD launch event in mid-January. See you there!

– Cosmos and the Storage Spaces Direct team

Frequently asked questions

Is Storage Spaces Direct in Windows Server 2019 restricted to WSSD hardware?

No. Just like with Windows Server 2016, systems and components must be listed in the Windows Server 2019 catalog with the Software-Defined Data Center (SDDC) Additional Qualifications. Any customer running Storage Spaces Direct on such hardware is eligible for production support from Microsoft. See the hardware requirements documentation.

When can I deploy Storage Spaces Direct in Windows Server 2019 into production?

Microsoft recommends deploying Storage Spaces Direct on hardware validated by the WSSD program. For Windows Server 2019, the first wave of WSSD offers will launch in mid-January 2019, in about three months.

If you choose instead to build your own with components from the Windows Server 2019 catalog with the SDDC AQs, you may be able to assemble eligible parts sooner. In this case, you can absolutely deploy into production – you’ll just need to contact Microsoft Support for instructions to work around the advisory message.

Note that whether and when hardware is certified for Windows Server 2019 is at the sole discretion of its vendor.

What can I do while I wait for mid-January?

Get hands-on today. The latest Windows Insider release of Windows Server 2019 includes nearly all new Storage Spaces Direct capabilities and improvements. In the coming days, one more Windows Insider release, based on the final 17763 build of Windows Server 2019, will be made available. It won’t show the advisory message and won’t require the extra step to configure, making it perfect for evaluation and testing.

Can I upgrade my Windows Server 2016 cluster to Windows Server 2019?

Microsoft supports in-place upgrade from Windows Server 2016 to Windows Server 2019, including Storage Spaces Direct.

Once your systems and components are listed in the Windows Server 2019 catalog with the SDDC AQs, you can upgrade. For the smoothest experience, Microsoft recommends checking with your hardware partner to ensure they’ve validated your hardware with Windows Server 2019 before you upgrade. To upgrade before mid-January 2019, you’ll need to contact Microsoft Support for instructions to work around the advisory message.

Note that whether and when hardware is certified for Windows Server 2019 is at the sole discretion of its vendor.

My project has a tight timeline. Should I deploy Windows Server 2016 instead?

Features like Storage Spaces Direct and SDN are available for immediate deployment in Windows Server 2016. Although there are significant new capabilities and improvements in Windows Server 2019, the core functionality is comparable, and in-place upgrade is supported so you can move to Windows Server 2019 later.

As of Microsoft Ignite 2018, there are over 50 ready-to-go WSSD offers available for Windows Server 2016 from over a dozen partners, giving you more choice and greater flexibility without the hassle of integrating one-off customizations. Get started today at Microsoft.com/WSSD.

Using System Insights to forecast clustered storage usage


This post was authored by Garrett Watumull, PM on the Windows Server team at Microsoft. Follow him @GarrettWatumull on Twitter.

In Windows Server 2019, we introduced System Insights, a new predictive analytics feature for Windows Server. System Insights ships with four default capabilities designed to help you proactively and efficiently forecast resource consumption. It collects historical usage data and implements robust data analytics to accurately predict resource usage, without requiring you to write any scripts or create custom visualizations.

System Insights is designed to run on all Windows Server instances, across physical and guest instances, across hypervisors, and across clouds. Because most Windows Server instances are unclustered, we focused on implementing storage forecasting capabilities for local storage – the volume consumption forecasting capability predicts storage consumption for local volumes, and the total storage consumption forecasting capability predicts storage consumption across all local drives.

After hearing your feedback, however, we realized we needed to extend this functionality to clustered storage. And with the latest Windows Admin Center and Windows Server GA releases, we’re excited to announce support for forecasting on clustered storage. Cluster administrators can now use System Insights to forecast clustered storage consumption.

How it works

When you install System Insights on a failover cluster, the default behavior of System Insights remains unchanged – the storage capabilities only analyze local volumes and disks. You can, however, easily enable forecasting on clustered storage, and System Insights will immediately start collecting all accessible clustered storage information:

  • If you are using Cluster Shared Volumes (CSV), System Insights collects all clustered volume and disk information in your cluster. System Insights relies on a nice property of CSV: each node in a cluster is presented with a consistent, distributed namespace. In other words, each node can access all volumes in the cluster, even when those volumes are mounted on other nodes.
  • If you aren’t using Cluster Shared Volumes (CSV), System Insights collects all clustered disk information, but it can only collect the information about the clustered volume that is currently mounted on that node.

Once enough data has been collected, System Insights will start forecasting on clustered storage data.

Lastly, before describing how to enable this functionality, there are a couple of final things to point out:

  • First, all clustered storage forecasting data is stored on node-local storage. If you want to collect clustered storage data on multiple nodes, you must enable this on each node. This ensures that you have multiple copies of the clustered storage data in case a node fails.
  • Even though clustered storage data is stored on node-local storage, the data footprint of System Insights should still be relatively modest. Each volume and disk consumes 300 KB and 200 KB of storage, respectively. You can read more about the System Insights data sources here.
  • These changes don’t affect the CPU or networking capabilities, as these capabilities analyze the server’s local CPU or network usage whether that server is clustered or stand-alone.

Setting it up

Windows Admin Center

Windows Admin Center provides a simple, straightforward method to enable forecasting on clustered storage. If you’ve enabled failover clustering on your server, you’ll see this dialog when you first open System Insights:

Clicking Install turns on data collection and forecasting for clustered storage. Alternatively, you can also use the new Clustered storage button to adjust the clustered storage forecasting settings. This button is now visible if failover clustering is enabled on a server:

Once you click on Clustered storage, you can adjust the data collection settings, as well as the specific forecasting behavior of the volume and the total storage consumption capabilities. For each storage forecasting capability, you can specify local or clustered storage predictions:

PowerShell

For those of you looking to use PowerShell instead, we’ve exposed three registry keys to enable this functionality. Together, these help you manage clustered storage data collection, volume forecasting behavior, and total storage forecasting behavior:

To turn on/off clustered data collection, use the following registry key:

  • Path: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\SystemDataArchiver\
  • Name: ClusterVolumesAndDisks
  • Type: DWORD
  • Values:
    • 0: Off
    • 1: On

To adjust the behavior of the volume consumption capability, use the following registry key:

  • Path: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\SystemInsights\Capabilities\Volume consumption forecasting\
  • Name: ClusterVolumes
  • Type: DWORD
  • Values:
    • 0: Local volumes
    • 1: Clustered volumes
    • 2: Both

To adjust the behavior of the total storage consumption capability, use the following registry key:

  • Path: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\SystemInsights\Capabilities\Total storage consumption forecasting\
  • Name: ClusterVolumesAndDisks
  • Type: DWORD
  • Values:
    • 0: Local volumes and disks
    • 1: Clustered volumes and disks

You can use the New-ItemProperty or the Set-ItemProperty cmdlets to configure these registry keys. For example:

# Create the registry path and then start collecting clustered storage data.

$ClusteredStoragePath = "HKLM:\SOFTWARE\Microsoft\Windows\SystemDataArchiver"
New-Item -Path $ClusteredStoragePath -Force
New-ItemProperty -Path $ClusteredStoragePath -Name ClusterVolumesAndDisks -Value 1 -PropertyType DWORD

# Create the registry path and then forecast on both clustered and local volumes.

$VolumeForecastingPath = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\SystemInsights\Capabilities\Volume consumption forecasting"
New-Item -Path $VolumeForecastingPath -Force
New-ItemProperty -Path $VolumeForecastingPath -Name ClusterVolumes -Value 2 -PropertyType DWORD

# Update the registry key so the volume forecasting capability only predicts clustered volume usage.

Set-ItemProperty -Path $VolumeForecastingPath -Name ClusterVolumes -Value 1

Conclusion

We’re really excited to introduce this new functionality in System Insights and Windows Admin Center. With the latest releases, cluster users can now use System Insights to proactively predict clustered storage consumption, and these settings can be managed both in Windows Admin Center and PowerShell.

Please keep providing feedback, so we can keep adding new functionality to System Insights that’s relevant to you:

  • UserVoice
  • Email: system-insights-feed@microsoft.com

The new HCI industry record: 13.7 million IOPS with Windows Server 2019 and Intel® Optane™ DC persistent memory


Written by Cosmos Darwin, Senior PM on the Core OS team at Microsoft. Follow him on Twitter @cosmosdarwin.


Hyper-converged infrastructure is an important shift in datacenter technology. By moving away from proprietary storage arrays to an architecture built on industry-standard interconnects, x86 servers, and local drives, organizations can benefit from the latest cloud technology faster and more affordably than ever before.

Watch this demo from Microsoft Ignite 2018:

Intel® Optane™ DC persistent memory delivers breakthrough storage performance. To go with the fastest hardware, you need the fastest software. Hyper-V and Storage Spaces Direct in Windows Server 2019 are the foundational hypervisor and software-defined storage of the Microsoft Cloud. Purpose-built for efficiency and performance, they’re embedded in the Windows kernel and meticulously optimized. To learn more about hyper-converged infrastructure powered by Windows Server, visit Microsoft.com/HCI.

For details about this demo, including some additional results, read on!

Hardware


The reference configuration Intel and Microsoft used for this demo.

  • 12 x 2U Intel® S2600WFT server nodes
  • Intel® Turbo Boost ON, Intel® Hyper-Threading ON

Each server node:

  • 384 GiB (12 x 32 GiB) DDR4 2666 memory
  • 2 x 28-core future Intel® Xeon® Scalable processors
  • 5 TB Intel® Optane™ DC persistent memory as cache
  • 32 TB NVMe (4 x 8 TB Intel® DC P4510) as capacity
  • 2 x Mellanox ConnectX-4 25 Gbps


Intel® Optane™ DC modules are DDR4 pin compatible but provide native storage persistence.

Software

Windows OS. Every server node runs Windows Server 2019 Datacenter pre-release build 17763, the latest available on September 20, 2018. The power plan is set to High Performance, and all other settings are default, including applying relevant side-channel mitigations. (Specifically, mitigations for Spectre v1 and Meltdown are applied.)

Storage Spaces Direct. Best practice is to create one or two data volumes per server node, so we create 12 volumes with ReFS. Each volume is 8 TiB, for about 100 TiB of total usable storage. Each volume uses three-way mirror resiliency, with allocation delimited to three servers. All other settings, like columns and interleave, are default. To accurately measure IOPS to persistent storage only, the in-memory CSV read cache is disabled.

Hyper-V VMs. Ordinarily we’d create one virtual processor per physical core. For example, with 2 sockets x 28 cores we’d assign up to 56 virtual processors per server node. In this case, to saturate performance took 26 virtual machines x 4 virtual processors each = 104 virtual processors. That’s 312 total Hyper-V Gen 2 VMs across the 12 server nodes. Each VM runs Windows and is assigned 4 GiB of memory.

VHDXs. Every VM is assigned one fixed 40 GiB VHDX where it reads and writes to one 10 GiB test file. For the best performance, every VM runs on the server node that owns the volume where its VHDX file is stored. The total active working set, accounting for three-way mirror resiliency, is 312 x 10 GiB x 3 = 9.36 TiB, which fits comfortably within the Intel® Optane™ DC persistent memory.
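The working-set arithmetic above can be checked directly (an illustrative sketch, using the figures quoted in this section):

```python
# Total active working set across the cluster, per the figures above.
vms = 312               # Hyper-V VMs across the 12 server nodes
test_file_gib = 10      # each VM reads/writes one 10 GiB test file
mirror_copies = 3       # three-way mirror resiliency triples each write

working_set_gib = vms * test_file_gib * mirror_copies
print(working_set_gib)  # 9360 GiB, the ~9.36 TiB quoted above
```

That total is what has to fit within the persistent-memory cache tier for the benchmark to stay on the fastest media.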

Benchmark

There are many ways to measure storage performance, depending on the application. For example, you can measure the rate of data transfer (GB/s) by simply copying files, although this isn’t the best methodology. For databases, you can measure transactions per second (T/s). In virtualization and hyper-converged infrastructure, it’s standard to count storage input/output (I/O) operations per second, or “IOPS” – essentially, the number of reads or writes that virtual machines can perform.

More precisely, we know that Hyper-V virtual machines typically perform random 4 kB block-aligned IO, so that’s our benchmark of choice.

How do you generate 4 kB random IOPS?

  • VM Fleet. We use the open-source VM Fleet tool available on GitHub. VM Fleet makes it easy to orchestrate running DISKSPD, the popular Windows micro-benchmark tool, in hundreds or thousands of Hyper-V virtual machines at once. To saturate performance, we specify 4 threads per file (-t4) with 16 outstanding IOs per thread (-o16). To skip the Windows cache manager, we specify unbuffered IO (-Su). And we specify random (-r) and 4 kB block-aligned (-b4k). We vary the read/write mix with the -w parameter.

In summary, here’s how DISKSPD is being invoked:

.\diskspd.exe -d120 -t4 -o16 -Su -r -b4k -w0 [...]

How do you count 4 kB random IOPS?

  • Windows Admin Center. Fortunately, Windows Admin Center makes it easy. The HCI Dashboard features an interactive chart plotting cluster-wide aggregate IOPS, as measured at the CSV filesystem layer in Windows. More detailed reporting is available in the command-line output of DISKSPD and VM Fleet.


The HCI Dashboard in Windows Admin Center has charts for IOPS and IO latency.

The other side to storage benchmarking is latency – how long an IO takes to complete. Many storage systems perform better under heavy queuing, which helps maximize parallelism and busy time at every layer of the stack. But there’s a tradeoff: queuing increases latency. For example, if you can do 100 IOPS with sub-millisecond latency, you may be able to achieve 200 IOPS if you accept higher latency. This is good to watch out for – sometimes the largest IOPS benchmark numbers are only possible with latency that would otherwise be unacceptable.
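The tradeoff in that example follows Little's Law: outstanding IOs = IOPS × latency, so at a fixed concurrency, IOPS and latency move together. A quick sketch (illustrative numbers, not taken from the benchmark):

```python
# Little's Law applied to storage: outstanding_ios = iops * latency_s.
# Rearranged to give the IOPS observed at a given queue depth and latency.

def iops(outstanding_ios, latency_s):
    return outstanding_ios / latency_s

# One outstanding IO completing in 1 ms sustains 1,000 IOPS...
low_queue = iops(1, 0.001)
# ...while queuing 4 IOs at a worse 2 ms latency sustains 2,000 IOPS.
high_queue = iops(4, 0.002)
print(low_queue, high_queue)
```

More queuing buys more IOPS, but every individual IO waits longer, which is exactly why a headline IOPS number means little without the latency it was measured at.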

Cluster-wide aggregate IO latency, as measured at the same layer in Windows, is charted on the HCI Dashboard too.

Results

Any storage system that provides fault tolerance necessarily makes distributed copies of writes, which must traverse the network and incurs backend write amplification. For this reason, the absolute largest IOPS benchmark numbers are typically achieved with reads only, especially if the storage system has common-sense optimizations to read from the local copy whenever possible, which Storage Spaces Direct does.

With 100% reads, the cluster delivers 13,798,674 IOPS.


Industry-leading HCI benchmark of over 13.7M IOPS, with Windows Server 2019 and Intel® Optane™ DC persistent memory.

If you watch the video closely, what’s even more jaw-dropping is the latency: even at over 13.7 M IOPS, the filesystem in Windows is reporting latency that’s consistently less than 40 µs! (That’s the symbol for microseconds, one-millionths of a second.) This is an order of magnitude faster than what typical all-flash vendors proudly advertise today.

But most applications don’t just read, so we also measured with mixed reads and writes:

With 90% reads and 10% writes, the cluster delivers 9,459,587 IOPS.

In certain scenarios, like data warehouses, throughput (in GB/s) matters more, so we measured that too:

With larger 2 MB block size and sequential IO, the cluster can read 535.86 GB/s!

Here are all the results, with the same 12-server HCI cluster:

Run | Parameters | Result
Maximize IOPS, all-read | 4 kB random, 100% read | 13,798,674 IOPS
Maximize IOPS, read/write | 4 kB random, 90% read, 10% write | 9,459,587 IOPS
Maximize throughput | 2 MB sequential, 100% read | 535.86 GB/s
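One way to relate the IOPS and throughput rows (an illustrative calculation, not an additional measurement): multiply IOPS by block size to see how much raw bandwidth the small-block run moves.

```python
# Bandwidth implied by the all-read IOPS result at 4 kB (4096-byte) blocks.
iops_result = 13_798_674
block_bytes = 4096

gb_per_s = iops_result * block_bytes / 10**9
print(round(gb_per_s, 1))  # ≈ 56.5 GB/s of 4 kB random reads
```

Sequential 2 MB IO reaches 535.86 GB/s because each operation carries 512 times more data, even though far fewer operations complete per second.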

Conclusion

Together, Storage Spaces Direct in Windows Server 2019 and Intel® Optane™ DC persistent memory deliver breakthrough performance. This industry-leading HCI benchmark of over 13.7M IOPS, with consistent and extremely low latency, is more than double our previous industry-leading benchmark of 6.7M IOPS. What’s more, this time we needed just 12 server nodes, 25% fewer than two years ago.


More than double our previous record, in just two years, with fewer server nodes.

It’s an exciting time for Storage Spaces Direct. Early next year, the first wave of Windows Server Software-Defined (WSSD) offers with Windows Server 2019 will launch, delivering the latest cloud-inspired innovation to your datacenter, including native support for persistent memory. Intel® Optane™ DC persistent memory comes out next year too – learn more at Intel.com/OptaneDCPersistentMemory.

We’re proud of these results, and we’re already working on what’s next. Hint: even bigger numbers!

Cosmos and the Storage Spaces Direct team at Microsoft,
and the Windows Operating System team at Intel


Storage Migration Service Log Collector Available


Heya folks, Ned here again. We have put together a log collection script for the Storage Migration Service, if you ever need to troubleshoot or work with MS Support.

https://aka.ms/smslogs

It will grab the right logs, then drop them into a zip file. Pretty straightforward; see the readme for instructions. It is an open-source project under the MIT license, so feel free to tinker with it or fork it for your own needs. I will eventually move it to its own GitHub project, but for now it’s under me.

You will, of course, never need this. 😀

– Ned “get real” Pyle

 

Chelsio RDMA and Storage Replica Perf on Windows Server 2019 are 💯


Heya folks, Ned here again. Some recent Windows Server 2019 news you may have missed: Storage Replica performance was greatly increased over our original numbers. I chatted about this at earlier Ignite sessions, but when we finally got to Orlando, I was too busy talking about the new Storage Migration Service.

To make up for this, the great folks at Chelsio decided to set up servers with their insane 100Gb T62100-CR iWARP RDMA network adapters, then test the same replication on the same hardware with both Windows Server 2016 and Windows Server 2019; apples and apples, baby. If you’ve been in a coma since 2012: Windows Server uses RDMA for CPU-offloaded, high-performance SMB Direct data transfer over SMB3. iWARP brings the additional advantage of metro-area ranges while still using TCP for simplified configuration.

The TL;DR is: Chelsio iWARP 100Gb – with SMB 3.1.1 and SMB Direct providing the transport – is so low latency and so high bandwidth for Storage Replica that you can stop worrying about your storage outrunning it. 😂 No matter how much NVMe SSD we threw at the workload, the storage ran out of IO before the Chelsio network did. It’s an incredible flip from most of my networking life. We live in magical networking times.

In these tests we used a pair of SuperMicro servers: one with five striped Intel NVMe SSDs, the other with five striped Micron NVMe SSDs. Each had 24 Xeon cores at 3 GHz and 128 GB of memory. They were installed with both Windows Server 2016 RTM and Windows Server 2019 build 17744. A single 1TB volume was formatted on the source storage. Each server got a single-port 100Gb T62100-CR iWARP RDMA network adapter and the latest Chelsio Unified Wire drivers.

Let’s see some numbers and charts!

Initial Block Copy

We started with initial block copy, where Storage Replica must copy every single disk block from a source partition to a destination partition. Even though the Chelsio iWARP adapter is pushing a sustained 94Gb per second – as fast as this storage will send and receive – CPU overhead is only 5% thanks to offloading. And even 5 RAID-0 NVMe SSDs at 100% read on the source and 100% write on the destination couldn’t completely fill that single 100Gb pipe. With SMB Multichannel and another RDMA port turned on – this adapter has two – it would have been even less utilized.

That entire 1TB volume replicated in 95 seconds.
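That works out to an average wire rate consistent with the sustained 94 Gb/s figure above (a sketch, assuming a decimal 1 TB volume and ignoring protocol overhead and ramp-up):

```python
# Average transfer rate for the initial block copy quoted above.
volume_bytes = 1 * 10**12   # 1 TB volume (decimal TB, an assumption)
seconds = 95

gbit_per_s = volume_bytes * 8 / seconds / 10**9
print(round(gbit_per_s, 1))  # ≈ 84.2 Gb/s average, under the 94 Gb/s peak
```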

People talk about the coming 5G speed revolution and I can’t help but laugh my butt off, tbh. 😁

Continuous Replication

There shouldn’t be much initial sync performance difference between Windows Server 2016 and 2019 because the logs are not used at that phase of replication. They only kick in when block copy is done and you are performing writes on the source. So at this phase two sets of tests were run with the same exact hardware and drivers, but now a few times with Windows Server 2016’s v1 log and a few times with Windows Server 2019’s v1.1 tuned up log.

To perform the test we used Diskspd, a free IO workload creation tool we provide for testing and validation. This is the tool used to ensure that the Microsoft Windows Server Software-Defined HCI clusters sold by Dell, HPE, DataOn, Fujitsu, Supermicro, NEC, Lenovo, QCT, and others meet the logo standards for performance and reliability, via a stress-test suite we call “VM Fleet.”

OK, enough Storage Spaces Direct shilling for Cosmos, let’s see how the perf changed between Storage Replica in Windows Server 2016 (aka RS1) and Windows Server 2019 (aka RS5).

The lower orange line shows Windows Server 2016 performance as we hit the replicated volume on the source with 4K, 8K, then 16K IO writes. The upper green line, for Windows Server 2019, shows improvements of roughly 2-3X in MB per second depending on IO size (that’s a big B for bytes, not bits), and you can see we tuned as carefully as possible for the common 8K IO size. Because we’re using wide, low-latency, high-throughput, low-CPU-impact Chelsio NICs, the network will never be your bottleneck: it can all be dedicated to the actual workload you’re running, rather than serving as one of those special “replication networks” so common in the old world of regular low-bandwidth 1 and 10 Gb TCP dumb adapters.

The Big Sum Up

Storage Replica with Chelsio T6 provides datacenters with high-performance data replication across local and remote locations, with the ease of use of TCP (no lossless Ethernet configuration required), ensuring that your most critical workloads are protected with synchronous replication. Chelsio makes a cost-effective and secure data recovery solution that should appeal to datacenters and orgs of any size.

The bottom line: we’ve entered a new age of moving all that data around, and its name is iWARP. Get on the rocket, IT pros.

Until next time,

– Ned “RDMA good, old networking bad. Me simple man” Pyle

Windows 10 and reserved storage


Reserving disk space to keep Windows 10 up to date


Windows Insiders: To enable this new feature now, please see the last section “Testing out Storage Reserve” and complete the quest


Starting with the next major update, we’re making a few changes to how Windows 10 manages disk space. Through reserved storage, some disk space will be set aside for use by updates, apps, temporary files, and system caches. Our goal is to improve the day-to-day function of your PC by ensuring critical OS functions always have access to disk space.

Without reserved storage, if a user almost fills up her or his storage, several Windows and application scenarios that need free space to function become unreliable. With reserved storage, updates, apps, temporary files, and caches are less likely to take away valuable free space and should continue to operate as expected.

Reserved storage will be introduced automatically on devices that come with version 1903 pre-installed, or on which 1903 was clean installed. You don’t need to set anything up; this process runs automatically in the background. The rest of this blog post shares additional details on how reserved storage can help optimize your device.

How does it work?

When apps and system processes create temporary files, these files are automatically placed into reserved storage. They won't consume free user space when they are created, and they are less likely to do so as they grow in number, provided the reserve isn't full. Since disk space has been set aside for this purpose, your device will function more reliably. Storage Sense automatically removes unneeded temporary files, but if for some reason the reserve area fills up, Windows will continue to operate as expected by temporarily consuming some disk space outside of the reserve.

Windows Updates made easy

Updates help keep your device and data safe and secure, along with introducing new features to help you work and play the way you want. Every update temporarily requires some free disk space to download and install. On devices with reserved storage, updates will use the reserved space first.

When it’s time for an update, temporary, unneeded OS files in reserved storage are deleted and the update uses the full reserve area. This enables most PCs to download and install an update without having to free up any of your disk space, even when free disk space is minimal. If for some reason Windows Update needs more space than is reserved, it will automatically use other available free space. If that’s still not enough, Windows will guide you through steps to temporarily extend your disk with external storage, such as a USB stick, or to free up disk space.

How much of my storage is reserved?

In the next major release of Windows (19H1), we anticipate that reserved storage will start at about 7 GB; however, the amount of reserved space will vary over time based on how you use your device. For example, temporary files that consume general free space today on your device may consume space from reserved storage in the future. Additionally, over the last several releases we’ve reduced the size of Windows for most customers. We may adjust the size of reserved storage in the future based on diagnostic data or feedback. Reserved storage cannot be removed from the OS, but you can reduce the amount of space it reserves. More details below.

The following two factors influence how reserved storage changes size on your device:

  • Optional features. Many optional features are available for Windows. These may be pre-installed, acquired on demand by the system, or installed manually by you. When an optional feature is installed, Windows will increase the amount of reserved storage to ensure there is space to maintain this feature on your device when updates are installed. You can see which features are installed on your device by going to Settings > Apps > Apps & features > Manage optional features. You can reduce the amount of space required for reserved storage on your device by uninstalling optional features you are not using.
  • Installed Languages. Windows is localized into many languages. Although most of our customers only use one language at a time, some customers switch between two or more languages. When additional languages are installed, Windows will increase the amount of reserved storage to ensure there is space to maintain these languages when updates are installed. You can see which languages are installed on your device by going to Settings > Time & Language > Language. You can reduce the amount of space required for reserved storage on your device by uninstalling languages you aren’t using.

Follow these steps to check the reserved storage size: Click Start > Search for “Storage settings” > Click “Show more categories” > Click “System & reserved” > Look at the “Reserved storage” size.

Testing out reserved storage

This feature is available to Windows Insiders running Build 18298 or newer.

Step 1: Become a Windows Insider.

The Windows Insider Program brings millions of people around the world together to shape the next evolution of Windows 10. Become an Insider to gain exclusive access to upcoming Windows 10 features and the ability to submit feedback directly to Microsoft Engineers. Learn how to get started: Windows Insiders Quick Start

Step 2: Complete this quest to start using this feature.


Aaron Lower contributed to this post.
Follow Aaron Lower on LinkedIn
Follow Jesse Rajwan on LinkedIn

Azure Blob Storage on IoT Edge now includes Auto-Tiering and Auto-Prune functionalities


This post was authored by @Arpita Duppala, PM on the High Availability and Storage team. Follow her @arnuwish on Twitter.

Azure Blob Storage on IoT Edge is a lightweight, Azure-consistent module that provides local block blob storage and is available in public preview. We are excited to introduce auto-tiering and auto-prune functionality in our “Azure Blob Storage on IoT Edge” module. Currently these new features are available only for Linux AMD64 and Linux ARM32; support for Windows AMD64 is coming soon.

Auto-tiering is a configurable feature that automatically uploads data from your local blob storage to Azure, with support for intermittent internet connectivity. It allows you to:

  1. Turn the tiering feature ON or OFF
  2. Choose the order in which data is uploaded to Azure, such as FIFO or LIFO
  3. Specify the Azure Storage account to which the data will be uploaded
  4. Specify the containers you want to upload to Azure
  5. Perform full-blob tiering (using the Put Blob operation) and block-level tiering (using the Put Block and Put Block List operations)
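As a sketch, the tiering settings above are configured through the module’s desired properties in its module twin. The property names below (`deviceToCloudUploadProperties` and its fields) are based on the public documentation at the time of writing; treat the exact names, the container names, and the connection string as assumptions to verify against the current docs before deploying:

```python
import json

# Sketch of the desired properties for the Azure Blob Storage on IoT Edge
# module twin. Field names follow the public docs; verify before deploying.
desired_properties = {
    "deviceToCloudUploadProperties": {
        "uploadOn": True,                # 1. turn the tiering feature ON/OFF
        "uploadOrder": "OldestFirst",    # 2. FIFO ("OldestFirst") or LIFO ("NewestFirst")
        # 3. the Azure Storage account the data is uploaded to (placeholder values):
        "cloudStorageConnectionString": (
            "DefaultEndpointsProtocol=https;AccountName=<account>;"
            "AccountKey=<key>;EndpointSuffix=core.windows.net"
        ),
        # 4. which local containers to upload, and their cloud target names:
        "storageContainersForUpload": {
            "camera-images": {"target": "camera-images-cloud"}
        },
        "deleteAfterUpload": False,
    }
}

print(json.dumps(desired_properties, indent=2))
```

Applying this twin patch (for example, through the Azure portal’s “Module Identity Twin” editor) enables tiering for the listed containers without any change to the module’s container image.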

When your blob consists of blocks, the module uses block-level tiering to upload your data to Azure. Here are some common scenarios:

  1. Your application updates some blocks of a previously uploaded blob; the module uploads only the updated blocks, not the whole blob.
  2. The internet connection drops while the module is uploading a blob; when connectivity returns, the module uploads only the remaining blocks, not the whole blob.
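The block-level idea can be illustrated with a small sketch: split a blob into fixed-size blocks, hash each block, and compare against the hashes of the previously uploaded version so only changed blocks need re-sending. This is an illustration of the concept, not the module’s actual implementation:

```python
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MiB, a common block size for block blobs

def block_hashes(data: bytes, block_size: int = BLOCK_SIZE) -> list:
    """Split data into fixed-size blocks and hash each one."""
    return [hashlib.sha256(data[i:i + block_size]).hexdigest()
            for i in range(0, len(data), block_size)]

def changed_blocks(old: bytes, new: bytes) -> list:
    """Return the indices of blocks that differ and must be re-uploaded."""
    old_h, new_h = block_hashes(old), block_hashes(new)
    return [i for i, h in enumerate(new_h)
            if i >= len(old_h) or h != old_h[i]]

# A 12 MiB blob where only the middle 4 MiB block changes:
old = b"a" * BLOCK_SIZE + b"b" * BLOCK_SIZE + b"c" * BLOCK_SIZE
new = b"a" * BLOCK_SIZE + b"X" * BLOCK_SIZE + b"c" * BLOCK_SIZE
print(changed_blocks(old, new))  # -> [1]: only block 1 needs re-uploading
```

The same comparison explains the interrupted-upload case: blocks already committed on the service side don’t need to be sent again when connectivity returns.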

Auto-prune (auto-expiration) is a configurable feature in which the module automatically deletes blobs from local blob storage when their TTL (Time to Live) expires. It allows you to:

  1. Turn the auto-prune (auto-expiration) feature ON or OFF
  2. Specify the TTL in minutes
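The pruning decision amounts to a simple TTL check against each blob’s last-modified time. The helper below is an illustrative sketch only; its name and signature are invented for this example and do not correspond to the module’s internals:

```python
from datetime import datetime, timedelta

def is_expired(last_modified: datetime, ttl_minutes: int, now: datetime) -> bool:
    """True if the blob's age exceeds the configured TTL and it should be pruned."""
    return now - last_modified > timedelta(minutes=ttl_minutes)

now = datetime(2019, 3, 7, 12, 0)
# A blob last modified 2 hours ago, with a 60-minute TTL, is pruned:
print(is_expired(datetime(2019, 3, 7, 10, 0), ttl_minutes=60, now=now))   # -> True
# A blob last modified 30 minutes ago is kept:
print(is_expired(datetime(2019, 3, 7, 11, 30), ttl_minutes=60, now=now))  # -> False
```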

Video


Azure Blob Storage on IoT Edge – Version 1.1 (March 07, 2019)

In the diagram below, we have an edge device pre-installed with the Azure IoT Edge runtime. It is running a custom module that processes the data collected from the sensor and saves it to the local blob storage account. Because the module is Azure-consistent, the custom module can be developed using the Azure Storage SDK to make calls to the local blob storage. The module then automatically uploads the data from the specified containers to Azure while making sure your IoT Edge device does not run out of space.
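As a sketch, the custom module only needs a connection string pointing at the local module’s endpoint instead of the public cloud; everything past the connection string is ordinary Azure Storage SDK code (shown here with the azure-storage-blob v12 Python package). The host name, port, account name, and key below are placeholders for this example:

```python
def local_blob_connection_string(host: str, account: str, key: str) -> str:
    """Build a connection string for the local Azure Blob Storage on IoT Edge
    module. 11002 is a commonly mapped port; adjust it to your deployment."""
    return (
        "DefaultEndpointsProtocol=http;"
        f"BlobEndpoint=http://{host}:11002/{account};"
        f"AccountName={account};AccountKey={key};"
    )

conn_str = local_blob_connection_string("edge-device", "localaccount", "<base64-key>")
print(conn_str)

# With the azure-storage-blob package installed, the same code that talks to
# Azure talks to the local store (not executed here; it needs a live module):
#
#   from azure.storage.blob import BlobServiceClient
#   client = BlobServiceClient.from_connection_string(conn_str)
#   container = client.create_container("camera-images")
#   container.upload_blob("cow.jpg", b"...image bytes...")
```

Swapping `conn_str` for a public Azure Storage connection string is the only change needed to point the same business logic at the cloud.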

This scenario is useful when there is a lot of data to process, for example in industries that capture survey and behavioral data, research data, financial data, hospital data, and so on. It is efficient to process the data locally because so much of it is continuously being captured. The Azure Blob Storage on IoT Edge module lets you store and access such data efficiently, process it if required, automatically upload it to Azure, and automatically delete old data from the IoT Edge device to make space for new data.

Current Functionality:

With the current public preview module, users can:

  • Store data locally and access the local blob storage account using the Azure Storage SDK
  • Auto-tier data from the IoT Edge device to Azure
  • Auto-prune (auto-expire) data on the IoT Edge device
  • Reuse the business logic of an app written to store and access data on Azure
  • Deploy multiple instances on an IoT Edge device
  • Use any Azure IoT Edge Tier 1 host operating system

Next Steps:

  • Release the auto-tiering and auto-pruning features for Windows IoT
  • Integrate with Event Grid to light up event-driven computing scenarios on the IoT Edge platform
  • Reverse tiering: tier data back from Azure to the IoT Edge device on demand
  • Support more sophisticated tiering and pruning policies, based on early customer feedback
  • Expand the scope of Azure Storage consistency, e.g. Append Blob support

More Information:

Find more information about this module at https://aka.ms/AzureBlobStorage-IotModule

Feedback:

Your feedback is very important to us in making this module and its features useful and easy to use. Please share your feedback and let us know how we can improve.

You can reach out to us at absiotfeedback@microsoft.com 

