Extending Data Deduplication to new workloads in Windows Server 2012 R2


This post is a part of the nine-part “What’s New in Windows Server & System Center 2012 R2” series that is featured on Brad Anderson’s In the Cloud blog.  Today’s blog post covers Data Deduplication and how it applies to the larger topic of “Transform the Datacenter.”  To read that post and see the other technologies discussed, read today’s post: “Delivering Infrastructure as a Service (IAAS).”  

In Windows Server 2012 we introduced the new Data Deduplication feature set, which quickly became one of the standard things to consider when deploying file servers. More space on existing hardware at no cost other than running Windows Server 2012? Seems like a pretty good deal.

Not to mention that we saw great space savings on many types of real-world data at rest, from general file shares to static VHD libraries.

These savings rates were measured on various customer deployments of Data Deduplication on Windows Server 2012. Along the way, we also noticed some interesting trends:

  • Customers were adjusting the default policies that control which files get optimized so that more data was included. By default, Data Deduplication only optimizes files that have not been modified in 5 days. Customers were setting it to optimize files older than 3 days, and in many cases to optimize all files regardless of age (see the sketch after this list).
  • Customers were attempting to optimize their running VHD libraries… which of course doesn’t quite work, because Data Deduplication in Windows Server 2012 skips files that are in use.
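
As a rough sketch, that kind of policy change looks like this with the Deduplication PowerShell cmdlets; the drive letter is just a placeholder for a deduplication-enabled volume:

    # Lower the file-age policy so that more files become candidates for optimization.
    # "E:" is a placeholder; use the deduplication-enabled volume you care about.
    Import-Module Deduplication

    # Optimize files that have not changed in the last 3 days instead of the default 5.
    Set-DedupVolume -Volume "E:" -MinimumFileAgeDays 3

    # Or optimize all files regardless of age.
    Set-DedupVolume -Volume "E:" -MinimumFileAgeDays 0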

In both cases we saw people trying to put more data under Data Deduplication to take better advantage of the huge savings seen on static VHD libraries. However, Data Deduplication in Windows Server 2012 was not really designed to deal with data that changes frequently or is in active use.

The road to new workloads for Data Deduplication

The customer feedback we were getting showed a clear need to reduce storage costs in private clouds (see http://blogs.technet.com/b/in_the_cloud/archive/2013/07/31/what-s-new-in-2012-r2-delivering-infrastructure-as-a-service.aspx for an overview of all the other new things around storage) and specifically to extend Data Deduplication for new workloads.

Specifically we needed to start supporting storage of live VHDs for some scenarios.

It turns out that there were a few key changes that had to be made to even consider using Data Deduplication for open files:

  • The read performance was pretty good already, but the write performance needed to be improved.
  • The speed at which Data Deduplication optimizes files needed to become faster to keep up with changes (churn) in files.
  • We had to allow open files (files that are actively being modified) to be optimized by Data Deduplication.

We also realized that all of this would take up resources on the server running Data Deduplication. If we were to run this on the same server as the VMs, then we’d be competing with them for resources. Especially memory. So we quickly came to the conclusion that we needed to separate out storage and computation nodes when Data Deduplication was involved with virtualization.

Of course, that meant we had to use a scale-out file share and therefore needed to support deduplication on Cluster Shared Volumes (CSV).

Then we came to the question: how fast did all of this have to be for us to be successful? Well… as fast as possible. However, we knew that Data Deduplication has to incur some cost, so we needed concrete goals. It turns out that deciding whether you are fast enough for all virtualization scenarios is very difficult. So we decided to take a first step with a virtualization workload that was well understood:

Data Deduplication in Windows Server 2012 R2 would support optimization of storage for Virtual Desktop Infrastructure (VDI) deployments as long as the storage and compute nodes were connected remotely.
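
Concretely, on the remote file server that holds the VDI storage this ends up being a feature install plus an enable step. A minimal sketch with the Deduplication cmdlets might look like the following; the volume letter is illustrative:

    # Sketch: on the file server that stores the VDI VHDs (remote from the Hyper-V hosts),
    # install the feature and enable deduplication with the Hyper-V (VDI) usage type
    # introduced in Windows Server 2012 R2. "D:" is a placeholder data volume.
    Install-WindowsFeature -Name FS-Data-Deduplication

    Enable-DedupVolume -Volume "D:" -UsageType HyperV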

What’s new in Data Deduplication in Windows Server 2012 R2 Preview

With the Windows Server 2012 R2 Preview, Data Deduplication is extended to the remote storage of the VDI workload:

 

  • CSV volume support
  • Faster deduplication of data
  • Deduplication of open (in use) files
  • Faster read/write performance of deduplicated files
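
For example, the faster optimization and the CSV support can be exercised directly from PowerShell. A minimal sketch, with the CSV path as a placeholder for a clustered volume on the file server:

    # Sketch: manually trigger an optimization pass on a deduplication-enabled CSV volume
    # instead of waiting for the background schedule. The path is a placeholder.
    Start-DedupJob -Volume "C:\ClusterStorage\Volume1" -Type Optimization

    # List running and queued deduplication jobs and their progress.
    Get-DedupJob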

 

Is Hyper-V in general supported with a Deduplicated volume?

We spent a lot of time ensuring that Data Deduplication behaves correctly on general virtualization workloads. However, we focused our performance efforts on making sure that optimized files perform adequately in VDI scenarios. For non-VDI scenarios (general Hyper-V VMs), we cannot provide the same performance guarantees.

As a result, we do not support deduplication of arbitrary in-use VHDs in Windows Server 2012 R2. However, since Data Deduplication is a core part of the storage stack, there is no explicit block that prevents it from being enabled on arbitrary workloads.

What benefits do we get from using Data Deduplication with VDI?

We will start with the easy one: you will save space! And of course, saving space translates into saving money. Deduplication savings for VDI deployments can reach as high as 95%. This makes SSD-based volumes practical for VDI, leveraging their improved IO characteristics while mitigating their low capacity.

This also allows for simplification of the surrounding infrastructure such as JBODs, cooling, power, etc.
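
To get a sense of the savings on a given volume, the Deduplication cmdlets report them directly. A quick sketch, with the volume letter as a placeholder:

    # Sketch: check how much space deduplication is saving on a volume.
    # "D:" is a placeholder for the deduplication-enabled volume.
    Get-DedupVolume -Volume "D:" | Format-List Volume, SavedSpace, SavingsRate

    # Overall status, including how many files have been optimized on the volume.
    Get-DedupStatus -Volume "D:"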

In addition, because Data Deduplication consolidates files, more efficient caching mechanisms become possible. This improves the IO characteristics of the storage subsystem for some types of operations. So not only does deduplication save money, it can make things go faster.

As a result, we can often stretch the VM capacity of the storage subsystem without buying additional hardware or infrastructure.

Wrap-up

Data Deduplication in Windows Server 2012 R2 enables optimization of live VHDs for the VDI workloads and allows for deduplicated CSV volumes. It also significantly improves the performance of optimization as well as IO on optimized files. This will allow better utilization of existing storage subsystems for general file servers as well as for VDI storage and simplify future infrastructure investments.

We hope you find these new capabilities as exciting as we find them and look forward to hearing from you.

To see all of the posts in this series, check out the What’s New in Windows Server & System Center 2012 R2 archive.

