Hardware options for evaluating Storage Spaces Direct in Technical Preview 4

Note: this post originally appeared on https://aka.ms/clausjor by Claus Joergensen.

Hello, Claus here again. Right now I am at terminal B9 at Copenhagen Airport starting my trip back to the United States. This time I would like to talk a bit about options for evaluating Storage Spaces Direct in Windows Server Technical Preview 4.

You have three options for evaluating Storage Spaces Direct in Technical Preview 4:

  1. Hyper-V Virtual machines
  2. Validated server configurations from our partners
  3. Existing hardware that meets the requirements

Hyper-V Virtual Machines

Using Hyper-V virtual machines is a quick and simple way to get started with Storage Spaces Direct. You can use it to get a basic understanding of how to set up and manage Storage Spaces Direct. Having said that, you will not be able to experience all features or the full performance of Storage Spaces Direct. To evaluate Storage Spaces Direct, you will need at least four virtual machines, each with at least two data disks. For more information on how to do this, see Testing Storage Spaces Direct using Windows Server 2016 virtual machines.
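
If you want to script the virtual machine setup, a minimal sketch like the following can be used. It assumes the Hyper-V PowerShell module on the host, a virtual switch named "External", and VM names, sizes and paths that are purely illustrative; installing the operating system in each VM is not shown.

# Create four VMs, each with two data disks for Storage Spaces Direct (names, sizes and paths are illustrative)
1..4 | ForEach-Object {
    $vmName = "S2D-Node$_"
    New-VM -Name $vmName -MemoryStartupBytes 8GB -SwitchName "External" -Generation 2 -Path "C:\VMs"
    foreach ($i in 1..2) {
        $vhdPath = "C:\VMs\${vmName}-Data$i.vhdx"
        New-VHD -Path $vhdPath -SizeBytes 100GB -Dynamic
        Add-VMHardDiskDrive -VMName $vmName -Path $vhdPath
    }
}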

Note: make sure not to use the ‘processor compatibility’ option on Hyper-V virtual machines used for Storage Spaces Direct. Processor compatibility masks certain processor capabilities and will prevent Storage Spaces Direct from being used, even if the physical processor supports the required capabilities.
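
You can check whether processor compatibility is enabled with Get-VMProcessor and turn it off with Set-VMProcessor while the virtual machine is off (the VM name below is illustrative):

# Inspect and disable the processor compatibility setting (run on the Hyper-V host; the VM must be turned off)
Get-VMProcessor -VMName "S2D-Node1" | FT VMName, CompatibilityForMigrationEnabled -AutoSize
Set-VMProcessor -VMName "S2D-Node1" -CompatibilityForMigrationEnabled $false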

Validated server configurations from our partners

We are working closely with our hardware partners to define and validate server configurations for Storage Spaces Direct. Using these hardware configurations is the best option for evaluating Storage Spaces Direct: we are working closely with the partners to validate that the configurations work well with Storage Spaces Direct, so you can experience the full feature set as well as the performance potential.

Several of our partners are ready to help you. Below are links to our partners detailing the hardware configurations and how to purchase and deploy the hardware:

Dell

Fujitsu

Lenovo

Once the hardware and Windows Server Technical Preview 4 are deployed, see the Storage Spaces Direct experience guide to complete the deployment.

Existing Hardware

We highly recommend using the server configurations from our partners that are in the process of being validated, as we have worked closely with them to ensure they function properly and provide the best overall experience. If it is not possible to use one of these configurations, you may be able to evaluate Storage Spaces Direct in Technical Preview 4 with your existing hardware, provided it meets the hardware and configuration requirements below.

Note: This is not a statement of support for your particular configurations; these requirements are current as of Technical Preview 4 and might change.

Configuration

Storage Spaces Direct requires at least four servers, which are expected to have the same configuration: identical CPU and memory, identical network interface cards, and identical storage controllers and devices. The servers run the same software load and are configured as a Windows Server Failover Cluster.
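
As a rough sketch of how the cluster is formed and Storage Spaces Direct is enabled (server and cluster names are illustrative, and cmdlet availability is as of the Windows Server 2016 previews; follow the Storage Spaces Direct experience guide for the authoritative steps):

# Validate the nodes, create the failover cluster without any shared storage, then enable Storage Spaces Direct
Test-Cluster -Node S2D-Node1, S2D-Node2, S2D-Node3, S2D-Node4
New-Cluster -Name S2DCluster -Node S2D-Node1, S2D-Node2, S2D-Node3, S2D-Node4 -NoStorage
Enable-ClusterStorageSpacesDirect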

Using at least four servers (up to 16) provides the best storage resiliency and availability, and it satisfies the requirements for both mirrored configurations with 2 or 3 copies of data and for dual parity with erasure coded data. The below table outlines the resiliency for these storage configurations:

[Table: storage resiliency for 2-copy mirror, 3-copy mirror, and dual parity (erasure coded) configurations]

CPU

The servers in a Storage Spaces Direct configuration are generally expected to have a dual-socket CPU configuration, for the best flexibility, and to be equipped with modern CPUs (Intel® Xeon® Processor E5 v3 Family). The CPU requirements depend on the deployment mode.

In the disaggregated deployment mode (Scale-Out File Server mode), the CPU is primarily consumed by storage and network IO, but is also used by advanced storage operations such as erasure coding.

In the hyper-converged deployment mode (virtual machines hosted on the same cluster as Storage Spaces Direct), the CPU serves the VM workload as well as the storage and networking requirements. This mode will generally require more CPU horsepower, so more cores and faster processors will allow more VMs to be hosted on the system.

Memory

The recommended minimum is 128GB, which allows for the best memory performance (balanced with the number of memory channels) and provides memory for the base operating system and the Software Storage Bus cache in Storage Spaces Direct. For more information on the Software Storage Bus cache, see this blog post.

The 128GB memory amount would provide for the disaggregated deployment mode, or for a hyper-converged deployment mode with a smaller number of VMs. Hyper-converged deployments with a larger number of VMs would require additional memory, depending on the number of VMs and how much memory each VM consumes.
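
To quickly inventory the CPU and memory of an existing server against these recommendations, CIM queries like the following can be used (a minimal sketch):

# Show the installed processors (sockets and cores) and the total physical memory in GB
Get-CimInstance -ClassName Win32_Processor | FT SocketDesignation, Name, NumberOfCores -AutoSize
"{0:N0} GB physical memory" -f ((Get-CimInstance -ClassName Win32_ComputerSystem).TotalPhysicalMemory / 1GB)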

Network interface cards

Storage Spaces Direct requires a minimum of one 10GbE network interface card (NIC) per server.

Most configurations, such as a general purpose hyper-converged configuration, will perform most efficiently and reliably using 10GbE or faster NICs with Remote Direct Memory Access (RDMA) capability. The RDMA type should be either RoCE (RDMA over Converged Ethernet) or iWARP (Internet Wide Area RDMA Protocol).

If the configuration is primarily for backup or archive-like workloads (large sequential IO), a 10GbE network interface card (NIC) without Remote Direct Memory Access (RDMA) capability can be used.

In both cases, a single, dual-ported NIC provides the best performance and resiliency to network connectivity issues.

You can use the following commands to inspect the link speed and RDMA capabilities of your network adapter:

Get-NetAdapter | FT Name, InterfaceDescription, LinkSpeed -AutoSize
Name                 InterfaceDescription                       LinkSpeed
----                 --------------------                       ---------
SLOT 6 2             Mellanox ConnectX-3 Ethernet Adapter #2    10 Gbps
SLOT 6               Mellanox ConnectX-3 Ethernet Adapter       10 Gbps

Get-NetAdapterRDMA | FT Name, InterfaceDescription, Enabled -AutoSize
Name                 InterfaceDescription                    Enabled
----                 --------------------                    -------
SLOT 6 2             Mellanox ConnectX-3 Ethernet Adapter #2    True
SLOT 6               Mellanox ConnectX-3 Ethernet Adapter       True

Network configuration and switches

The simple diagram below captures the available switch configurations for a four-server cluster with variations of NICs and NIC ports. Network deployments #2 and #3 are strongly preferred for the best network resiliency and performance. Each network interface of a server should be in its own subnet: for diagram #1 there will be one IP subnet for the servers, while for diagrams #2 and #3 there will be two IP subnets and each server will be connected to both. IP subnet separation is needed for the proper use of both network interfaces in the Windows Failover Clustering configuration. If RoCE is used, the physical switches must be configured properly to support RoCE with the appropriate Data Center Bridging (DCB) and Priority Flow Control (PFC) settings.
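
On the host side, the DCB and PFC settings for RoCE are typically configured with the NetQos cmdlets. The sketch below assumes SMB Direct traffic is tagged with priority 3 and given a 50% ETS bandwidth reservation, and uses the adapter names from the example above; match these values to what is configured on your physical switches.

# Host-side DCB/PFC configuration for RoCE (priority 3 and 50% are assumptions; align with your switch configuration)
Install-WindowsFeature -Name Data-Center-Bridging
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
Enable-NetAdapterQos -Name "SLOT 6", "SLOT 6 2"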

The network switch requirements depend on the choice of network interface card (NIC) used in the server.

If non-RDMA NICs are chosen, then the network switch needs to meet basic Windows Server requirements for Network Layer 2 support.

For RDMA-capable NICs, the RDMA type of the NIC imposes additional requirements on the switch:

  • For iWARP-capable NICs, the requirements are the same as for non-RDMA-capable NICs
  • For RoCE-capable NICs, the network switches must provide Enhanced Transmission Selection (802.1Qaz) and Priority-based Flow Control (802.1p/Q and 802.1Qbb)
  • Mappings of traffic class (TC) markings between L2 domains must be configured on the switches that carry RDMA traffic.

Storage Devices

Each server in a Storage Spaces Direct configuration must have the same total number of storage devices, and if the servers have a mix of storage device types (e.g. SSD and HDD), then the number of each particular type of device must be the same in each server.

If a storage configuration has both solid state disks (e.g. NVMe flash or SATA SSDs) and hard disk drives (HDDs), then upon configuration the solid state devices (performance devices) will be used for read and write-back caching and the hard disk drives (capacity devices) will be used for data storage. If a storage configuration is all solid state (e.g. NVMe SSD and SATA SSD), then upon configuration the NVMe SSD devices (performance devices) will be used for write-back caching and the SATA SSDs (capacity devices) will be used for data storage. See the Storage Spaces Direct experience guide for how to configure these storage configurations.

Each server must be configured with at least 2 performance devices and 4 capacity devices. The number of capacity devices must be a multiple of the number of performance devices. The table below shows some example configurations for servers with 12 drive bays.

[Table: example performance/capacity device configurations for servers with 12 drive bays]
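
A quick way to confirm that a server has the expected mix of performance and capacity devices is to group its physical disks by media type and bus type (a minimal check; run it on each server and ignore the boot/system disk):

# Count the storage devices on this server by media type and bus type
Get-PhysicalDisk | Group-Object MediaType, BusType | FT Count, Name -AutoSize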

Depending on the server design, the NVMe devices (solid state disks connected via PCI Express) might either be seated directly into a PCIe slot, or PCIe connectivity might be extended into one or more drive bays so that the NVMe devices can be seated in those bays. Seating NVMe devices directly in PCIe slots frees the drive bays that would otherwise hold them to host additional capacity devices, increasing the total number of storage devices.

It is important to use flash-based storage devices with sufficient endurance. Endurance is often expressed as ‘drive writes per day’ or DWPD. If the endurance is too low, the flash devices are likely to fail, or significantly throttle write IO, sooner than expected. If the endurance is too high, the flash devices will be fine, but you will have wasted money on these more expensive devices. Calculating the necessary endurance can be tricky, as there are many factors involved, including the actual daily write churn, read operations resulting in read cache churn, and write amplification as a result of the desired resiliency. Note that in an all-flash configuration with NVMe + SSD, the NVMe devices will absorb the majority of the writes, so SSDs with lower endurance can be used, resulting in lower cost.

Storage Device Connectivity

Storage Spaces Direct supports three storage device attach types: NVMe, SATA or SAS. NVMe devices are connected via PCI Express (PCIe). The SATA and SAS devices can be either SSDs or HDDs. All SATA and SAS devices must be attached to a SAS Host Bus Adapter (HBA). This HBA must be a “simple” HBA, which means the devices show as SAS devices in Windows Server.

The HBA must be attached to a SAS expander, and the SATA or SAS devices must then be attached to the SAS expander. The following (very simplified) diagram captures the basic idea of storage device connectivity.

You can use the following command to inspect the storage device media type and bus type in your system:

Get-PhysicalDisk | FT FriendlyName, MediaType, BusType -AutoSize
FriendlyName          MediaType BusType
------------          --------- -------
NVMe INTEL SSDPEDMD01 SSD       NVMe
NVMe INTEL SSDPEDMD01 SSD       NVMe
ATA ST4000NM0033-9ZM  HDD       SAS
ATA ST4000NM0033-9ZM  HDD       SAS
ATA ST4000NM0033-9ZM  HDD       SAS
ATA ST4000NM0033-9ZM  HDD       SAS
ATA ST4000NM0033-9ZM  HDD       SAS
ATA ST4000NM0033-9ZM  HDD       SAS
ATA ST4000NM0033-9ZM  HDD       SAS
ATA ST4000NM0033-9ZM  HDD       SAS
ATA ST4000NM0033-9ZM  HDD       SAS
ATA ST4000NM0033-9ZM  HDD       SAS

The devices must show a bus type of either SAS (even for SATA devices) or NVMe (you can ignore your boot and system devices). If the devices show a bus type of SATA, they are connected via a SATA controller, which is not supported. If the devices show a bus type of RAID, the controller is not a “simple” HBA but a RAID controller, which is not supported. In addition, the devices must show the accurate media type, either HDD or SSD.

In addition, all devices must have a unique disk signature; each device must show a unique device ID. If devices show the same ID, the disk signature is not unique, which is not supported.
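
One way to spot duplicates is to group the physical disks by their unique ID and look for groups with more than one member (a minimal check, assuming the UniqueId property reflects the device ID in question; for a cluster-wide check, gather the disks from all servers first):

# List any physical disks that share the same unique ID (this should return nothing)
Get-PhysicalDisk | Group-Object UniqueId | Where-Object { $_.Count -gt 1 } | FT Count, Name -AutoSize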

Finally, the server or the external storage enclosure (if one is used) must meet the Windows Server requirements for Storage Enclosures.

You can use the following command to inspect the presence of storage enclosures in your system:

Get-StorageEnclosure | FT FriendlyName
FriendlyName
------------
DP BP13G+EXP

There must be at least one enclosure listed per SAS HBA in the cluster. If the system has any NVMe devices, there will also be one enclosure listed per server, with vendor MSFT.

In addition, all storage enclosures must have a unique ID. If the enclosures show the same ID, the enclosure ID is not unique, which is not supported.
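
A similar check works for the enclosures (it should return nothing if all enclosure IDs are unique):

# List any storage enclosures that share the same unique ID (this should return nothing)
Get-StorageEnclosure | Group-Object UniqueId | Where-Object { $_.Count -gt 1 } | FT Count, Name -AutoSize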

Storage Spaces Direct does not support Multipath I/O (MPIO) for SAS storage device configurations. Single-port SAS device configurations can be used.

Summary of requirements
  • All servers and server components must be Windows Server certified
  • A minimum of 4 servers and a maximum of 16 servers
  • Dual-socket CPU and 128GB of memory
  • 10GbE or better network bitrate must be used for the NICs
  • RDMA-capable NICs are strongly recommended for the best performance and density
  • If RDMA-capable NICs are used, the physical switch must meet the associated RDMA requirements
  • All servers must be connected to the same physical switch (or switches, as per example 3 above)
  • A minimum of 2 performance devices and 4 capacity devices per server, with the number of capacity devices being a multiple of the number of performance devices
  • A simple HBA is required for SAS and SATA devices; RAID controllers and SAN/iSCSI devices are not supported
  • All disk devices and enclosures must have a unique ID
  • MPIO, or physically connecting disks via multiple paths, is not supported
  • The storage devices in each server must have one of the following configurations:
    • NVMe + SATA or SAS SSD
    • NVMe + SATA or SAS HDD
    • SATA or SAS SSD + SATA or SAS HDD

That is it from me this time.

