Storage Spaces Direct – Lab Environment Setup

Standard lab setup to test Storage Spaces Direct in a hyperconverged environment.

This article describes the lab configuration built to test the features and operation of Windows Server 2016 Storage Spaces Direct as a complete server infrastructure.

Hyper-Converged Lab with Storage Spaces Direct

Argon Systems has partnered with hardware technology companies Supermicro and Mellanox to build a reference architecture for Microsoft Storage Spaces Direct. Storage Spaces Direct (S2D) has specific hardware requirements and supports new technologies, including NVMe devices in an SSD form factor and TPM 2.0.

The lab incorporates the latest technologies from Mellanox, including their 100GbE switches and ConnectX-4 cards. The Supermicro server technologies include support for NVMe SSD drives that plug directly into the front drive slots, the exact required HBA hardware, and TPM 2.0 modules.

Microsoft has very specific hardware compatibility requirements for Windows, and as of this writing these requirements have not been ratified into a certified hardware list.
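
Before deploying S2D, the candidate hardware can be checked with the built-in cluster validation tests. The following PowerShell sketch is illustrative only; the node names S2D-Node1 through S2D-Node4 are placeholders for this lab. It runs the Storage Spaces Direct validation tests and lists the physical disks so the NVMe and SATA devices can be confirmed as unclaimed and poolable:

  # Validate the candidate nodes, including the Storage Spaces Direct tests
  # (node names are placeholders for this lab)
  Test-Cluster -Node S2D-Node1, S2D-Node2, S2D-Node3, S2D-Node4 `
      -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"

  # Confirm the NVMe and SATA devices are visible, unclaimed by RAID hardware, and poolable
  Get-PhysicalDisk |
      Select-Object FriendlyName, BusType, MediaType, CanPool, Size |
      Sort-Object BusType | Format-Table -AutoSize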

Lab Overview

Hyperconvergence Lab Overview

Lab Components


SN2700 Mellanox Switch

Mellanox Network Switch

  • SN2700 36 Port 100 GbE Switch

Network Cards

  • One 2-port 50GbE network card per server
  • Mellanox ConnectX-4 50GbE network cards – 2 x 50GbE ports

KB-1a-3 Server Front

Servers

  • 2 x Supermicro Twin Servers
  • TPM 2.0 module (sold separately)
  • 256GB Memory
  • Dual Intel E5-2680 CPUs

KB-1a-4 nvme ssd

NVMe Drives

  • 16 x 800GB NVMe drives – 4 per server node
  • Model: Intel P3700 NVMe SSD, 800GB

KB-1a-5 HDD Seagate

Hard Drives

  • 32 x 2TB SATA III 2.5" hard drives (see the pooling sketch below)
  • Model: 2TB, 7200 RPM, SATA III, 2.5" hard drive, 512E
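
With this drive mix, S2D uses the NVMe devices as the cache tier and the SATA hard drives as the capacity tier. As a minimal sketch (cluster name, node names, and cluster IP address are placeholders), the cluster is created without shared storage and S2D is then enabled, which automatically claims the faster NVMe devices for caching:

  # Create the cluster without shared storage (names and address are placeholders)
  New-Cluster -Name S2D-Lab -Node S2D-Node1, S2D-Node2, S2D-Node3, S2D-Node4 `
      -NoStorage -StaticAddress 192.168.100.50

  # Enable Storage Spaces Direct; the NVMe SSDs are claimed automatically as cache
  # and the SATA HDDs become the capacity tier
  Enable-ClusterStorageSpacesDirect

  # Review the resulting pool and how each disk is being used
  Get-StoragePool | Where-Object IsPrimordial -eq $false
  Get-PhysicalDisk | Group-Object MediaType, Usage -NoElement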

Snapshots of the Actual Lab

KB-1a-6 Switch Front

The Mellanox switch connected with breakout cables. Each 100GbE port on the switch is split into two 50GbE connections that attach to the 50GbE ports on the NIC cards.

KB-1a-7 Cables

Cable: MCP7H00-xxxx breakout cable

Network Interface Card: ConnectX-4 MCX416A-GCAT, PCIe 3.0 x16, 2-port 50GbE
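
For RDMA (RoCE) on the ConnectX-4 ports, Priority Flow Control and QoS for SMB Direct are typically configured on each node. The sketch below is only a starting point; the adapter names "SLOT 3 Port 1" and "SLOT 3 Port 2" are placeholders and should be replaced with the names reported by Get-NetAdapter:

  # Tag SMB Direct (port 445) traffic with priority 3 and enable PFC for that priority only
  New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
  Enable-NetQosFlowControl -Priority 3
  Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

  # Apply QoS on the two 50GbE ports and reserve bandwidth for SMB
  # (adapter names below are placeholders)
  Enable-NetAdapterQos -Name "SLOT 3 Port 1", "SLOT 3 Port 2"
  New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS

  # Confirm RDMA is active on both ports
  Get-NetAdapterRdma | Format-Table Name, Enabled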

KB-1a-8 Server rear

Rear view of the Supermicro SYS-2028TP-DNCTR server cabled and powered up.

Cabling Diagram

The single physical switch is configured as two logical switches to simulate a two-switch configuration. As of this writing, the ideal switch for this design has not yet been released to the market.

KB-1a-9-Wiring Diagram

9 thoughts on “Storage Spaces Direct – Lab Environment Setup”

  1. Robert Keith

    The two switches are joined into a single logical switch. Two Ethernet cables connect the switches together using the Mellanox ISL protocol.

    The two physical ports on each server connect to both switches; each physical port is cabled to a different switch. If a switch or cable fails, traffic flows through the companion switch.

    High availability on the server side is provided by the new Switch Embedded Teaming (SET) facility in Windows Server 2016. Each Hyper-V virtual switch can be configured with SET, which provides functionality similar to LBFO but is compatible with RDMA and SDNv2.
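
    As a rough sketch, the SET and RDMA configuration described above might look like the following on each node (the switch name and adapter names are placeholders):

      # Create a SET team over the two 50GbE ports (placeholder names)
      New-VMSwitch -Name "SETswitch" -NetAdapterName "SLOT 3 Port 1", "SLOT 3 Port 2" `
          -EnableEmbeddedTeaming $true -AllowManagementOS $false

      # Host vNICs for SMB traffic, each mapped to one physical port
      Add-VMNetworkAdapter -ManagementOS -Name "SMB1" -SwitchName "SETswitch"
      Add-VMNetworkAdapter -ManagementOS -Name "SMB2" -SwitchName "SETswitch"
      Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName "SMB1" -PhysicalNetAdapterName "SLOT 3 Port 1"
      Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName "SMB2" -PhysicalNetAdapterName "SLOT 3 Port 2"

      # Expose RDMA on the host vNICs
      Enable-NetAdapterRdma -Name "vEthernet (SMB1)", "vEthernet (SMB2)"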

  2. Robert Keith

    Hello Sandeep,

    No physical RAID hardware is compatible with Storage Spaces Direct. All disks are connected individually as SATA drives. The hyper-converged storage technology within Storage Spaces Direct combines all the individual SATA drives into a single group of disks.

    This combined group of SATA drives is then used to create a storage pool.

    The RAID-like resiliency is created when logical volumes (Virtual Disks) are defined within the storage pool. This resiliency resembles RAID 0, RAID 1, RAID 5/6, RAID 10, etc., but with many additional features not available with RAID hardware, such as deduplication, compression, encryption, and more.
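
    As a rough sketch, carving mirror- and parity-resilient volumes from the S2D pool might look like this (volume names and sizes are placeholders):

      # A mirror-resilient volume (comparable to RAID 1/10) – placeholder name and size
      New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Mirror-Vol" `
          -FileSystem CSVFS_ReFS -ResiliencySettingName Mirror -Size 1TB

      # A parity-resilient volume (comparable to RAID 5/6) – placeholder name and size
      New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Parity-Vol" `
          -FileSystem CSVFS_ReFS -ResiliencySettingName Parity -Size 2TB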

    Robert

  3. Sandeep Goli

    Thanks Robert. Since most of the servers come with a single controller, it is tough to put it into HBA mode, as we lose OS protection. So I am figuring out if I should go with a USB flash drive or microSD for the OS, but again this is not recommended in production.

  4. Robert Keith

    Hello Sandeep,
    Though operating systems are more commonly being deployed on SATA DOM drives, I would only recommend this if you have enough redundant servers to provide sufficient cover if a system fails.

    Many of the servers we recommend have a separate RAID card on the motherboard which supports 2 SATA drives in a RAID 1 Mirror configuration. These drives are hot swap available on the rear of the servers. This configuration is typically on the storage servers since these systems are larger and have more space to accommodate this.

    On other servers, we commonly use M.2 SATA drives which plug directly into the motherboard. These are not hot swappable, but servers with this configuration typically are compute servers and are stacked into a multi-node cluster. Removing a compute node which hosts VMs is a simpler task than taking down a storage node.

    Thanks for the great input and comments.

    Robert Keith

  5. Malik

    Hi Robert, great write-up and thanks for sharing. In this configuration, where did you install the OS?

  6. Robert Keith

    Hi Malik

    Please excuse the extremely late reply. A website bug did not notify me of this comment. Fixed now.

    Very good question. I left that part out.

    In this configuration we have an internal 200GB SATA SSD plugged directly into the motherboard. These systems have two SATA ports.

    We have a special cable providing both power and data and a custom disk mounting bracket.
