
Storage Spaces Direct – Lab Environment Setup

This article describes the lab configuration built to test the features and operation of Windows Server 2016 Storage Spaces Direct as a complete hyper-converged server infrastructure.

Hyper-Converged Lab with Storage Spaces Direct

Argon Systems has partnered with hardware technology companies Supermicro and Mellanox to build a reference architecture for Microsoft Windows Storage Spaces Direct. Storage Spaces Direct (S2D) has specific hardware requirements and supports new technologies including NVMe devices in an SSD form factor and TPM 2.0.

The lab incorporates the latest network technologies from Mellanox, including their 100GbE network switches and ConnectX-4 network cards. The server technologies from Supermicro include support for NVMe SSD drives that plug directly into the front drive slots of the servers, as well as the required HBA hardware and TPM 2.0 modules.

Microsoft has very specific requirements for hardware compatibility with Windows Server 2016, and as of this writing these requirements have not yet been ratified into a certified hardware list.
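Until such a list is published, a practical check is to run cluster validation against the lab nodes with the Storage Spaces Direct test category included. The sketch below is only an illustration; the node names (S2D-Node1 through S2D-Node4) are placeholders for the four Supermicro Twin nodes in this lab.

    # Minimal validation sketch; node names are placeholders for the four lab nodes
    $nodes = "S2D-Node1", "S2D-Node2", "S2D-Node3", "S2D-Node4"

    # Install the roles needed for a hyper-converged deployment on each node
    Invoke-Command -ComputerName $nodes -ScriptBlock {
        Install-WindowsFeature -Name Hyper-V, Failover-Clustering, FS-FileServer -IncludeManagementTools
    }

    # Run cluster validation, including the Storage Spaces Direct tests, to confirm
    # the NVMe/SATA devices and network adapters are suitable for S2D
    Test-Cluster -Node $nodes -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"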

Lab Overview

Hyperconvergence Lab Overview

Lab Components


SN2700 Mellanox Switch

Mellanox Network Switch

  • SN2700 36-port 100GbE Switch

Network Cards

  • 2 port 50GbE Network Card per server
  • Mellanox ConnectX-4 50GbE Network Cards – 2 x 50GbE Ports

Servers

KB-1a-3 Server Front

  • 2 x Supermicro Twin Servers
  • TPM 2.0 Module (sold separately)
  • 256GB Memory
  • Dual Intel E5-2680 CPUs

NVMe Drives

KB-1a-4 NVMe SSD

  • 16 x 800GB NVMe Drives – 4 per server node
  • Model: Intel P3700 NVMe SSD, 800GB

 


Hard Drives

KB-1a-5 HDD Seagate

  • 32 x 2TB SATA 2.5″ Hard Drives
  • Model: 2TB, 7200 RPM, SATA III, 2.5″ Hard Drive, 512e
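
Once the nodes are up, it is worth confirming that the NVMe and SATA devices listed above are all visible to Windows and eligible for pooling. A minimal check is sketched below; it assumes the disks are blank (CanPool reports False for disks that already hold partitions or belong to a pool).

    # List the drives Storage Spaces Direct can claim on this node
    # (CanPool is False for disks with existing partitions or pool membership)
    Get-PhysicalDisk |
        Where-Object CanPool -eq $true |
        Sort-Object MediaType |
        Format-Table FriendlyName, MediaType, BusType, @{Label="Size(GB)"; Expression={[math]::Round($_.Size / 1GB)}}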

 


Snapshots of the Actual Lab

KB-1a-6 Switch Front

The Mellanox switch connected with breakout cables. Each 100GbE port on the switch is split into two 50GbE connections that connect to the 50GbE NIC ports.

KB-1a-7 Cables

Cable: MCP7H00-xxxx breakout cable

Network Interface Card: ConnectX-4 MCX416A-GCAT, PCIe 3.0 x16, 2-port 50GbE
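
The ConnectX-4 adapters are RDMA (RoCE) capable, which Storage Spaces Direct uses for SMB Direct traffic between nodes. A quick way to confirm the ports are recognized and RDMA-enabled is sketched below; the adapter names are only placeholders for whatever names Windows assigns.

    # Confirm the ConnectX-4 ports are visible and linked at 50GbE
    Get-NetAdapter | Format-Table Name, InterfaceDescription, LinkSpeed, Status

    # Check RDMA state per adapter; Enabled should be True for both 50GbE ports
    Get-NetAdapterRdma | Format-Table Name, Enabled

    # Enable RDMA on the physical ports if needed
    # ("SLOT 1 Port 1"/"SLOT 1 Port 2" are placeholder adapter names)
    Enable-NetAdapterRdma -Name "SLOT 1 Port 1", "SLOT 1 Port 2"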


KB-1a-8 Server rear

Rear view of the Supermicro SYS-2028TP-DNCTR server cabled and powered up.

Cabling Diagram

The switch is configured as two logical switches to simulate high availability with a switch cluster; at the time of this writing, a second physical switch was not yet available.

KB-1a-9-Wiring Diagram


6 Comments

  1. Derek

    How are the switches configured in this layout?
    Are the “2 switches” stacked?

    Reply
  2. Robert Keith

    The two switches are joined into a single logical switch. Two Ethernet cables connect the switches together using the Mellanox ISL protocol.

    The two physical ports on each server connect to both switches; each physical port is connected to a different switch. If a switch or cable fails, the traffic flows through the companion switch.

    High availability on the server side is provided by the new Switch Embedded Teaming (SET) feature in Windows Server 2016. Each Hyper-V virtual switch can be configured with SET, which provides teaming similar to LBFO but is compatible with RDMA and SDNv2.
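
    As a rough sketch (the switch name, NIC names, and vNIC names below are just placeholders), a SET-based virtual switch for this kind of deployment looks something like this:

        # Team both 50GbE ports into one Hyper-V virtual switch using SET
        New-VMSwitch -Name "S2DSwitch" -NetAdapterName "NIC1", "NIC2" -EnableEmbeddedTeaming $true -AllowManagementOS $false

        # Add host vNICs for storage traffic and enable RDMA on them for SMB Direct
        Add-VMNetworkAdapter -ManagementOS -Name "SMB1" -SwitchName "S2DSwitch"
        Add-VMNetworkAdapter -ManagementOS -Name "SMB2" -SwitchName "S2DSwitch"
        Enable-NetAdapterRdma -Name "vEthernet (SMB1)", "vEthernet (SMB2)"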

    Reply
  3. Sandeep Goli

    Were the SATA disks configured in any physical RAID?

    Reply
    • Robert Keith

      Hello Sandeep,

      No physical RAID hardware is compatible with Storage Spaces Direct. All disks are connected individually as SATA drives. The hyper-converged storage technology within Storage Spaces Direct combines all the individual SATA drives into a single group of disks.

      This combined group of drives is then used to create a storage pool.

      The RAID-like resiliency is created when logical volumes (virtual disks) are defined within the storage pool. This resiliency resembles RAID 0, RAID 1, RAID 5/6, RAID 10, etc., but with many additional features not available with hardware RAID, such as deduplication, compression, and encryption.
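
      As a rough illustration, once the cluster is formed, enabling S2D and carving a resilient volume out of the pool looks something like the following (the volume name and size are just examples):

          # Claim the eligible disks into the S2D pool; the faster NVMe devices become cache by default
          Enable-ClusterStorageSpacesDirect

          # Create a mirrored, cluster-shared ReFS volume from the pool
          New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Volume1" -FileSystem CSVFS_ReFS -Size 2TB -ResiliencySettingName Mirror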

      Robert

      Reply
  4. Sandeep Goli

    Thanks Robert. Since most servers come with a single controller, it is tough to put it into HBA mode, as we lose OS protection. So I am figuring out if I should go with a USB flash drive or microSD for the OS, but again this is not recommended in production.

    Reply
    • Robert Keith

      Hello Sandeep,
      Though operating systems are more commonly being deployed on SATA DOM drives, I would only recommend this if you have enough redundant servers to provide sufficient cover if a system fails.

      Many of the servers we recommend have a separate RAID card on the motherboard which supports 2 SATA drives in a RAID 1 mirror configuration. These drives are hot-swappable from the rear of the servers. This configuration is typically found on the storage servers, since those systems are larger and have more room to accommodate it.

      On other servers, we commonly use M.2 SATA drives which plug directly into the motherboard. These are not hot swappable, but servers with this configuration typically are compute servers and are stacked into a multi-node cluster. Removing a compute node which hosts VMs is a simpler task than taking down a storage node.

      Thanks for the great input and comments.

      Robert Keith

      Reply
