Storage Spaces Direct Step by Step: Part 1 Core Cluster

How to deploy S2D on a 4-node cluster. Perform each step in preparation to manage and support the core cluster in a production environment. Part 1 of 4.

Introduction

This how-to guide lists the steps to deploy Storage Spaces Direct (S2D) on a four-node cluster of servers. Readers are encouraged to perform and understand each step of the deployment in preparation for managing and supporting this environment in production.

This article is the first of a four-part series.

  1. Core Cluster [this article]
  2. Troubleshooting Storage Clusters
  3. Configuring Storage Network Infrastructures
  4. Managing Clusters

Info

This guide will use the Enable-ClusterS2D command, which performs several operations with a single command: creating the storage pool, enabling Storage Spaces Direct, creating storage tiers, and so on. Please refer to other Argon Systems articles to perform these operations manually if additional customizations are desired.

Preparation

A typical hardware lab configuration is documented in the Argon Systems article: Storage Spaces Direct – Lab Environment Setup.

Perform each of these steps on every node.

  1. Install Windows Server 2016 Datacenter onto each node.

The node names in this example are HC-Node1, HC-Node2, HC-Node3, HC-Node4. Your environment will likely follow your own IT naming standards.

Follow the instructions in our technical article to create a Windows 2016 USB installation drive.

  2. Check that the physical disks are available to Windows.

If we list the physical disks on any node with Get-PhysicalDisk, we should see 12 disks, as shown in the screenshot below.

[Screenshot: Get-PhysicalDisk output showing 12 disks]
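
To run the same check with only the columns that matter here, something like the following can be used on each node (a quick sketch, not from the original article):

# List the local disks with the properties relevant to pooling
Get-PhysicalDisk | Sort-Object FriendlyName | Select-Object FriendlyName, MediaType, CanPool, Size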
  3. Update the network addresses on your physical adapters.

Each server will require three network adapters:

  • Management – 1 GbE physical network port
  • Provider1 and Provider2 – 10, 25, 40, or 100 GbE physical port used for storage access, inter-server communications, and client access

Configure IP addresses for each adapter. The IP subnets may be the same or different depending on your requirements.
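
As a sketch of what that looks like for one adapter (the addresses, prefix length, gateway, and DNS server below are placeholders, not values from this guide; the adapter still carries its default name at this point):

# Assign a static address to the port that will become the Management adapter (example values only)
New-NetIPAddress -InterfaceAlias "Ethernet 3" -IPAddress 172.101.4.11 -PrefixLength 24 -DefaultGateway 172.101.4.1
# Point the adapter at your DNS server (placeholder address)
Set-DnsClientServerAddress -InterfaceAlias "Ethernet 3" -ServerAddresses 172.101.4.2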

  4. Rename the physical adapters.

The 1 GbE port is used for management and Remote Desktop access.

Info

This port is not the out-of-band IPMI port, but one of the 1 GbE ports selected for this purpose.

In these examples, the adapter names are Management, Provider1, and Provider2.

The PowerShell command to rename an adapter is:

Rename-NetAdapter -Name "Ethernet 3" -NewName "Management"
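
To list the current adapter names before renaming, and to rename the two provider ports the same way (the "Ethernet 4" and "Ethernet 5" source names are placeholders; match them to what Get-NetAdapter reports on your hardware):

# Show the current adapter names, descriptions, and link speeds
Get-NetAdapter | Sort-Object Name | Format-Table Name, InterfaceDescription, LinkSpeed -AutoSize
# Rename the two RDMA-capable ports (placeholder source names)
Rename-NetAdapter -Name "Ethernet 4" -NewName "Provider1"
Rename-NetAdapter -Name "Ethernet 5" -NewName "Provider2"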
  5. Apply the latest Windows Server hotfixes.

Info

This step is important. Make sure you apply all the latest updates.
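
One way to script the update pass, assuming the community PSWindowsUpdate module is acceptable in your environment (otherwise use sconfig or your normal patch-management tooling):

# Install the PSWindowsUpdate module, then apply all pending updates and reboot if required
Install-Module PSWindowsUpdate -Force
Install-WindowsUpdate -AcceptAll -AutoReboot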

  6. Join each server node into your domain (a minimal sketch follows below).
  7. Install the Hyper-V role on each node.
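
For the domain join in step 6, a minimal sketch to run on each node (the domain name is a placeholder):

# Join the node to the domain and restart (placeholder domain name)
Add-Computer -DomainName corp.example.com -Credential (Get-Credential) -Restart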

Log on to one of the server nodes as Administrator and run the following PowerShell commands. These commands will install the Hyper-V role on all four servers.

Install-WindowsFeature -Name Hyper-V -ComputerName HC-Node1 -IncludeManagementTools -Restart
Install-WindowsFeature -Name Hyper-V -ComputerName HC-Node2 -IncludeManagementTools -Restart
Install-WindowsFeature -Name Hyper-V -ComputerName HC-Node3 -IncludeManagementTools -Restart
Install-WindowsFeature -Name Hyper-V -ComputerName HC-Node4 -IncludeManagementTools -Restart
  8. Create a virtual switch on the Management adapter of each node.

In this example, only the Management adapter is configured, for brevity. The physical adapters were renamed to Management in the earlier step, which keeps the commands simple to read.

New-VMSwitch -Name Management -NetAdapterName Management -AllowManagementOS $true -ComputerName HC-Node1
New-VMSwitch -Name Management -NetAdapterName Management -AllowManagementOS $true -ComputerName HC-Node2
New-VMSwitch -Name Management -NetAdapterName Management -AllowManagementOS $true -ComputerName HC-Node3
New-VMSwitch -Name Management -NetAdapterName Management -AllowManagementOS $true -ComputerName HC-Node4

The screenshot below shows example output from one of these commands:

[Screenshot: New-VMSwitch example output]
  9. Create virtual switches on the RDMA-capable Provider adapters on each node.
New-VMSwitch -Name Provider1 -NetAdapterName Provider1 -ComputerName HC-Node1
New-VMSwitch -Name Provider1 -NetAdapterName Provider1 -ComputerName HC-Node2
New-VMSwitch -Name Provider1 -NetAdapterName Provider1 -ComputerName HC-Node3
New-VMSwitch -Name Provider1 -NetAdapterName Provider1 -ComputerName HC-Node4
New-VMSwitch -Name Provider2 -NetAdapterName Provider2 -ComputerName HC-Node1
New-VMSwitch -Name Provider2 -NetAdapterName Provider2 -ComputerName HC-Node2
New-VMSwitch -Name Provider2 -NetAdapterName Provider2 -ComputerName HC-Node3
New-VMSwitch -Name Provider2 -NetAdapterName Provider2 -ComputerName HC-Node4
  10. Enable RDMA on the virtual adapters that will be used for node-to-node communication and for VM access to storage.
Invoke-Command -ComputerName HC-Node1 -ScriptBlock { Enable-NetAdapterRdma "vEthernet (Provider1)" }
Invoke-Command -ComputerName HC-Node2 -ScriptBlock { Enable-NetAdapterRdma "vEthernet (Provider1)" }
Invoke-Command -ComputerName HC-Node3 -ScriptBlock { Enable-NetAdapterRdma "vEthernet (Provider1)" }
Invoke-Command -ComputerName HC-Node4 -ScriptBlock { Enable-NetAdapterRdma "vEthernet (Provider1)" }
Invoke-Command -ComputerName HC-Node1 -ScriptBlock { Enable-NetAdapterRdma "vEthernet (Provider2)" }
Invoke-Command -ComputerName HC-Node2 -ScriptBlock { Enable-NetAdapterRdma "vEthernet (Provider2)" }
Invoke-Command -ComputerName HC-Node3 -ScriptBlock { Enable-NetAdapterRdma "vEthernet (Provider2)" }
Invoke-Command -ComputerName HC-Node4 -ScriptBlock { Enable-NetAdapterRdma "vEthernet (Provider2)" }
  11. Run Get-NetAdapterRdma to verify the RDMA configuration of the network adapters.

Notice that all of the adapters except the Management adapter are now enabled for RDMA.

[Screenshot: Get-NetAdapterRdma output]
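
To check all four nodes in one pass, something along these lines can be used (a sketch using this example's node names):

# Verify RDMA is enabled on the Provider virtual adapters of every node
Invoke-Command -ComputerName HC-Node1, HC-Node2, HC-Node3, HC-Node4 -ScriptBlock {
    Get-NetAdapterRdma | Select-Object Name, Enabled
}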

File Services and Failover Clustering

  1. Install File-Services and Failover-Clustering on each node.
Install-WindowsFeature -Name File-Services, Failover-Clustering -IncludeManagementTools -ComputerName HC-Node1
Install-WindowsFeature -Name File-Services, Failover-Clustering -IncludeManagementTools -ComputerName HC-Node2
Install-WindowsFeature -Name File-Services, Failover-Clustering -IncludeManagementTools -ComputerName HC-Node3
Install-WindowsFeature -Name File-Services, Failover-Clustering -IncludeManagementTools -ComputerName HC-Node4

Configure Cluster

  1. Run Test-Cluster to validate that the hardware and software can support clustering.
Test-Cluster -Node HC-Node1,HC-Node2,HC-Node3,HC-Node4 -Include "Storage Spaces Direct",Inventory,Network,"System Configuration"

Test-Cluster will almost certainly report warnings. Running Validate Cluster from the Failover Cluster Manager GUI is a simpler way to validate cluster configurations.
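
Whichever way you run validation, read the detailed report. Test-Cluster writes an HTML validation report, normally to the current user's temp directory (an assumption about the default behavior); the most recent one can be located with a quick check like this:

# Find the most recently written validation report (default location assumed)
Get-ChildItem $env:TEMP -Filter *.htm | Sort-Object LastWriteTime | Select-Object -Last 1 -ExpandProperty FullName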

Create the Cluster

New-Cluster -Name HC-Cluster -Node HC-Node1,HC-Node2,HC-Node3,HC-Node4 -NoStorage -StaticAddress 172.101.4.5
Get-Cluster

The cluster is ready to configure Storage Spaces Direct.
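
Before enabling Storage Spaces Direct, a quick sanity check that all four nodes joined and that the cluster networks came up as expected (cluster and node names are from this example):

# Confirm every node is Up and review the cluster networks and their roles
Get-ClusterNode -Cluster HC-Cluster | Select-Object Name, State
Get-ClusterNetwork -Cluster HC-Cluster | Select-Object Name, Role, Address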

Configure Storage Spaces Direct

When we enable Storage Spaces Direct, the Software Storage Bus configuration is created and each server node will see the drives attached to every node in the cluster.

  1. Run Enable-ClusterS2D to create the Storage Spaces Direct configuration on the cluster.
Enable-ClusterS2D

As Storage Spaces Direct is built, all the available disks on each server node are discovered and claimed into the newly created Storage Spaces Direct pool.
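
Claiming the disks and building the pool can take several minutes; progress can be watched with the storage job cmdlets, for example:

# Watch the background storage jobs triggered by Enable-ClusterS2D
Get-StorageJob | Select-Object Name, JobState, PercentComplete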

If we run Get-PhysicalDisk on Node 1, we will see all the disks in the cluster.

[Screenshot: Get-PhysicalDisk output after enabling Storage Spaces Direct]

Now we have 49 disks in the Get-PhysicalDisk list: 48 data drives and one OS drive. This is correct, as the lab has 4 nodes with 12 drives each.

[Screenshot: disks listed by group]

If we display the Storage Pool, we see the new pool created automatically by Enable-ClusterS2D:

Get-StoragePool S2D* | FT FriendlyName, FaultDomainAwarenessDefault, OperationalStatus, HealthStatus -autosize
[Screenshot: Get-StoragePool output]

Next, we list the Storage Tiers created by Enable-ClusterS2D:

Get-StorageTier | FT FriendlyName, ResiliencySettingName, MediaType, PhysicalDiskRedundancy -autosize
[Screenshot: Get-StorageTier output]

We now have two Storage Tiers, one Parity and one two-way Mirror.

The New-Volume command will:

  • Create a Virtual Disk with the Tiering and other parameters specified
  • Partition and format the Virtual Disk
New-Volume -StoragePoolFriendlyName S2D* -FriendlyName MultiResilient -FileSystem CSVFS_ReFS -StorageTierFriendlyNames Performance, Capacity -StorageTierSizes 1000GB, 9000GB
[Screenshot: New-Volume output]

Notice we now have a new Virtual Disk called MultiResilient.
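
To confirm the virtual disk and the cluster shared volume behind it, something like this can be run from any node (names are from this example):

# Check the new virtual disk and the CSV it is exposed through
Get-VirtualDisk -FriendlyName MultiResilient | Select-Object FriendlyName, OperationalStatus, HealthStatus, Size
Get-ClusterSharedVolume -Cluster HC-Cluster | Select-Object Name, State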

Rename Virtual Disk Folder

Optionally you can rename the Virtual Disk folder for easier management.

The New-Volume command creates a local folder on every cluster node. Renaming this folder on any storage node presents the new folder name on every node.

[Screenshot: renaming the volume folder]
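
A sketch of the same rename in PowerShell, run from any node (the Volume1 folder name is the usual default and is an assumption here):

# Rename the CSV mount-point folder to match the volume's friendly name
Rename-Item -Path C:\ClusterStorage\Volume1 -NewName MultiResilient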

Each folder can be renamed to match its volume name so it is easy to identify in use. Volumes are typically created and tuned for specific functions such as database servers, bulk file storage, and so on.

Next Steps

Before configuring storage networking, test this environment for performance and errors. VMs can be created for this purpose, along with generating network traffic and monitoring every network interface.
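
As a starting point for that testing, a throwaway VM can be created directly on the new volume and made highly available (all names and sizes below are placeholders, and the folder name assumes the rename shown earlier):

# Create a test VM on the CSV and register it with the cluster
New-VM -Name TestVM01 -MemoryStartupBytes 4GB -Generation 2 -SwitchName Management -Path C:\ClusterStorage\MultiResilient -NewVHDPath C:\ClusterStorage\MultiResilient\TestVM01\disk0.vhdx -NewVHDSizeBytes 60GB
Add-ClusterVirtualMachineRole -VMName TestVM01 -Cluster HC-Cluster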

Once this environment runs without errors and is stable, move on to the third article, Configuring Storage Spaces Direct Step by Step: Part 3 Network Infrastructure.

11 thoughts on “Storage Spaces Direct Step by Step: Part 1 Core Cluster”

  1. Hi, in my opinion you did not understand Storage Spaces Direct. Storage Spaces Direct has nothing to do with Hyper-V. Why is the Hyper-V Role installed?


  2. yes manually. ” Please refer to other Argon Systems articles to perform these operations manually if additional customizations are desired”. because i want make 2 storage pool. any idea? thanks in advance.

  3. Robert Keith

    Storage Spaces Direct is a Hyper-Converged technology.

    This combines Virtual Machines and Virtual Storage combined in a cluster of generic servers.

    Running Virtual Machines do require Hyper-V

  4. Hi , can i team the nics and create a virtual switch and assign it to VMs for the storage spaces direct network ? Would that be an option . I have 2 Mellanox 25G cards in each server and 16 servers

  5. David Crawford

    Awesome explanation, I was able to follow it being a total nubie to S2D and clusters. It took several attempts as I got errors that were not expected or explained but eventually I got it working as I was creating this articles server environment entirely using VMs in Hyper-V so so cache was not possible (or easy to apply) as you cant mark any virtual hard drive as an SSD so it gets selected as a cache disk during S2D installation but the rest works fine. Good thing with Hyper-V is checkpoints – create b4 each step and if u stuff it up, just go back to the checkpoints for each server node, great time saver then once the process is perfected I can apply to real world hardware with same results or even clone each VM to a physical server (not recommended but works in a pinch).

    Its great to see people who post simple step by step steps to do stuff with some explanation so others can learn how to do this and learn as they go.

    I am sure there’s so much more to be discovered but this gave me a great start in my VM environment in order to learn how these things work so I can take it forward and develop my failover cluster for my business and my hosting clients (email, cloud, website etc.).
