
Configuring Storage Spaces Direct Step by Step: Part 1 Core Cluster

Introduction

This deployment guide replaces our very popular article that was based on Windows Server 2016 Technical Preview 5. We have updated the content for the release version of Windows Server 2016 and greatly expanded the technical detail. Here again we list the steps to deploy Storage Spaces Direct (S2D) on a four-node cluster of servers. We encourage you to perform and understand each deployment step so you are prepared to manage and support this configuration in production.

This article is the first of a four part series.

  1. Core Cluster (this article)
  2. Deployment
  3. Management
  4. Troubleshooting
Note: This guide uses the Enable-ClusterS2D cmdlet, which performs several operations with a single command: it creates the Storage Spaces storage pool, enables Storage Spaces Direct, creates the storage tiers, and so on. Please refer to other Argon Systems articles if you prefer to perform these operations manually for additional customization.

Prepare Your Environment

A typical hardware lab configuration is documented in the Argon Systems article: Storage Spaces Direct – Lab Environment Setup.

Check that the Disk Drives are Available

  1. Install Windows Server 2016 Datacenter onto each node.

The node names in this example are: HC-Node1, HC-Node2, HC-Node3, HC-Node4. Your environment will likely follow your own IT naming standards.

  2. Update the network addresses on your physical adapters.

Each server will require three network adapters:

  • Management – 1 GbE physical network port
  • Provider1 – 10, 25, 40 or 100 GbE physical port
  • Provider2 – 10, 25, 40 or 100 GbE physical port

The two provider ports are used for storage access, inter-server communication and client access.

Configure IP addresses for each adapter. The IP subnets may be the same or different depending on your requirements.
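
Static addresses can be assigned with New-NetIPAddress. The sketch below uses placeholder subnets and assumes the adapters have already been renamed as described in the next step; otherwise substitute the current adapter names reported by Get-NetAdapter.

# Example addresses only; repeat on each node with that node's own addresses
New-NetIPAddress -InterfaceAlias "Management" -IPAddress 172.101.4.11 -PrefixLength 24 -DefaultGateway 172.101.4.1
New-NetIPAddress -InterfaceAlias "StorageA" -IPAddress 172.16.1.11 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "StorageB" -IPAddress 172.16.2.11 -PrefixLength 24
Set-DnsClientServerAddress -InterfaceAlias "Management" -ServerAddresses 172.101.4.2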

  3. Rename the physical adapters

The 1 GbE port is used for management and Remote Desktop access.

Note: This port is not the out-of-band IPMI port, but one of the 1 GbE ports selected for this purpose.

In these examples, the adapters are named Management, StorageA and StorageB (StorageA and StorageB are the two provider ports).

The PowerShell command to rename an adapter is:

Rename-NetAdapter -Name "Ethernet 3" -NewName "Management"
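
The remaining adapters are renamed the same way. The "Ethernet 4" and "Ethernet 5" names below are assumptions; run Get-NetAdapter first to see which physical port carries which name on your hardware.

# Identify the physical ports, then rename the two high-speed adapters
Get-NetAdapter | Sort-Object Name | Format-Table Name, InterfaceDescription, LinkSpeed, Status
Rename-NetAdapter -Name "Ethernet 4" -NewName "StorageA"
Rename-NetAdapter -Name "Ethernet 5" -NewName "StorageB"
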
  4. Apply the latest Windows Server hotfixes
  5. Join each server node into your Active Directory domain.

NewCo and NewCo.local are used as example domain names.
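
A minimal sketch of the domain join using Add-Computer, with the example domain and a placeholder credential:

# Join this node to the example domain and restart
Add-Computer -DomainName "NewCo.local" -Credential "NewCo\Administrator" -Restart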

  6. Install the Hyper-V role on each node.

Log into one of the server nodes as Administrator and run the following PowerShell commands. These commands will update all four servers.

Install-WindowsFeature -Name Hyper-V -ComputerName HC-Node1 -IncludeManagementTools -Restart
Install-WindowsFeature -Name Hyper-V -ComputerName HC-Node2 -IncludeManagementTools -Restart
Install-WindowsFeature -Name Hyper-V -ComputerName HC-Node3 -IncludeManagementTools -Restart
Install-WindowsFeature -Name Hyper-V -ComputerName HC-Node4 -IncludeManagementTools -Restart
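
The same installation can also be written as a loop over the node names; this sketch is equivalent to the four commands above.

# Install the Hyper-V role on all four nodes
$nodes = "HC-Node1", "HC-Node2", "HC-Node3", "HC-Node4"
foreach ($node in $nodes) {
    Install-WindowsFeature -Name Hyper-V -ComputerName $node -IncludeManagementTools -Restart
}
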
  7. Create a virtual switch on the Management adapter of each node.

For brevity, only the Management adapter is configured in this example. The management adapters were renamed to Management earlier, which keeps the commands easy to read.

New-VMSwitch -Name Management -NetAdapterName Management -AllowManagementOS $true -ComputerName HC-Node1
New-VMSwitch -Name Management -NetAdapterName Management -AllowManagementOS $true -ComputerName HC-Node2
New-VMSwitch -Name Management -NetAdapterName Management -AllowManagementOS $true -ComputerName HC-Node3
New-VMSwitch -Name Management -NetAdapterName Management -AllowManagementOS $true -ComputerName HC-Node4

The screen below shows an example output of one of these commands:

New-VMSwitch Example

  8. Create virtual switches on the RDMA-capable storage adapters of each node.
New-VMSwitch -Name StorageA -NetAdapterName StorageA -ComputerName HC-Node1
New-VMSwitch -Name StorageA -NetAdapterName StorageA -ComputerName HC-Node2
New-VMSwitch -Name StorageA -NetAdapterName StorageA -ComputerName HC-Node3
New-VMSwitch -Name StorageA -NetAdapterName StorageA -ComputerName HC-Node4
New-VMSwitch -Name StorageB -NetAdapterName StorageB -ComputerName HC-Node1
New-VMSwitch -Name StorageB -NetAdapterName StorageB -ComputerName HC-Node2
New-VMSwitch -Name StorageB -NetAdapterName StorageB -ComputerName HC-Node3
New-VMSwitch -Name StorageB -NetAdapterName StorageB -ComputerName HC-Node4
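
Because -AllowManagementOS defaults to $true, each of these commands also creates a host virtual adapter named vEthernet (StorageA) or vEthernet (StorageB), which the next step relies on. A quick sketch to confirm this on one node:

# List the host virtual adapters created by New-VMSwitch
Get-VMNetworkAdapter -ManagementOS -ComputerName HC-Node1 | Format-Table Name, SwitchName
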
  9. Enable RDMA on the virtual adapters that will be used for node-to-node communication and VM storage access.
Invoke-Command -ComputerName HC-Node1 -ScriptBlock { Enable-NetAdapterRdma "vEthernet (StorageA)" }
Invoke-Command -ComputerName HC-Node2 -ScriptBlock { Enable-NetAdapterRdma "vEthernet (StorageA)" }
Invoke-Command -ComputerName HC-Node3 -ScriptBlock { Enable-NetAdapterRdma "vEthernet (StorageA)" }
Invoke-Command -ComputerName HC-Node4 -ScriptBlock { Enable-NetAdapterRdma "vEthernet (StorageA)" }
Invoke-Command -ComputerName HC-Node1 -ScriptBlock { Enable-NetAdapterRdma "vEthernet (StorageB)" }
Invoke-Command -ComputerName HC-Node2 -ScriptBlock { Enable-NetAdapterRdma "vEthernet (StorageB)" }
Invoke-Command -ComputerName HC-Node3 -ScriptBlock { Enable-NetAdapterRdma "vEthernet (StorageB)" }
Invoke-Command -ComputerName HC-Node4 -ScriptBlock { Enable-NetAdapterRdma "vEthernet (StorageB)" }
  10. Run Get-NetAdapterRdma to verify the configuration of RDMA on the network adapters.

Notice that all adapters except the Management adapters are enabled for RDMA.

Get-NetAdapterRdma
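
To check all four nodes at once, the same verification can be run remotely; a convenience sketch:

# Report the RDMA state of every adapter on every node
Invoke-Command -ComputerName HC-Node1, HC-Node2, HC-Node3, HC-Node4 -ScriptBlock {
    Get-NetAdapterRdma | Format-Table Name, Enabled
}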

Install Feature Prerequisites

  1. Install File-Services and Failover-Clustering on each node.
Install-WindowsFeature -Name File-Services, Failover-Clustering -IncludeManagementTools -ComputerName HC-Node1
Install-WindowsFeature -Name File-Services, Failover-Clustering -IncludeManagementTools -ComputerName HC-Node2
Install-WindowsFeature -Name File-Services, Failover-Clustering -IncludeManagementTools -ComputerName HC-Node3
Install-WindowsFeature -Name File-Services, Failover-Clustering -IncludeManagementTools -ComputerName HC-Node4

Configure Cluster

  1. Run Test-Cluster to validate the hardware and software can support clustering.
Test-Cluster -Node HC-Node1,HC-Node2,HC-Node3,HC-Node4 -Include "Storage Spaces Direct",Inventory,Network,"System Configuration"

There will almost certainly be warnings reported by the Test-Cluster command. Running the Validate Cluster wizard from the Failover Cluster Manager GUI is a simpler way to troubleshoot cluster configuration issues.
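
Test-Cluster also writes an HTML validation report and typically returns the report file as its output; a sketch for capturing and opening it (assuming that return value):

# Capture the validation report and open it in the default browser
$report = Test-Cluster -Node HC-Node1,HC-Node2,HC-Node3,HC-Node4 -Include "Storage Spaces Direct",Inventory,Network,"System Configuration"
Invoke-Item $report.FullName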

Create the Cluster

New-Cluster -Name HC-Cluster -Node HC-Node1,HC-Node2,HC-Node3,HC-Node4 -NoStorage -StaticAddress 172.101.4.5

Get-Cluster
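
Beyond Get-Cluster, a couple of quick checks confirm that all nodes joined and the cluster networks came up; a sketch (output will vary by environment):

# Confirm node membership and network roles on the new cluster
Get-ClusterNode -Cluster HC-Cluster | Format-Table Name, State
Get-ClusterNetwork -Cluster HC-Cluster | Format-Table Name, Role, Address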

The cluster is ready to configure Storage Spaces Direct.

Configure Storage Spaces Direct

If we list the physical disks on the first node, we should see 12 disks.

Get-PhysicalDisk output
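
The listing behind that screenshot can be produced remotely with Invoke-Command; the properties selected below are simply the ones most useful at this stage.

# List the local disks on the first node before Storage Spaces Direct is enabled
Invoke-Command -ComputerName HC-Node1 -ScriptBlock {
    Get-PhysicalDisk | Sort-Object DeviceId | Format-Table DeviceId, FriendlyName, MediaType, Size, CanPool
}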

When we enable Storage Spaces Direct, the Software Storage Bus is configured and each server node will see all of the drives attached to every node in the cluster.

  1. Run Enable-ClusterS2D to create the Storage Spaces Direct configuration on the cluster.

Enable-ClusterS2D

As Storage Spaces Direct is built, all available disks on each server node are discovered and claimed into the newly created Storage Spaces Direct pool.
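
Enable-ClusterS2D prompts for confirmation and picks a pool name automatically. If you want to control the pool name or run unattended, optional parameters can be supplied; the pool name below is only an example.

# Optional form: name the pool explicitly and suppress the confirmation prompt
Enable-ClusterS2D -PoolFriendlyName "S2D-Pool" -Confirm:$false -Verbose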

If we run Get-PhysicalDisk on Node 1, we will see all the disks in the cluster.

Get-PhysicalDisk with S2D

Now we have 49 disks in the Get-PhysicalDisk list: 48 data drives and one OS drive. This is correct, as the lab has 4 nodes with 12 drives each.
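
A quick way to confirm the count is to ask the pool how many physical disks it claimed; a one-line sketch:

# Count the physical disks claimed by the Storage Spaces Direct pool (expected: 48)
(Get-StoragePool S2D* | Get-PhysicalDisk).Count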

List Disks by Group

If we display the Storage Pool, we see the new pool created automatically by Enable-ClusterS2D:

Get-StoragePool S2D* | FT FriendlyName, FaultDomainAwarenessDefault, OperationalStatus, HealthStatus -autosize

Get-StoragePool from S2D

Next, list the Storage Tiers created by Enable-ClusterS2D:

Get-StorageTier | FT FriendlyName, ResiliencySettingName, MediaType, PhysicalDiskRedundancy -autosize

Get-StorageTier from S2D

We now have two Storage Tiers, one Parity and one two-way Mirror.

The New-Volume command will:

  • Create a Virtual Disk with the Tiering and other parameters specified
  • Partition and format the Virtual Disk
New-Volume -StoragePoolFriendlyName S2D* -FriendlyName MultiResilient -FileSystem CSVFS_REFS -StorageTierFriendlyNames Performance, Capacity -StorageTierSizes 1000GB, 9000GB

New-Volume

Notice we now have a new Virtual Disk called MultiResilient.
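
The new virtual disk and its Cluster Shared Volume can also be verified from PowerShell; a sketch:

# Confirm the virtual disk is healthy and list the Cluster Shared Volumes
Get-VirtualDisk -FriendlyName MultiResilient | Format-Table FriendlyName, Size, HealthStatus, OperationalStatus
Get-ClusterSharedVolume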

Rename Virtual Disk Folder

Optionally you can rename the Virtual Disk folder for easier management.

The New-Volume command creates a local folder on every cluster node.
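
The volume mounts under C:\ClusterStorage with a default folder name such as Volume1 (the exact name depends on how many volumes already exist). Renaming that folder on any one node is reflected on the other nodes; a sketch assuming the default name:

# Rename the CSV mount-point folder to match the virtual disk name
Rename-Item -Path "C:\ClusterStorage\Volume1" -NewName "MultiResilient"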

Rename Folder

Next Steps: Configuring Storage Spaces Direct – Step by Step: Part 2 Deployment
