Configuring Storage Spaces Direct – Step by Step

This article is a simplified walkthrough for configuring Storage Spaces Direct on a four-node cluster running Windows Server 2016 Technical Preview 5.  TP5 has a new feature set which builds the storage pool and storage tiers automatically.

In a typical Storage Spaces deployment, the administrator selects the specific disks to add to the pool.  In this demonstration the Enable-ClusterS2D command will scan all the nodes in the cluster, load every disk whose CanPool status equals True into a pool, and then create both Mirror and Parity storage tiers automatically.

This was described by Claus Joergensen of Microsoft in his article Automatic Configuration in Storage Spaces Direct TP5.

Install Network Infrastructure

The hardware lab configuration is documented in the article Microsoft Storage Spaces Direct – Lab Setup.

Check that the disk drives are available

Each of the four nodes has Windows Server 2016 TP5 installed.  The node names are HC-Node1, HC-Node2, HC-Node3, and HC-Node4.

Each node is joined to the domain.  This lab domain is NewCo.

Note:  Log into each node as the domain administrator to execute each of the following commands.

Install Hyper-V on each node

Install-WindowsFeature -Name Hyper-V -ComputerName HC-Node1 -IncludeManagementTools -Restart

Install-WindowsFeature -Name Hyper-V -ComputerName HC-Node2 -IncludeManagementTools -Restart

Install-WindowsFeature -Name Hyper-V -ComputerName HC-Node3 -IncludeManagementTools -Restart

Install-WindowsFeature -Name Hyper-V -ComputerName HC-Node4 -IncludeManagementTools -Restart
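
The four commands above can also be written as a loop over the node names; a minimal, equivalent sketch:

foreach ($node in "HC-Node1","HC-Node2","HC-Node3","HC-Node4") {
    # Install the Hyper-V role and management tools on each node, rebooting as needed
    Install-WindowsFeature -Name Hyper-V -ComputerName $node -IncludeManagementTools -Restart
}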

Create the virtual switch and host virtual network adapter on the management adapter of each node.  In this example only one network adapter is configured, for brevity.  The management network adapters were renamed to “Management” to make this example simple to read.

New-VMSwitch -Name Management -NetAdapterName Management -AllowManagementOS $true -ComputerName HC-Node1

New-VMSwitch -Name Management -NetAdapterName Management -AllowManagementOS $true -ComputerName HC-Node2

New-VMSwitch -Name Management -NetAdapterName Management -AllowManagementOS $true -ComputerName HC-Node3

New-VMSwitch -Name Management -NetAdapterName Management -AllowManagementOS $true -ComputerName HC-Node4

The screen below shows the output of one of these commands:

[Screenshot: New-VMSwitch output]

Create the virtual switches on the Mellanox adapters in the same way.  The Mellanox adapters are 50GbE each and support RDMA via the RoCE protocol, which greatly reduces latency for traffic between VMs and storage.

New-VMSwitch -Name StorageA -NetAdapterName StorageA -ComputerName HC-Node1

New-VMSwitch -Name StorageA -NetAdapterName StorageA -ComputerName HC-Node2

New-VMSwitch -Name StorageA -NetAdapterName StorageA -ComputerName HC-Node3

New-VMSwitch -Name StorageA -NetAdapterName StorageA -ComputerName HC-Node4

New-VMSwitch -Name StorageB -NetAdapterName StorageB -ComputerName HC-Node1

New-VMSwitch -Name StorageB -NetAdapterName StorageB -ComputerName HC-Node2

New-VMSwitch -Name StorageB -NetAdapterName StorageB -ComputerName HC-Node3

New-VMSwitch -Name StorageB -NetAdapterName StorageB -ComputerName HC-Node4
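
As with the Hyper-V installation, these eight commands can be collapsed into a pair of loops; an equivalent sketch:

foreach ($node in "HC-Node1","HC-Node2","HC-Node3","HC-Node4") {
    foreach ($nic in "StorageA","StorageB") {
        # Create a virtual switch named after the storage adapter it binds to
        New-VMSwitch -Name $nic -NetAdapterName $nic -ComputerName $node
    }
}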

Enable RDMA on the adapters that will be used for node-to-node communications where VMs will access storage.

Invoke-Command -ComputerName HC-Node1 -ScriptBlock { Enable-NetAdapterRdma "vEthernet (StorageA)" }

Invoke-Command -ComputerName HC-Node2 -ScriptBlock { Enable-NetAdapterRdma "vEthernet (StorageA)" }

Invoke-Command -ComputerName HC-Node3 -ScriptBlock { Enable-NetAdapterRdma "vEthernet (StorageA)" }

Invoke-Command -ComputerName HC-Node4 -ScriptBlock { Enable-NetAdapterRdma "vEthernet (StorageA)" }

Invoke-Command -ComputerName HC-Node1 -ScriptBlock { Enable-NetAdapterRdma "vEthernet (StorageB)" }

Invoke-Command -ComputerName HC-Node2 -ScriptBlock { Enable-NetAdapterRdma "vEthernet (StorageB)" }

Invoke-Command -ComputerName HC-Node3 -ScriptBlock { Enable-NetAdapterRdma "vEthernet (StorageB)" }

Invoke-Command -ComputerName HC-Node4 -ScriptBlock { Enable-NetAdapterRdma "vEthernet (StorageB)" }

Run Get-NetAdapterRdma to verify the configuration of RDMA on the network adapters.

Notice that all of the network adapters except Management are now enabled for RDMA.

[Screenshot: Get-NetAdapterRdma output]
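
To verify all four nodes in a single pass, the same check can be run remotely; a sketch, assuming PowerShell remoting is enabled on the nodes:

Invoke-Command -ComputerName HC-Node1,HC-Node2,HC-Node3,HC-Node4 -ScriptBlock {
    Get-NetAdapterRdma | Select-Object Name, Enabled
} | Format-Table PSComputerName, Name, Enabled -AutoSize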

Install Feature Prerequisites

Install File-Services and Failover-Clustering on each node.

Install-WindowsFeature -Name File-Services, Failover-Clustering -IncludeManagementTools -ComputerName HC-Node1

Install-WindowsFeature -Name File-Services, Failover-Clustering -IncludeManagementTools -ComputerName HC-Node2

Install-WindowsFeature -Name File-Services, Failover-Clustering -IncludeManagementTools -ComputerName HC-Node3

Install-WindowsFeature -Name File-Services, Failover-Clustering -IncludeManagementTools -ComputerName HC-Node4

Configure Cluster

Run Test-Cluster to validate that the hardware and software can support clustering.

Test-Cluster -Node HC-Node1,HC-Node2,HC-Node3,HC-Node4 -Include "Storage Spaces Direct",Inventory,Network,"System Configuration"

Test-Cluster will almost certainly report warnings.  Running “Validate Cluster” from the Failover Cluster Manager GUI is a simpler way to troubleshoot cluster configuration issues.
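
Test-Cluster also writes a detailed HTML validation report. A sketch for capturing and opening it, assuming Test-Cluster returns the report file object:

$report = Test-Cluster -Node HC-Node1,HC-Node2,HC-Node3,HC-Node4 -Include "Storage Spaces Direct",Inventory,Network,"System Configuration"
# Open the .htm validation report in the default browser
Invoke-Item $report.FullName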

Create the cluster

New-Cluster -Name HC-Cluster -Node HC-Node1,HC-Node2,HC-Node3,HC-Node4 -NoStorage -StaticAddress 172.101.4.5

Get-Cluster
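
To confirm that all four nodes joined the cluster, list them as well; a quick check:

Get-ClusterNode | Format-Table Name, State -AutoSize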

The cluster is ready to configure Storage Spaces.

Configure Storage Spaces Direct

If we list the physical disks on the first node with Get-PhysicalDisk, we should see 12 disks.

[Screenshot: Get-PhysicalDisk output]
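
To preview which of these disks are eligible for the pool, filter on the CanPool property mentioned earlier; a quick check on each node:

Get-PhysicalDisk | Where-Object CanPool -Eq $true |
    Format-Table FriendlyName, MediaType, Size, CanPool -AutoSize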

When we enable Storage Spaces Direct, the Software Storage Bus is configured and each server node will see the drives attached to every node in the cluster.

Run Enable-ClusterS2D to create the Storage Spaces Direct configuration on the cluster.

Enable-ClusterS2D

[Screenshot: Enable-ClusterS2D output]

As Storage Spaces Direct is built, all of the available disks on each server node are discovered and claimed into the newly created Storage Spaces Direct pool.

If we run Get-PhysicalDisk on Node 1, we will see all the disks in the cluster.

[Screenshot: Get-PhysicalDisk output after enabling Storage Spaces Direct]

Now we have 49 disks in the Get-PhysicalDisk list: 48 data drives and 1 OS drive.  This is correct, as the lab has 4 nodes with 12 drives each.

[Screenshot: disks listed by group]
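
As a quick sanity check, the claimed disks can be counted and grouped by media type; a sketch:

# Expect 49 in this lab: 48 pooled data drives plus the local OS drive
(Get-PhysicalDisk).Count
Get-PhysicalDisk | Group-Object MediaType | Format-Table Name, Count -AutoSize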

If we display the Storage Pool, we see the new pool created automatically by Enable-ClusterS2D:

Get-StoragePool S2D* | FT FriendlyName, FaultDomainAwarenessDefault, OperationalStatus, HealthStatus -AutoSize

[Screenshot: Get-StoragePool output]

Next, display the Storage Tiers created by Enable-ClusterS2D:

Get-StorageTier | FT FriendlyName, ResiliencySettingName, MediaType, PhysicalDiskRedundancy -AutoSize

[Screenshot: Get-StorageTier output]

We now have two Storage Tiers, one Parity and one 2-way Mirror.

The New-Volume command will:

  1. Create a Virtual Disk with the tiering and other parameters specified
  2. Partition and format the Virtual Disk
  3. Add the volume to Cluster Shared Volumes, since the CSVFS_REFS file system is specified

New-Volume -StoragePoolFriendlyName S2D* -FriendlyName MultiResilient -FileSystem CSVFS_REFS -StorageTierFriendlyName Performance, Capacity -StorageTierSizes 1000GB, 9000GB

[Screenshot: New-Volume output]

Notice we now have a new Virtual Disk called MultiResilient.
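
The new virtual disk can also be confirmed from PowerShell; a quick check:

Get-VirtualDisk -FriendlyName MultiResilient | FT FriendlyName, Size, OperationalStatus, HealthStatus -AutoSize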

Rename Virtual Disk Folder

Optionally you can rename the Virtual Disk folder for easier management.

The New-Volume command creates a mount-point folder under C:\ClusterStorage on every cluster node.

[Screenshot: renaming the volume folder]
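
The volume mounts with a default folder name such as Volume1; renaming the mount-point folder on one node renames it for the whole cluster. A sketch, assuming the new volume mounted as Volume1:

Rename-Item -Path C:\ClusterStorage\Volume1 -NewName MultiResilient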

Additional References

Microsoft’s Storage Spaces Overview provides an introduction to the software-defined storage solution for Windows Server 2016. For a technical overview, refer to Storage Spaces Direct in Windows Server 2016 Technical Preview. Technical details can be found in the article Hyper-converged solution using Storage Spaces Direct in Windows Server 2016.

Comments

  1. james martin

    New-Volume : Could not find storage tier with the friendly name ‘
    At line:1 char:1
    + New-Volume -StoragePoolFriendlyName S2D* -FriendlyName MultiResilient …
    + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo : InvalidArgument: (:) [New-Volume], RuntimeException
    + FullyQualifiedErrorId : InvalidArgument,New-Volume

    • Robert Keith

      Hello James,

      I cannot tell from the error message, but this looks like a typo.

      Can you make sure your Storage Pool was created and has the default name?

      Run the following PowerShell command and check that the pool name starts with “S2D”, which is the default pool name created by Enable-ClusterS2D:

      Get-StoragePool

  2. Sandeep Goli

    A volume with provisioning type Thin is not possible on an S2D pool today. When I try to create a volume with the provisioning type set to Thin, I get an error like “this subsystem does not allow the creation of virtual disks with the specified provisioning type”. Because of this I am badly limited on space. I want to do 3-way mirroring, which limits my space but gives the best performance. If I could get a thin volume, I could manage with dynamically expanding VHDXs. Any idea if we can get some workaround for this?

  3. ROK

    Hi Sandeep,

    You should be able to thinly provision VHDX files on S2D volumes. The S2D volume’s physical size and provisioning type do not limit Hyper-V storage management or thin provisioning.
