Introduction
This how-to guide lists the steps to deploy Storage Spaces Direct (S2D) on a four-node cluster of servers. Readers are encouraged to perform and understand each step of the deployment in preparation for managing and supporting this environment in production.
This article is the first of a four-part series.
- Core Cluster [this article]
- Troubleshooting Storage Clusters
- Configuring Storage Network Infrastructures
- Managing Storage Clusters
This guide uses the Enable-ClusterS2D command, which performs several operations with a single command: creating the Storage Spaces storage pool, enabling Storage Spaces Direct, creating storage tiers, and so on. Refer to other Argon Systems articles to perform these operations manually if additional customization is desired.
Preparation
A typical hardware lab configuration is documented in the Argon Systems article: Storage Spaces Direct – Lab Environment Setup.
Perform each of these steps on every cluster node.
- Install Windows Server 2016 Datacenter onto each node.
The node names in this example are: HC-Node1, HC-Node2, HC-Node3, HC-Node4. Your environment will likely follow your own IT naming standards.
- Follow the instructions in our technical article to create a Windows 2016 USB installation drive.
- Check that the physical disks are available to Windows.
If we list the physical disks on any node, we should see 12 disks, as shown in the screenshot below.
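One way to list them is with Get-PhysicalDisk (the property selection below is just one readable view):

# List the local physical disks; each node should show its 12 data drives plus the OS drive
Get-PhysicalDisk | Sort-Object FriendlyName |
    Format-Table FriendlyName, MediaType, CanPool, Size -AutoSize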
- Update the network addresses on your physical adapters.
Each server will require three network adapters:
- Management – 1 GbE physical network port
- Provider1 and Provider2 – 10, 25, 40 or 100 GbE physical port used for storage access, inter-server communications and client access
Configure IP addresses for each adapter. The IP subnets may be the same or different depending on your requirements.
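As a sketch, the addresses could be assigned with New-NetIPAddress. The interface names match those assigned in the next step, and the subnets and addresses shown here are placeholders for your own addressing plan:

# Example only - substitute your own interface names, IP addresses, prefix lengths and gateway
New-NetIPAddress -InterfaceAlias "Management" -IPAddress 172.101.4.11 -PrefixLength 24 -DefaultGateway 172.101.4.1
New-NetIPAddress -InterfaceAlias "Provider1" -IPAddress 10.10.1.11 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "Provider2" -IPAddress 10.10.2.11 -PrefixLength 24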
- Rename the physical adapters
The 1 GbE port is used for management and Remote Desktop access.
In these examples, the adapter names are Management, Provider1, and Provider2.
The PowerShell cmdlet to rename a network adapter is Rename-NetAdapter.
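For example, assuming the adapter currently reports the default name Ethernet (run Get-NetAdapter to see the actual names on your hardware):

# Rename the 1 GbE management port; "Ethernet" is a placeholder for the adapter's current name
Rename-NetAdapter -Name "Ethernet" -NewName "Management"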
- Apply the latest Windows Server hotfixes
- Join each server node into your Active Directory domain.
- Install Hyper-V role on each node.
Log on to one of the server nodes as Administrator and run the following PowerShell commands. These commands install the Hyper-V role on all four servers and restart each one.
Install-WindowsFeature -Name Hyper-V -ComputerName HC-Node1 -IncludeManagementTools -Restart
Install-WindowsFeature -Name Hyper-V -ComputerName HC-Node2 -IncludeManagementTools -Restart
Install-WindowsFeature -Name Hyper-V -ComputerName HC-Node3 -IncludeManagementTools -Restart
Install-WindowsFeature -Name Hyper-V -ComputerName HC-Node4 -IncludeManagementTools -Restart
- Configure the Virtual Network Adapters on each node.
In this example, only one network adapter is configured for brevity. The management network adapters were renamed to Management earlier to keep the example simple to read.
New-VMSwitch -Name Management -NetAdapterName Management -AllowManagementOS $true -ComputerName HC-Node1
New-VMSwitch -Name Management -NetAdapterName Management -AllowManagementOS $true -ComputerName HC-Node2
New-VMSwitch -Name Management -NetAdapterName Management -AllowManagementOS $true -ComputerName HC-Node3
New-VMSwitch -Name Management -NetAdapterName Management -AllowManagementOS $true -ComputerName HC-Node4
The screenshot below shows example output from one of these commands:
- Create the Virtual Network Adapters on the RDMA-capable adapters on each node.
New-VMSwitch -Name Provider1 -NetAdapterName Provider1 -ComputerName HC-Node1
New-VMSwitch -Name Provider1 -NetAdapterName Provider1 -ComputerName HC-Node2
New-VMSwitch -Name Provider1 -NetAdapterName Provider1 -ComputerName HC-Node3
New-VMSwitch -Name Provider1 -NetAdapterName Provider1 -ComputerName HC-Node4
New-VMSwitch -Name Provider2 -NetAdapterName Provider2 -ComputerName HC-Node1
New-VMSwitch -Name Provider2 -NetAdapterName Provider2 -ComputerName HC-Node2
New-VMSwitch -Name Provider2 -NetAdapterName Provider2 -ComputerName HC-Node3
New-VMSwitch -Name Provider2 -NetAdapterName Provider2 -ComputerName HC-Node4
- Enable RDMA on the adapters that will be used for node-to-node communication and for VM access to storage.
Invoke-Command -ComputerName HC-Node1 -ScriptBlock { Enable-NetAdapterRdma "vEthernet (Provider1)" }
Invoke-Command -ComputerName HC-Node2 -ScriptBlock { Enable-NetAdapterRdma "vEthernet (Provider1)" }
Invoke-Command -ComputerName HC-Node3 -ScriptBlock { Enable-NetAdapterRdma "vEthernet (Provider1)" }
Invoke-Command -ComputerName HC-Node4 -ScriptBlock { Enable-NetAdapterRdma "vEthernet (Provider1)" }
Invoke-Command -ComputerName HC-Node1 -ScriptBlock { Enable-NetAdapterRdma "vEthernet (Provider2)" }
Invoke-Command -ComputerName HC-Node2 -ScriptBlock { Enable-NetAdapterRdma "vEthernet (Provider2)" }
Invoke-Command -ComputerName HC-Node3 -ScriptBlock { Enable-NetAdapterRdma "vEthernet (Provider2)" }
Invoke-Command -ComputerName HC-Node4 -ScriptBlock { Enable-NetAdapterRdma "vEthernet (Provider2)" }
- Run Get-NetAdapterRdma to verify the RDMA configuration on the network adapters.
Notice that all adapters except the Management adapters are enabled for RDMA.
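One way to check all four nodes from a single session (a quick sketch reusing the node names above):

# Query the RDMA state on every node; the Provider vEthernet adapters should show Enabled as True
Invoke-Command -ComputerName HC-Node1,HC-Node2,HC-Node3,HC-Node4 -ScriptBlock {
    Get-NetAdapterRdma | Format-Table Name, Enabled -AutoSize
}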
File Services and Failover Clustering
- Install the File-Services and Failover-Clustering features on each node.
Install-WindowsFeature -Name File-Services, Failover-Clustering -IncludeManagementTools -ComputerName HC-Node1
Install-WindowsFeature -Name File-Services, Failover-Clustering -IncludeManagementTools -ComputerName HC-Node2
Install-WindowsFeature -Name File-Services, Failover-Clustering -IncludeManagementTools -ComputerName HC-Node3
Install-WindowsFeature -Name File-Services, Failover-Clustering -IncludeManagementTools -ComputerName HC-Node4
Configure Cluster
- Run Test-Cluster to validate the hardware and software can support clustering.
Test-Cluster -Node HC-Node1,HC-Node2,HC-Node3,HC-Node4 -Include "Storage Spaces Direct",Inventory,Network,"System Configuration"
There will almost certainly be warnings reported by the Test-Cluster command. Running the Validate Cluster wizard from the Failover Cluster Manager GUI is a simpler way to troubleshoot cluster configuration issues.
Create the Cluster
New-Cluster -Name HC-Cluster -Node HC-Node1,HC-Node2,HC-Node3,HC-Node4 -NoStorage -StaticAddress 172.101.4.5
The cluster is ready to configure Storage Spaces Direct.
Configure Storage Spaces Direct
When we enable Storage Spaces Direct, the Storage Bus is configured and each server node will see all of the drives attached to every node in the cluster.
- Run Enable-ClusterS2D to create the Storage Spaces Direct configuration on the cluster.
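In its simplest form, the command is run without parameters from any cluster node:

# Enable Storage Spaces Direct; eligible disks on every node are discovered and claimed into a new pool
Enable-ClusterS2D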
As Storage Spaces Direct is built, all the available disks on each server node are discovered and claimed into the newly created Storage Spaces Direct pool.
If we run Get-PhysicalDisk on Node 1, we will see all the disks in the cluster.
Now we have 49 disks in the Get-PhysicalDisk list: 48 data drives and one OS drive. This is correct, as the lab has four nodes with 12 drives each.
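A quick way to confirm the count (one possible check):

# Count the physical disks now visible from this node
(Get-PhysicalDisk | Measure-Object).Count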
If we display the Storage Pool, we see the new pool created automatically by Enable-ClusterS2D:
Get-StoragePool S2D* | FT FriendlyName, FaultDomainAwarenessDefault, OperationalStatus, HealthStatus -AutoSize
When we list the Storage Tiers created by Enable-ClusterS2D:
Get-StorageTier | FT FriendlyName, ResiliencySettingName, MediaType, PhysicalDiskRedundancy -AutoSize
We now have two Storage Tiers, one Parity and one two-way Mirror.
The New-Volume command will:
- Create a Virtual Disk with the tiering and other parameters specified
- Partition and format the Virtual Disk
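A sketch of the command is below. The tier names (Performance and Capacity) and the tier sizes are assumptions; substitute the FriendlyName values reported by Get-StorageTier above and sizes that fit your pool:

# Create a tiered, multi-resilient CSV volume named MultiResilient
# Tier names and sizes are examples only
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "MultiResilient" `
    -FileSystem CSVFS_ReFS `
    -StorageTierFriendlyNames Performance, Capacity `
    -StorageTierSizes 1TB, 4TB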
Notice we now have a new Virtual Disk called MultiResilient
Rename Virtual Disk Folder
Optionally you can rename the Virtual Disk folder for easier management.
The New-Volume command creates a folder under C:\ClusterStorage on every cluster node. Renaming this folder on any node presents the new folder name on every node.
Each folder can be renamed to match its volume name so it is easily identified by its purpose. Volumes are typically created and tuned for specific functions such as database servers, bulk file storage, and so on.
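For example, to rename the folder for the MultiResilient volume (the Volume1 folder name is an assumption; confirm which VolumeN folder maps to your virtual disk first):

# Rename the cluster shared volume folder so it matches the volume name
Rename-Item -Path "C:\ClusterStorage\Volume1" -NewName "MultiResilient"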
Next Steps
Before configuring storage networking, test this environment for performance and errors. VMs can be created for this purpose, generating network traffic while you monitor every network interface.
Once this environment runs without errors and is stable, move on to the third article, Configuring Storage Spaces Direct Step by Step: Part 3 Network Infrastructure.
Hi, in my opinion you did not understand Storage Spaces Direct. Storage Spaces Direct has nothing to do with Hyper-V. Why is the Hyper-V Role installed?
Storage Spaces Direct is a hyper-converged technology: it combines virtual machines and virtual storage in a cluster of generic servers. Running virtual machines requires Hyper-V.
How do I enable Cluster S2D manually?
ediey, “manually”? Did you follow the guide?
Yes, manually. "Please refer to other Argon Systems articles to perform these operations manually if additional customizations are desired." Because I want to make 2 storage pools. Any idea? Thanks in advance.
Hi Ediey,
Do look at article:
https://argonsys.com/microsoft-cloud/articles/create-storage-pool-storage-spaces-standard/
Storage Pools are collections of disks. The default commands will grab every available disk.
To create more than one Storage Pool, you will need to select disks specifically and add them to each Storage Pool as appropriate.
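A rough sketch of selecting specific disks for an additional pool (the size filter, subsystem name pattern, and pool name are all placeholders):

# Pick the poolable disks you want in the second pool - filter by size, model, etc.
$disks = Get-PhysicalDisk -CanPool $true | Where-Object Size -gt 1TB

# Create a new pool from just those disks; adjust the subsystem name for your environment
New-StoragePool -FriendlyName "Pool2" `
    -StorageSubSystemFriendlyName "Clustered Windows Storage*" `
    -PhysicalDisks $disks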
Hi, can I team the NICs, create a virtual switch, and assign it to VMs for the Storage Spaces Direct network? Would that be an option? I have 2 Mellanox 25G cards in each server and 16 servers.
Awesome explanation; I was able to follow it as a total newbie to S2D and clusters. It took several attempts, as I got errors that were not expected or explained, but eventually I got it working. I was building this article's server environment entirely with VMs in Hyper-V, so a cache was not possible (or easy to apply), because you can't mark a virtual hard drive as an SSD so that it gets selected as a cache disk during S2D installation, but the rest works fine. A good thing about Hyper-V is checkpoints: create one before each step, and if you mess something up, just roll back to the checkpoint on each server node. It's a great time saver. Once the process is perfected, I can apply it to real-world hardware with the same results, or even clone each VM to a physical server (not recommended, but it works in a pinch).
It's great to see people post simple step-by-step instructions with some explanation so others can learn how to do this as they go.
I am sure there's much more to be discovered, but this gave me a great start in my VM environment for learning how these things work, so I can take it forward and develop a failover cluster for my business and my hosting clients (email, cloud, websites, etc.).