Last year Argon Systems published a very popular article, Configuring Storage Spaces Direct – Step by Step. That article described the process of building a Storage Spaces Direct hyper-converged cluster on physical server hardware.
We recently published a related article, How to Build Windows Storage Spaces Direct on a Virtual Lab. That was part one; it documented how to create a virtual lab for Storage Spaces Direct using a common Windows PC. This article is part two, and covers how to configure the virtual lab.
Virtual Lab Overview
This virtual environment differs from a physical environment in a few ways:
- RDMA support requires special network hardware which is not available in virtual environments
- The virtual SCSI disks on the virtual storage servers require additional processing to change the media type to SSD and/or HDD
- Naturally, performance will suffer, so virtual environments are not appropriate for actual Proof-of-Concept projects
The virtual lab should have four storage server VMs available and running. The configuration commands will refer to the lab environment pictured below.
The following screenshot should resemble your environment and include Active Directory and four storage server VMs. Admin1 and Win2016-Core-Template are not used in this article.
Configure Storage Server Prerequisites
In these examples, the storage servers are named Storage1 through Storage4; your environment may vary. Each virtual lab VM should have three network adapters: Management, Provider1, and Provider2.
If these are not configured, create the network adapters and configure IP addresses for each adapter. The Provider1 and Provider2 adapters may be on the same IP subnet and must share the same Hyper-V VM Switch. The Management adapter should be configured on a different subnet.
The PowerShell command to rename a virtual adapter is:
Rename-NetAdapter -Name "Ethernet 3" -NewName "Management"
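If you are not sure which "Ethernet" adapter maps to which network, list them first. A quick check (adapter names and ordering will vary in your lab):
Get-NetAdapter | Sort-Object Name | Format-Table Name, InterfaceDescription, MacAddress, Status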
Set the IP addresses for the Management, Provider1, and Provider2 adapters.
Run the SCONFIG command to update the IP configurations (a PowerShell alternative is sketched after the list below).
The Management adapter parameters:
- IPAddress – Should be on same subnet as your PC lab environment (may be DHCP supplied)
- Default GW – Should be your external ISP router with Internet access
- DNS Server Address – Should be the Management IP address of the AD1 VM
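As an alternative to SCONFIG, the same settings can be applied from PowerShell. A minimal sketch with example values only; the 10.0.0.x and 10.10.0.x addresses below are assumptions, so substitute the addressing of your own lab:
# Static address and DNS on the Management adapter (example values)
New-NetIPAddress -InterfaceAlias "Management" -IPAddress 10.0.0.11 -PrefixLength 24 -DefaultGateway 10.0.0.254
Set-DnsClientServerAddress -InterfaceAlias "Management" -ServerAddresses 10.0.0.1
# Provider adapters need only an address, no gateway (example values)
New-NetIPAddress -InterfaceAlias "Provider1" -IPAddress 10.10.0.11 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "Provider2" -IPAddress 10.10.0.21 -PrefixLength 24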
Test network connectivity
Before continuing, test that each of the storage server network adapters can communicate with its peers:
- Ping each server's Management IP address
- Ping each server's Provider1 and Provider2 IP addresses
- Ping each server by DNS name
- Ping AD1 by DNS name
- Ping an Internet service to verify external connectivity
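A quick way to script the name-resolution checks from one machine; a sketch assuming the server names used in this lab:
# Ping each node and AD1 by DNS name, then an Internet address
foreach ($node in "Storage1","Storage2","Storage3","Storage4","AD1") {
    "$node reachable: $(Test-Connection -ComputerName $node -Count 1 -Quiet)"
}
Test-Connection -ComputerName 8.8.8.8 -Count 1 -Quiet   # Internet connectivity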
Join the servers to the Active Directory domain.
The AD1 VM is the domain controller for this article and the domain name used is Contoso.local.
- Log in to each server VM from Hyper-V Connect
- Run the SCONFIG command
- Rename the server (if required)
- Join each server to the Active Directory domain (a PowerShell alternative is sketched below)
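A sketch of the PowerShell equivalent, assuming the Contoso.local domain from this lab; run it locally on each server and supply domain credentials when prompted:
# Rename the server and join the domain in one step, then reboot
Add-Computer -DomainName "Contoso.local" -NewName "Storage1" -Credential (Get-Credential) -Restart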
Install Hyper-V role
Install the Hyper-V role on each storage server. This requires a reboot of each server. Run the following commands from any domain-joined computer (such as AD1, since the storage servers will reboot during these commands):
Install-WindowsFeature -Name Hyper-V -ComputerName Storage1 -IncludeManagementTools -Restart
Install-WindowsFeature -Name Hyper-V -ComputerName Storage2 -IncludeManagementTools -Restart
Install-WindowsFeature -Name Hyper-V -ComputerName Storage3 -IncludeManagementTools -Restart
Install-WindowsFeature -Name Hyper-V -ComputerName Storage4 -IncludeManagementTools -Restart
Install File Services and Failover Cluster roles
Install the File Services and Failover Clustering features on each storage server. Run the following commands from any domain-joined computer:
Install-WindowsFeature -Name File-Services, Failover-Clustering -IncludeManagementTools -ComputerName Storage1
Install-WindowsFeature -Name File-Services, Failover-Clustering -IncludeManagementTools -ComputerName Storage2
Install-WindowsFeature -Name File-Services, Failover-Clustering -IncludeManagementTools -ComputerName Storage3
Install-WindowsFeature -Name File-Services, Failover-Clustering -IncludeManagementTools -ComputerName Storage4
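Equivalently, the commands for all four servers can be collapsed into a loop; a compact sketch using the same server names:
foreach ($node in "Storage1","Storage2","Storage3","Storage4") {
    Install-WindowsFeature -Name File-Services, Failover-Clustering -IncludeManagementTools -ComputerName $node
}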
Create the Cluster
Run Test-Cluster to validate that the hardware and software environment can support clustering. Run this command from one of the storage server VMs with the Failover Clustering feature installed.
Test-Cluster -Node Storage1, Storage2, Storage3, Storage4 -Include Inventory,Network,"System Configuration"
There will almost certainly be warnings reported after the Test-Cluster command. Running the "Validate Cluster" wizard from the Failover Cluster Manager GUI is a simpler way to troubleshoot cluster configurations.
If the cluster test completes successfully, create the new cluster:
New-Cluster -Name HC-Cluster -Node Storage1, Storage2, Storage3, Storage4 -NoStorage -StaticAddress 10.0.0.101
The Get-Cluster command will show that the new hyper-converged cluster is active.
Quickly validate the cluster health by running a few PowerShell commands.
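For example, using the standard FailoverClusters cmdlets (a sketch; your output will reflect your own lab):
Get-ClusterNode                                           # All four nodes should report Up
Get-ClusterNetwork | Format-Table Name, State, Address    # Cluster networks should report Up
Get-ClusterNetworkInterface                               # Per-node network interface health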
The cluster network interfaces are up and healthy. If we query for the cluster storage configuration, no cluster disks are listed. This is expected, since we set the -NoStorage flag when we created the cluster. If we now display the list of physical disks on one of the storage servers, we see more than the original six virtual SCSI disks.
The cluster has now collected the disk drives on all four storage servers. Each server now displays what looks like 24 local disk drives.
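To see this for yourself, run a quick check from any one node; the same pattern is used again later in this article:
Get-PhysicalDisk | Sort-Object PhysicalLocation | Format-Table FriendlyName, MediaType, PhysicalLocation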
Configure Advanced Network Parameters on Provider Network
This section configures the more advanced network parameters for the Provider network including:
- Network QoS
- Data Center Bridging (DCB)
Since the Hyper-V virtual switches and network adapters do not support DCB or QoS, the following commands have no effect in the virtual lab. They are included here to match the physical deployment process and to avoid confusion.
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
Install-WindowsFeature "Data-Center-Bridging"
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7
Enable-NetAdapterQos -InterfaceAlias "Provider1", "Provider2"
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 80 -Algorithm ETS
Configure Storage Spaces Direct
In simpler environments, simply running the Enable-ClusterS2D command will automatically:
- Enable Storage Spaces Direct services
- Create the Software Storage Bus
- Create a Storage Spaces Pool
- Scan the cluster for all eligible disks for pooling
- Add these disks to the pool
- Create the Capacity and Performance storage tiers
In this example, we are going to perform each step manually. This will remove some of the mystery.
Check for any existing pool data.
Since this is a virtual lab, it is unlikely that any legacy Storage Spaces data exists, but if any disk has previously been used in a pool, that configuration should be removed.
Run the Get-StoragePool PowerShell command to list the existing pools.
Primordial pools can be safely ignored. In this instance, one primordial pool contains all the physical disk drives other than the OS disk, and the other contains only the OS disk. When the storage pool is created later in this article and all of the unallocated disks are added to the new pool, these disks will still appear in the primordial pool as well.
To prove the last statement, we can display the contents of both primordial pools; it is important to understand how Storage Spaces operates here.
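One way to display them is with the -IsPrimordial switch of Get-StoragePool:
Get-StoragePool -IsPrimordial $true | Get-PhysicalDisk | Format-Table FriendlyName, MediaType, PhysicalLocation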
If data remained from prior lab experiments, this would have shown up as additional storage pools.
If old storage pools are displayed, follow the instructions in the KB article Clearing Disks on Microsoft Storage Spaces Direct.
Enable Storage Spaces Direct
Run the following PowerShell command on any of the storage cluster nodes:
Enable-ClusterS2D -CacheState Disabled -AutoConfig:0 -SkipEligibilityChecks -Confirm:$false
This command will enable the Storage Spaces Direct services, query every node and every eligible disk, and run a battery of tests to verify that the storage infrastructure will provide reliable storage services. No storage configuration has been performed yet other than enabling the Storage Spaces Direct software services.
Create Storage Pool and Add Disks
In this step, we will manually create a storage pool and add disks.
This section may seem more complex than necessary. The reason is that when the virtual storage servers were created, the MediaType property of their disk drives was set to Unspecified. When displaying the list of physical disks with the command:
Get-PhysicalDisk | ? CanPool -eq True | ft FriendlyName, CanPool, MediaType, PhysicalLocation
We see that the FriendlyName and MediaType are not useful and can be changed.
Run the following PowerShell commands:
$Disks = Get-PhysicalDisk | ? PhysicalLocation -like "*LUN 3"
New-StoragePool -StorageSubSystemFriendlyName *Cluster* -FriendlyName S2DPool -ProvisioningTypeDefault Fixed -PhysicalDisk $Disks
These commands create a pool named S2DPool and add all the disks attached at virtual SCSI LUN address 3.
The above command shows the new Storage Spaces pool S2DPool. Piping this pool into the Get-PhysicalDisk command shows there are now four disks in S2DPool.
Just to sanity check this, run the following command.
Get-StoragePool -FriendlyName S2DPool | Get-PhysicalDisk | ft PhysicalLocation
Next, add the disks from LUN addresses 4, 5, and 6:
$Disks = Get-PhysicalDisk | ? PhysicalLocation -like "*LUN 4"
Add-PhysicalDisk -PhysicalDisks $Disks -StoragePoolFriendlyName S2DPool
$Disks = Get-PhysicalDisk | ? PhysicalLocation -like "*LUN 5"
Add-PhysicalDisk -PhysicalDisks $Disks -StoragePoolFriendlyName S2DPool
$Disks = Get-PhysicalDisk | ? PhysicalLocation -like "*LUN 6"
Add-PhysicalDisk -PhysicalDisks $Disks -StoragePoolFriendlyName S2DPool
Sanity check again; there should now be 16 drives in the S2DPool.
Get-StoragePool -FriendlyName S2DPool | Get-PhysicalDisk | sort PhysicalLocation | ft PhysicalLocation
Next we will set the MediaType of the 16 drives in the pool to HDD.
Get-StoragePool -FriendlyName S2DPool | Get-PhysicalDisk | Set-PhysicalDisk -MediaType HDD
The 16 drives in the S2DPool are now HDD; the remaining eight disks still show a MediaType of Unspecified.
Now add the Unspecified disks to the pool:
$Disks = Get-PhysicalDisk | ? PhysicalLocation -like "*LUN 1"
Add-PhysicalDisk -PhysicalDisks $Disks -StoragePoolFriendlyName S2DPool
$Disks = Get-PhysicalDisk | ? PhysicalLocation -like "*LUN 2"
Add-PhysicalDisk -PhysicalDisks $Disks -StoragePoolFriendlyName S2DPool
And set these disk types to SSD disk drives.
Get-StoragePool -FriendlyName S2DPool | Get-PhysicalDisk | ? MediaType -eq "Unspecified" | Set-PhysicalDisk -MediaType SSD
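A quick tally by media type confirms the counts; expect 16 HDD and 8 SSD:
Get-StoragePool -FriendlyName S2DPool | Get-PhysicalDisk | Group-Object MediaType | Format-Table Name, Count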
Displaying the disks within the new S2DPool we now see:
Get-StoragePool -FriendlyName S2DPool | Get-PhysicalDisk | ft FriendlyName, CanPool, MediaType, PhysicalLocation
The S2DPool now contains two SSD disks and four HDD disks on each storage server.
Complete the Storage Spaces Direct Setup
Complete the Storage Spaces Direct setup.
Set-ClusterS2D -CacheState Enabled -Verbose
Update-StorageProviderCache -DiscoveryLevel Full
Storage Spaces Direct is configured and ready to create storage volumes.
Create Storage Volumes
The storage cluster is now configured as a four-node cluster. This environment will support all the Storage Spaces resiliency settings for Virtual Disk volumes.
Resiliency can be configured as:
- Mirrored – Data is synchronously replicated across two or three nodes. This resiliency has a speed advantage, since reads are distributed across multiple nodes.
- Parity – Data is striped across multiple disks. Parity stores data more capacity-efficiently, since mirroring stores multiple full copies of the data while parity stores only the data plus parity information. Parity has reduced read performance, since data is not distributed across as many nodes and disks.
- Tiered – Volumes can be created using both Mirrored and Parity resiliency tiers often providing the speed advantage of mirrored volumes and the capacity advantage of parity.
Create Mirrored Volumes
The following commands will create both two-way and three-way mirrored volumes.
New-Volume -FriendlyName "Mirror-2-Vol1" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2DPool -Size 1GB -ResiliencySettingName Mirror New-Volume -FriendlyName "Mirror-3-Vol1" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2DPool -Size 1GB -ResiliencySettingName Mirror -PhysicalDiskRedundancy 3
Create Parity Volumes
The following commands will create both Single Parity and Dual Parity volumes.
New-Volume -FriendlyName "Parity-1-Vol1" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2DPool -Size 1GB -ResiliencySettingName Parity New-Volume -FriendlyName "Parity-2-Vol1" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2DPool -Size 1GB -ResiliencySettingName Parity -PhysicalDiskRedundancy 2
We can display the new volumes (Virtual Disks).
Get-VirtualDisk | ft FriendlyName, ResiliencySettingName, PhysicalDiskRedundancy
Get-Volume | ? FileSystem -eq CSVFS
Create Tiered Volume
The following commands will create a tiered Mirror and Parity volume.
New-StorageTier -MediaType HDD -StoragePoolFriendlyName S2DPool -FriendlyName HDD_Tier
New-StorageTier -MediaType SSD -StoragePoolFriendlyName S2DPool -FriendlyName SSD_Tier
Get-StorageTier | Select FriendlyName, ResiliencySettingName, PhysicalDiskRedundancy
New-Volume -FriendlyName "TieredVol1" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2DPool -StorageTierFriendlyNames “SSD_Tier”, “HDD_Tier” -StorageTierSizes 1GB, 5GB
There are now five volumes with each of the major resiliency settings.
Get-VirtualDisk | sort FriendlyName
Rename Volume Folders
Each virtual disk created is referenced by a folder on each of the storage cluster nodes located at C:\ClusterStorage with a generic folder name.
You should consider renaming each folder to match the actual volume name.
Access to the volumes over SMB is referenced by the physical path. When configuring Hyper-V to access storage volumes, a path like C:\ClusterStorage\Mirror-2-Vol1 is easier to manage and less error-prone than a generic path like C:\ClusterStorage\Volume1\disk.vhdx.
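A minimal sketch of the rename; the Volume1-to-Mirror-2-Vol1 mapping below is an assumption, so verify which folder belongs to which volume first:
# Show which CSV maps to which C:\ClusterStorage folder
Get-ClusterSharedVolume | Select-Object Name, @{n='Folder';e={$_.SharedVolumeInfo.FriendlyVolumeName}}
# Rename the mount-point folder to match the volume name (hypothetical mapping)
Rename-Item -Path "C:\ClusterStorage\Volume1" -NewName "Mirror-2-Vol1"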
The lab configuration now consists of a four-node storage cluster with multiple storage volumes. Looking at it from the Failover Cluster Manager gives a better sense of the environment.
The Disks panel shows:
- Five different configurations of Cluster Shared Volumes
- The resiliency of each volume (Mirror-2-Vol1, for example, has eight columns)
- The file system overhead on the 8GB volume, about 730MB
- Each volume is managed by a specific node. Windows clustering load balances storage access automatically or can be managed by cluster policies.
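Much of the same information is also visible from PowerShell; a sketch using standard virtual disk properties:
Get-VirtualDisk | Format-Table FriendlyName, ResiliencySettingName, NumberOfColumns, Size, FootprintOnPool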
The Nodes panel shows the status and details of every cluster node, the network connections and health status for each node, and the storage volumes with the cluster node that is actively hosting each volume.
Once the storage cluster is built, Windows Server Manager becomes a powerful tool to manage Storage Spaces.
Server Manager provides a single pane of glass to:
- Monitor the health of Storage Pools, Volumes and individual disks
- Monitor and manage physical disks
- Manage storage capacity for Cluster Shared Volumes as well as VHD volumes
- Manage storage performance
- Deploy additional storage volumes and network shares
The virtual lab is now a base infrastructure that can be built out into a complete hybrid cloud environment. In this KB article, we configured a relatively raw virtual lab into a robust storage environment.
Going further, the projects possible using this lab environment include:
- Deploying virtual machines running on this hyper-converged infrastructure
- Testing storage features including deduplication, storage replication, and numerous other volume configurations
- Testing the various storage failure scenarios, including failing disks, crashing servers, and swapping disk locations
- Adding and removing storage cluster nodes
- Configuring Scale-Out File Server services
- Configuring NFS and iSCSI over Storage Spaces clusters
- Testing Cluster Shared Volume (CSV) access across multiple clients
- Configuring hybrid cloud scenarios between your lab and Azure
- And practically any other private and hybrid cloud configuration
The possibilities are practically endless: containers, Nano Server, Desired State Configuration development, DevOps, and more.
Please do provide feedback on errors or any other comments below.