This article provides instructions for clearing out old storage pools, virtual disks, and metadata left behind on hard drives in Storage Spaces Direct infrastructures.
Introduction
When a disk drive that was pulled from a server configured with a Storage Spaces Storage Pool is inserted into another server, the drive carries remnants of the prior Storage Pool. When displaying existing storage pools, the pool from the prior server shows up as an “Unhealthy” storage pool and the disk displays as “not poolable”. The disk drives may show any of several error states in OperationalStatus or HealthStatus.
The best practice when decommissioning a Storage Spaces system is to remove the Virtual Disks first, then the Storage Pool. This releases the disk drives and removes the Storage Pool metadata. When removing single disks and leaving the Storage Pool intact, you simply retire the disk. This best practice is not always possible, especially in lab situations. When you try to clear a disk that still carries Storage Pool metadata, Storage Spaces protects the old Storage Pool by blocking operations such as initializing and formatting the disk.
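For reference, retiring a single disk while leaving the pool intact can be done with Set-PhysicalDisk. This is a minimal sketch; the friendly name is a placeholder for your own disk:

# Mark a single disk as Retired so Storage Spaces stops allocating new data to it
Set-PhysicalDisk -FriendlyName "PhysicalDisk3" -Usage Retired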
When displaying the physical disks, you might see unhealthy disks such as this:
In the screenshot above, the physical disk list shows 48 disks on a server which only has 12 disks. The server had been part of a 4-node Storage Spaces Direct cluster and was reimaged without touching the 12 disks, so Storage Spaces Direct still retains the metadata of the prior Storage Pool with 48 disks.
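To produce a similar listing yourself, Get-PhysicalDisk exposes the properties this article relies on; this is just one way to display them:

# Show the status columns relevant to cleaning up leftover pool metadata
Get-PhysicalDisk | Sort-Object FriendlyName | Format-Table FriendlyName, SerialNumber, CanPool, OperationalStatus, HealthStatus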
Sequence of Steps
Step 1 – Remove any Virtual Disks
Display any existing Virtual Disks. Virtual Disks which are carried over from a different server will display with an error state.
List the existing Virtual Disks
Get-VirtualDisk
Remove the Virtual Disk
Remove-VirtualDisk -FriendlyName <name>
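If every virtual disk on the system is a leftover, a hedged one-liner removes them all in one pass; only use this when none of the virtual disks hold data you need:

# Remove every virtual disk without prompting - only safe when all of them are stale
Get-VirtualDisk | Remove-VirtualDisk -Confirm:$false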
Step 2 – Remove the Storage Pool
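The removal itself is done with Remove-StoragePool. A minimal sketch, assuming the leftover pool's friendly name has been noted from Get-StoragePool (the name below is a placeholder):

# Show the non-primordial pools so you can identify the stale one
Get-StoragePool -IsPrimordial $false
# Remove the leftover pool; "OldPool" is a placeholder friendly name
Remove-StoragePool -FriendlyName "OldPool"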
After removing the pool, list the physical disks again. They may now be in a healthy state, or they may still be unusable.
Step 3 – Reset the Disks
Run a physical disk reset to clear the OperationalStatus of the disks. This removes the legacy pool information from the disks.
Get-PhysicalDisk | ? OperationalStatus -eq "Unrecognized Metadata" | Reset-PhysicalDisk
Step 4 – Initialize Disks
If the OperationalStatus is Healthy, the system is ready to use.
If there are disk drives in an unknown state, the disks can now be initialized.
The following commands set the disks to read/write, bring them online, then clear them back to a raw state:
Get-PhysicalDisk -CanPool $false | Get-Disk | Set-Disk -IsReadOnly $false
Get-PhysicalDisk -CanPool $false | Get-Disk | Set-Disk -IsOffline $false
Get-PhysicalDisk -CanPool $false | Get-Disk | Clear-Disk -RemoveData -Confirm:$false
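As a quick check, the cleaned disks should now report as poolable (assuming S2D has not already auto-claimed them):

# Disks that are ready to be claimed by a new Storage Pool
Get-PhysicalDisk -CanPool $true | Format-Table FriendlyName, CanPool, OperationalStatus, HealthStatus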
Thanks for this post. Very helpful 🙂
I was wondering if you knew how to overcome a PermissionDenied error?
We took down the lab domain and tried to format the S2D disks and found we couldn’t.
Thoughts?
This generally happens if you have disks with existing Storage Pools, etc., but do not have the S2D software installed and active.
If you mount the disks to a system with an active S2D environment, you should be able to follow the procedure documented in this article.
I was trying to find out how to release disks from S2D so that they appear in Windows Disk Management, but none of the blogs cover this. I discovered the command below on the Microsoft site.
Syntax
Set-ClusterStorageSpacesDirectDisk -CanBeClaimed:$False -PhysicalDiskIds serial_number
Example
PS C:\> Set-ClusterStorageSpacesDirectDisk -CanBeClaimed:$False -PhysicalDiskIds "55CD2E404B75A3FC","50014EE05950DD7C"
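To find the IDs to pass to -PhysicalDiskIds, the serial numbers reported by Get-PhysicalDisk are one place to look (the example above appears to use serial numbers):

# List serial numbers and unique IDs to identify which disks to release
Get-PhysicalDisk | Format-Table FriendlyName, SerialNumber, UniqueId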