As soon as Windows Server 2016 became generally available (GA), I reinstalled one of the HPE Apollo servers (ProLiant XL170r Gen9) available in our lab to try out Storage Spaces Direct (S2D) with the latest bits, starting with one node and ready to add additional nodes to the cluster later. I anticipated a quick installation, as Microsoft's storage team had claimed at Ignite that setting up S2D could be done within 15 minutes.
Beforehand, I had already prepared the HPE Smart Array P440 controller to work in HBA Mode (aka Pass-Through Mode). In HBA mode, all physical drives are directly available to the operating system and hardware RAID is disabled, which is a requirement for S2D. This can be configured in the HPE Smart Storage Administrator.
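If you prefer scripting over the GUI, the same change can be made with the HPE Smart Storage Administrator CLI (ssacli). The slot number below is an assumption; check your own controller's slot first:

```shell
# List all controllers and their slot numbers to find the P440
ssacli ctrl all show config

# Enable HBA (pass-through) mode on the controller in slot 1 (assumed slot);
# a reboot is required before the drives are presented directly to the OS
ssacli ctrl slot=1 modify hbamode=on forced
```

Note that enabling HBA mode disables all hardware RAID functionality on that controller, so make sure the OS disk is not behind it or is handled separately.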
After all, Software Defined Storage (SDS) means that all hardware intelligence should be disabled so that Windows Server 2016 can create a Storage Pool out of a number of physical disks (HDD, SSD, NVMe). This configuration has 4 x 1.2 TB SAS HDDs and 2 x 480 GB SATA SSDs. One disk was used for the OS, leaving five disks for Storage Spaces Direct.
Before building the S2D cluster, I checked if the disks were “poolable” by looking at the CanPool value which should be True.
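A quick way to run this check from PowerShell looks something like this; BusType is worth including in the output, since S2D only accepts SAS, SATA and NVMe:

```powershell
# Show each physical disk's eligibility for a storage pool;
# CanPool must be True for every disk intended for S2D
Get-PhysicalDisk |
    Sort-Object Size |
    Format-Table FriendlyName, BusType, MediaType, CanPool, Size -AutoSize
```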
At first sight, this all looked fine, so on to the cluster creation.
$ClusterAddress = "10.1.35.110"
New-Cluster -Name HVHC01 -Node HVHC01N01 -StaticAddress $ClusterAddress -NoStorage
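With the cluster formed, the step that actually claims the physical disks is enabling S2D on it:

```powershell
# Enable Storage Spaces Direct on the new cluster; this is the step
# that scans the nodes for disks with a supported bus type
Enable-ClusterStorageSpacesDirect
```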
To my big surprise, enabling S2D with Enable-ClusterStorageSpacesDirect failed with "no disks with supported bus types found to be used for S2D". This error normally occurs when you try to build an S2D cluster with virtual machines, but clearly some types of storage controllers trigger it as well.