Our S2D CSV volumes hold only 20 TB of data, but Failover Cluster Manager shows 40 TB consumed. Any idea why? This is a Windows Server 2016 cluster with 4 nodes, each with SSD cache drives and 10 SAS capacity drives. FCM reports about 140 TB of total disk and 40 TB used, yet the actual data is only 20 TB. The capacity tier has no space left, and although the performance tier still has free space, we cannot create any new CSV volume there. Any idea what is causing this?
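One possible explanation (an assumption on my part, not confirmed from this setup): if the volumes were created as two-way mirrors, every 1 TB of data occupies 2 TB on the pool, so 20 TB of data would show up as 40 TB consumed. The resiliency setting and the on-pool footprint of each volume can be checked with something like this (a sketch; the cmdlets are standard Storage-module cmdlets, but the output depends on the cluster):

```powershell
# Show each virtual disk's logical size vs. the space it actually
# occupies on the pool (FootprintOnPool includes mirror/parity copies).
Get-VirtualDisk |
    Format-Table FriendlyName, ResiliencySettingName,
        @{n='Size(TB)';      e={[math]::Round($_.Size/1TB, 2)}},
        @{n='Footprint(TB)'; e={[math]::Round($_.FootprintOnPool/1TB, 2)}}
```

If Footprint is roughly twice (or three times) Size, the "missing" space is the resiliency overhead rather than a fault.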
4 SSD x 745 GB ≈ 3 TB for caching per node
10 SAS x 3.46 TB ≈ 34.6 TB of capacity per node
4 nodes x ~37.6 TB ≈ 150 TB raw in total (Failover Cluster Manager reports 146 TB).
Failover Cluster Manager shows 146 TB, but in practice the cluster does not even give us 70 TB usable across the capacity and performance tiers combined. And when we check the data already stored, the remaining consumable space is even less than expected. It makes no sense to run an S2D solution for only about 30 TB.
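For what it's worth, a large raw-vs-usable gap is expected with mirror resiliency: a two-way mirror yields roughly half of raw capacity and a three-way mirror roughly a third, before any reserve space. Under that assumption, ~146 TB raw works out to ~73 TB (two-way) or ~48 TB (three-way), which is in the range being observed. The pool totals can be compared like this (a sketch; run on any cluster node):

```powershell
# Compare total pool capacity with what is already allocated.
Get-StoragePool -IsPrimordial $false |
    Format-Table FriendlyName,
        @{n='Size(TB)';      e={[math]::Round($_.Size/1TB, 2)}},
        @{n='Allocated(TB)'; e={[math]::Round($_.AllocatedSize/1TB, 2)}}
```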
Is there any way to remove the performance tier and assign all of that space to the capacity tier? As laymen we expected a hyperconverged solution, but frankly it is not delivering what was promised. Is there any way to convert or reconfigure things so that the full ~70 TB is usable? The client is not happy with the storage side. Please share your inputs.
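If collapsing everything into the capacity tier is acceptable, something along these lines should work on Windows Server 2016 (a sketch only: the tier names "Performance" and "Capacity" are the defaults created by Enable-ClusterS2D and may differ in your pool, and any volume built on a tier must be migrated off and deleted before that tier definition can be removed):

```powershell
# List the tier definitions in the pool (names, media type, resiliency).
Get-StorageTier | Format-Table FriendlyName, MediaType, ResiliencySettingName

# Remove the performance tier definition (only after the volumes using it
# are gone). "Performance" is the default name, which may differ here.
Remove-StorageTier -FriendlyName Performance

# Create a new CSV volume that draws only from the capacity tier.
# The pool name, volume name, and 10TB size are placeholders.
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName CapacityVol `
    -FileSystem CSVFS_ReFS `
    -StorageTierFriendlyNames Capacity -StorageTierSizes 10TB
```

Note that dropping the performance (SSD) tier trades away tiered write performance for capacity, so it is worth confirming the client accepts that before reconfiguring.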