SOFS / S2D terrible performance.

CaptainGiblets asked:
Hi, I have a weird issue with performance on Storage Spaces Direct.

I have a blade server that connects over 10 GbE (not RDMA, although I am about to upgrade to RDMA-capable switches and NICs) to my SOFS cluster, which is running S2D on Windows Server 2016.
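
Since the storage traffic rides on SMB3 here, a quick way to see what the blade is actually negotiating (standard in-box cmdlets, run on the blade as the SMB client) is something along these lines:

# Do the NICs report RDMA capability, and which interfaces has SMB Multichannel actually selected?
Get-NetAdapterRdma
Get-SmbClientNetworkInterface
Get-SmbMultichannelConnection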

I have 2 nodes, each with 2x 1.6 TB NVMe drives (Samsung PM1725) for cache, 4x SSDs (Samsung MZILS1T6HEJH0D3) for the performance tier, and then some 1 TB HDDs (Seagate ST91000640SS) for capacity.

My storage pool configuration is as follows.

ObjectId                          : {1}\\S2D-S2DC\root/Microsoft/Windows/Storage/Providers_v2\SPACES_StoragePool.ObjectId="{63c4e450-07ce-4654-bde5-6ca15a8b91fb}:SP:{e5d5357d-cca1-4a66-8384-5ea8f19fd8f8}"
PassThroughClass                  :
PassThroughIds                    :
PassThroughNamespace              :
PassThroughServer                 :
UniqueId                          : {e5d5357d-cca1-4a66-8384-5ea8f19fd8f8}
AllocatedSize                     : 43606802956288
ClearOnDeallocate                 : False
EnclosureAwareDefault             : False
FaultDomainAwarenessDefault       : StorageScaleUnit
FriendlyName                      : S2D on S2D-S2DC
HealthStatus                      : Healthy
IsClustered                       : True
IsPowerProtected                  : True
IsPrimordial                      : False
IsReadOnly                        : False
LogicalSectorSize                 : 4096
MediaTypeDefault                  : Unspecified
Name                              :
OperationalStatus                 : OK
OtherOperationalStatusDescription :
OtherUsageDescription             : Reserved for S2D
PhysicalSectorSize                : 4096
ProvisioningTypeDefault           : Fixed
ReadOnlyReason                    : None
RepairPolicy                      : Parallel
ResiliencySettingNameDefault      : Mirror
RetireMissingPhysicalDisks        : Never
Size                              : 44899588112384
SupportedProvisioningTypes        : Fixed
SupportsDeduplication             : True
ThinProvisioningAlertThresholds   : {70}
Usage                             : Other
Version                           : Windows Server 2016
WriteCacheSizeDefault             : 0
WriteCacheSizeMax                 : 18446744073709551614
WriteCacheSizeMin                 : 0
PSComputerName                    :

I then created 2 CSVs; however, I am only focusing on CSV1 at the moment, which looks like this:

ObjectId                          : {1}\\S2D-S2DC\root/Microsoft/Windows/Storage/Providers_v2\SPACES_VirtualDisk.ObjectId="{63c4e450-07ce-4654-bde5-6ca15a8b91fb}:VD:{e5d5357d-cca1-4a66-8384-5ea8f19fd8f8}{84827fcc-e258-4c7a-8c98-e00f5d0f858a}"
PassThroughClass                  :
PassThroughIds                    :
PassThroughNamespace              :
PassThroughServer                 :
UniqueId                          : CC7F828458E27A4C8C98E00F5D0F858A
Access                            : Read/Write
AllocatedSize                     : 11166914969600
AllocationUnitSize                :
ColumnIsolation                   :
DetachedReason                    : None
FaultDomainAwareness              :
FootprintOnPool                   : 22333829939200
FriendlyName                      : CSV1
HealthStatus                      : Healthy
Interleave                        :
IsDeduplicationEnabled            : False
IsEnclosureAware                  :
IsManualAttach                    : True
IsSnapshot                        : False
IsTiered                          : True
LogicalSectorSize                 : 4096
MediaType                         :
Name                              :
NameFormat                        :
NumberOfAvailableCopies           :
NumberOfColumns                   :
NumberOfDataCopies                :
NumberOfGroups                    :
OperationalStatus                 : OK
OtherOperationalStatusDescription :
OtherUsageDescription             :
ParityLayout                      :
PhysicalDiskRedundancy            :
PhysicalSectorSize                : 4096
ProvisioningType                  :
ReadCacheSize                     : 0
RequestNoSinglePointOfFailure     : False
ResiliencySettingName             :
Size                              : 11166914969600
UniqueIdFormat                    : Vendor Specific
UniqueIdFormatDescription         :
Usage                             : Other
WriteCacheSize                    : 0
PSComputerName                    :

Storage tiers -

PassThroughClass       :
PassThroughIds         :
PassThroughNamespace   :
PassThroughServer      :
UniqueId               : {a6dbb25a-9e46-4894-815d-673887748790}
AllocatedSize          : 8053063680000
AllocationUnitSize     : 268435456
ColumnIsolation        : PhysicalDisk
Description            :
FaultDomainAwareness   : StorageScaleUnit
FootprintOnPool        : 16106127360000
FriendlyName           : CSV1_Capacity
Interleave             : 262144
MediaType              : HDD
NumberOfColumns        : 8
NumberOfDataCopies     : 2
NumberOfGroups         : 1
ParityLayout           :
PhysicalDiskRedundancy : 1
ProvisioningType       : Fixed
ResiliencySettingName  : Mirror
Size                   : 8053063680000
Usage                  : Data

PassThroughClass       :
PassThroughIds         :
PassThroughNamespace   :
PassThroughServer      :
UniqueId               : {9fcda862-16dc-4af2-8469-0538f543be20}
AllocatedSize          : 3113851289600
AllocationUnitSize     : 536870912
ColumnIsolation        : PhysicalDisk
Description            :
FaultDomainAwareness   : StorageScaleUnit
FootprintOnPool        : 6227702579200
FriendlyName           : CSV1_Performance
Interleave             : 262144
MediaType              : SSD
NumberOfColumns        : 4
NumberOfDataCopies     : 2
NumberOfGroups         : 1
ParityLayout           :
PhysicalDiskRedundancy : 1
ProvisioningType       : Fixed
ResiliencySettingName  : Mirror
Size                   : 3113851289600
Usage                  : Data
PSComputerName         :

Each non-cache disk is bound to a cache disk, roughly 1:10 overall.
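
As a sanity check on that binding, the disk roles can be read straight off one of the S2D nodes; cache-bound devices should report Usage = Journal:

# NVMe cache devices should show Usage = Journal; SSD/HDD capacity devices show their own media type.
Get-PhysicalDisk |
    Sort-Object MediaType |
    Format-Table FriendlyName, MediaType, Usage, Size, HealthStatus -AutoSize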

On the blade acting as the host I have 2 VMs: a file server and a web server.

These machines use the same virtual switches, are hosted on the same CSV, etc. I can't see any differences between them.
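
To rule out a per-VHDX throttle rather than anything cluster-wide, a quick check on the host along these lines should show whether either VM's disks carry an IOPS cap (MinimumIOPS/MaximumIOPS are the per-disk QoS settings Hyper-V exposes):

# Compare both VMs' virtual disk settings side by side; a non-zero MaximumIOPS would throttle that VHDX.
Get-VM | Get-VMHardDiskDrive |
    Format-Table VMName, Path, MinimumIOPS, MaximumIOPS -AutoSize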

I run diskspd with the command diskspd.exe -b64K -d10 -h -L -o8 -w30 -t2 -si -c2G io1.dat io2.dat io3.dat io4.dat io5.dat
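
For reference, a rough breakdown of those switches (per the diskspd documentation):

# -b64K : 64 KB block size                 -d10 : 10-second test duration
# -h    : disable software caching and hardware write caching (write-through)
# -L    : measure latency                  -o8  : 8 outstanding I/Os per thread per target
# -w30  : 30% writes / 70% reads           -t2  : 2 threads per target file
# -si   : sequential access, interlocked across threads
# -c2G  : create each target file at 2 GB if it does not already exist
diskspd.exe -b64K -d10 -h -L -o8 -w30 -t2 -si -c2G io1.dat io2.dat io3.dat io4.dat io5.dat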

I get wildly different results on the two machines. For my web server I normally get between 4,000 and 7,000 MB/s.

Command Line: diskspd.exe -b64K -d10 -h -L -o8 -w30 -t2 -si -c2G io1.dat io2.dat io3.dat io4.dat io5.dat

Input parameters:

	timespan:   1
	-------------
	duration: 10s
	warm up time: 5s
	cool down time: 0s
	measuring latency
	random seed: 0
	path: 'io1.dat'
		think time: 0ms
		burst size: 0
		software cache disabled
		hardware write cache disabled, writethrough on
		performing mix test (read/write ratio: 70/30)
		block size: 65536
		using interlocked sequential I/O (stride: 65536)
		number of outstanding I/O operations: 8
		thread stride size: 0
		threads per file: 2
		using I/O Completion Ports
		IO priority: normal
	path: 'io2.dat'
		think time: 0ms
		burst size: 0
		software cache disabled
		hardware write cache disabled, writethrough on
		performing mix test (read/write ratio: 70/30)
		block size: 65536
		using interlocked sequential I/O (stride: 65536)
		number of outstanding I/O operations: 8
		thread stride size: 0
		threads per file: 2
		using I/O Completion Ports
		IO priority: normal
	path: 'io3.dat'
		think time: 0ms
		burst size: 0
		software cache disabled
		hardware write cache disabled, writethrough on
		performing mix test (read/write ratio: 70/30)
		block size: 65536
		using interlocked sequential I/O (stride: 65536)
		number of outstanding I/O operations: 8
		thread stride size: 0
		threads per file: 2
		using I/O Completion Ports
		IO priority: normal
	path: 'io4.dat'
		think time: 0ms
		burst size: 0
		software cache disabled
		hardware write cache disabled, writethrough on
		performing mix test (read/write ratio: 70/30)
		block size: 65536
		using interlocked sequential I/O (stride: 65536)
		number of outstanding I/O operations: 8
		thread stride size: 0
		threads per file: 2
		using I/O Completion Ports
		IO priority: normal
	path: 'io5.dat'
		think time: 0ms
		burst size: 0
		software cache disabled
		hardware write cache disabled, writethrough on
		performing mix test (read/write ratio: 70/30)
		block size: 65536
		using interlocked sequential I/O (stride: 65536)
		number of outstanding I/O operations: 8
		thread stride size: 0
		threads per file: 2
		using I/O Completion Ports
		IO priority: normal



Results for timespan 1:
*******************************************************************************

actual test time:	10.01s
thread count:		10
proc count:		6

CPU |  Usage |  User  |  Kernel |  Idle
-------------------------------------------
   0|  71.61%|   7.96%|   63.66%|  28.40%
   1|  91.27%|   2.18%|   89.09%|   8.74%
   2|  88.31%|   1.72%|   86.59%|  11.70%
   3|  73.49%|   5.77%|   67.71%|  26.52%
   4|  58.66%|   7.02%|   51.64%|  41.35%
   5|  61.00%|   6.24%|   54.76%|  39.01%
-------------------------------------------
avg.|  74.06%|   5.15%|   68.91%|  25.95%

Total IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |      3973906432 |        60637 |     378.43 |    6054.83 |    1.319 |     1.505 | io1.dat (2048MB)
     1 |      2259091456 |        34471 |     215.13 |    3442.06 |    2.319 |     2.709 | io1.dat (2048MB)
     2 |      2177761280 |        33230 |     207.38 |    3318.14 |    2.406 |     2.798 | io2.dat (2048MB)
     3 |      3795582976 |        57916 |     361.45 |    5783.12 |    1.381 |     1.689 | io2.dat (2048MB)
     4 |      4436721664 |        67699 |     422.50 |    6759.99 |    1.181 |     1.415 | io3.dat (2048MB)
     5 |      4610785280 |        70355 |     439.08 |    7025.20 |    1.136 |     1.380 | io3.dat (2048MB)
     6 |      3938844672 |        60102 |     375.09 |    6001.41 |    1.330 |     1.532 | io4.dat (2048MB)
     7 |      2109800448 |        32193 |     200.91 |    3214.59 |    2.481 |     3.051 | io4.dat (2048MB)
     8 |      1923022848 |        29343 |     183.13 |    2930.01 |    2.727 |     3.024 | io5.dat (2048MB)
     9 |      3701932032 |        56487 |     352.53 |    5640.43 |    1.416 |     1.620 | io5.dat (2048MB)
-----------------------------------------------------------------------------------------------------
total:       32927449088 |       502433 |    3135.61 |   50169.78 |    1.591 |     2.037

Read IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |      2778398720 |        42395 |     264.58 |    4233.30 |    1.256 |     1.444 | io1.dat (2048MB)
     1 |      1585053696 |        24186 |     150.94 |    2415.06 |    2.234 |     2.639 | io1.dat (2048MB)
     2 |      1513947136 |        23101 |     144.17 |    2306.72 |    2.315 |     2.674 | io2.dat (2048MB)
     3 |      2668888064 |        40724 |     254.15 |    4066.44 |    1.316 |     1.559 | io2.dat (2048MB)
     4 |      3111452672 |        47477 |     296.30 |    4740.75 |    1.125 |     1.363 | io3.dat (2048MB)
     5 |      3207200768 |        48938 |     305.41 |    4886.64 |    1.074 |     1.301 | io3.dat (2048MB)
     6 |      2745106432 |        41887 |     261.41 |    4182.57 |    1.273 |     1.483 | io4.dat (2048MB)
     7 |      1470038016 |        22431 |     139.99 |    2239.82 |    2.410 |     3.033 | io4.dat (2048MB)
     8 |      1343029248 |        20493 |     127.89 |    2046.30 |    2.640 |     2.969 | io5.dat (2048MB)
     9 |      2586705920 |        39470 |     246.33 |    3941.22 |    1.352 |     1.568 | io5.dat (2048MB)
-----------------------------------------------------------------------------------------------------
total:       23009820672 |       351102 |    2191.18 |   35058.82 |    1.524 |     1.971

Write IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |      1195507712 |        18242 |     113.85 |    1821.53 |    1.465 |     1.628 | io1.dat (2048MB)
     1 |       674037760 |        10285 |      64.19 |    1026.99 |    2.519 |     2.858 | io1.dat (2048MB)
     2 |       663814144 |        10129 |      63.21 |    1011.42 |    2.613 |     3.050 | io2.dat (2048MB)
     3 |      1126694912 |        17192 |     107.29 |    1716.68 |    1.534 |     1.954 | io2.dat (2048MB)
     4 |      1325268992 |        20222 |     126.20 |    2019.24 |    1.312 |     1.523 | io3.dat (2048MB)
     5 |      1403584512 |        21417 |     133.66 |    2138.57 |    1.278 |     1.535 | io3.dat (2048MB)
     6 |      1193738240 |        18215 |     113.68 |    1818.83 |    1.463 |     1.630 | io4.dat (2048MB)
     7 |       639762432 |         9762 |      60.92 |     974.77 |    2.644 |     3.085 | io4.dat (2048MB)
     8 |       579993600 |         8850 |      55.23 |     883.70 |    2.930 |     3.139 | io5.dat (2048MB)
     9 |      1115226112 |        17017 |     106.20 |    1699.21 |    1.563 |     1.727 | io5.dat (2048MB)
-----------------------------------------------------------------------------------------------------
total:        9917628416 |       151331 |     944.43 |   15110.96 |    1.747 |     2.175


  %-ile |  Read (ms) | Write (ms) | Total (ms)
----------------------------------------------
    min |      0.049 |      0.068 |      0.049
   25th |      0.473 |      0.565 |      0.499
   50th |      0.866 |      1.023 |      0.913
   75th |      1.783 |      2.073 |      1.867
   90th |      3.453 |      3.924 |      3.607
   95th |      5.000 |      5.573 |      5.174
   99th |      9.589 |     10.481 |      9.870
3-nines |     18.446 |     19.987 |     18.903
4-nines |     32.057 |     48.677 |     33.712
5-nines |     54.285 |     58.316 |     55.229
6-nines |     54.430 |     58.835 |     58.835
7-nines |     54.430 |     58.835 |     58.835
8-nines |     54.430 |     58.835 |     58.835
9-nines |     54.430 |     58.835 |     58.835
    max |     54.430 |     58.835 |     58.835

However, on my file server I get really bad results, struggling to reach even a tenth of what the web server gets.

Command Line: diskspd.exe -b64K -d10 -h -L -o8 -w30 -t2 -si -c2G io1.dat io2.dat io3.dat io4.dat io5.dat

Input parameters:

	timespan:   1
	-------------
	duration: 10s
	warm up time: 5s
	cool down time: 0s
	measuring latency
	random seed: 0
	path: 'io1.dat'
		think time: 0ms
		burst size: 0
		software cache disabled
		hardware write cache disabled, writethrough on
		performing mix test (read/write ratio: 70/30)
		block size: 65536
		using interlocked sequential I/O (stride: 65536)
		number of outstanding I/O operations: 8
		thread stride size: 0
		threads per file: 2
		using I/O Completion Ports
		IO priority: normal
	path: 'io2.dat'
		think time: 0ms
		burst size: 0
		software cache disabled
		hardware write cache disabled, writethrough on
		performing mix test (read/write ratio: 70/30)
		block size: 65536
		using interlocked sequential I/O (stride: 65536)
		number of outstanding I/O operations: 8
		thread stride size: 0
		threads per file: 2
		using I/O Completion Ports
		IO priority: normal
	path: 'io3.dat'
		think time: 0ms
		burst size: 0
		software cache disabled
		hardware write cache disabled, writethrough on
		performing mix test (read/write ratio: 70/30)
		block size: 65536
		using interlocked sequential I/O (stride: 65536)
		number of outstanding I/O operations: 8
		thread stride size: 0
		threads per file: 2
		using I/O Completion Ports
		IO priority: normal
	path: 'io4.dat'
		think time: 0ms
		burst size: 0
		software cache disabled
		hardware write cache disabled, writethrough on
		performing mix test (read/write ratio: 70/30)
		block size: 65536
		using interlocked sequential I/O (stride: 65536)
		number of outstanding I/O operations: 8
		thread stride size: 0
		threads per file: 2
		using I/O Completion Ports
		IO priority: normal
	path: 'io5.dat'
		think time: 0ms
		burst size: 0
		software cache disabled
		hardware write cache disabled, writethrough on
		performing mix test (read/write ratio: 70/30)
		block size: 65536
		using interlocked sequential I/O (stride: 65536)
		number of outstanding I/O operations: 8
		thread stride size: 0
		threads per file: 2
		using I/O Completion Ports
		IO priority: normal



Results for timespan 1:
*******************************************************************************

actual test time:	10.02s
thread count:		10
proc count:		8

CPU |  Usage |  User  |  Kernel |  Idle
-------------------------------------------
   0|  30.89%|   2.50%|   28.39%|  69.11%
   1|  42.90%|   1.87%|   41.03%|  57.10%
   2|  25.43%|   1.40%|   24.02%|  74.57%
   3|  30.73%|   2.03%|   28.70%|  69.26%
   4|  35.10%|   2.96%|   32.14%|  64.90%
   5|  24.65%|   2.34%|   22.31%|  75.35%
   6|  23.56%|   4.21%|   19.34%|  76.44%
   7|  24.65%|   3.59%|   21.06%|  75.35%
-------------------------------------------
avg.|  29.74%|   2.61%|   27.12%|  70.26%

Total IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |       543424512 |         8292 |      51.74 |     827.88 |    9.680 |    17.421 | io1.dat (2048MB)
     1 |       507314176 |         7741 |      48.30 |     772.86 |   10.360 |    18.299 | io1.dat (2048MB)
     2 |       411631616 |         6281 |      39.19 |     627.10 |   12.777 |    17.723 | io2.dat (2048MB)
     3 |       409206784 |         6244 |      38.96 |     623.40 |   12.854 |    18.188 | io2.dat (2048MB)
     4 |       438632448 |         6693 |      41.76 |     668.23 |   11.983 |    17.284 | io3.dat (2048MB)
     5 |       457900032 |         6987 |      43.60 |     697.58 |   11.482 |    16.250 | io3.dat (2048MB)
     6 |       453443584 |         6919 |      43.17 |     690.79 |   11.592 |    15.931 | io4.dat (2048MB)
     7 |       457834496 |         6986 |      43.59 |     697.48 |   11.486 |    16.087 | io4.dat (2048MB)
     8 |       758054912 |        11567 |      72.18 |    1154.85 |    6.941 |    15.962 | io5.dat (2048MB)
     9 |       717094912 |        10942 |      68.28 |    1092.45 |    7.335 |    16.408 | io5.dat (2048MB)
-----------------------------------------------------------------------------------------------------
total:        5154537472 |        78652 |     490.79 |    7852.64 |   10.202 |    17.029

Read IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |       380567552 |         5807 |      36.24 |     579.77 |    4.652 |     8.885 | io1.dat (2048MB)
     1 |       355336192 |         5422 |      33.83 |     541.33 |    4.990 |    10.374 | io1.dat (2048MB)
     2 |       288227328 |         4398 |      27.44 |     439.10 |    7.562 |    10.595 | io2.dat (2048MB)
     3 |       285343744 |         4354 |      27.17 |     434.70 |    7.416 |     9.410 | io2.dat (2048MB)
     4 |       307953664 |         4699 |      29.32 |     469.15 |    6.946 |     7.558 | io3.dat (2048MB)
     5 |       323158016 |         4931 |      30.77 |     492.31 |    6.884 |     8.482 | io3.dat (2048MB)
     6 |       320798720 |         4895 |      30.54 |     488.72 |    6.769 |     6.958 | io4.dat (2048MB)
     7 |       323289088 |         4933 |      30.78 |     492.51 |    6.860 |     7.378 | io4.dat (2048MB)
     8 |       531103744 |         8104 |      50.57 |     809.11 |    2.631 |     6.827 | io5.dat (2048MB)
     9 |       504102912 |         7692 |      48.00 |     767.97 |    2.683 |     7.660 | io5.dat (2048MB)
-----------------------------------------------------------------------------------------------------
total:        3619880960 |        55235 |     344.67 |    5514.68 |    5.343 |     8.597

Write IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |       162856960 |         2485 |      15.51 |     248.10 |   21.430 |    25.121 | io1.dat (2048MB)
     1 |       151977984 |         2319 |      14.47 |     231.53 |   22.916 |    25.320 | io1.dat (2048MB)
     2 |       123404288 |         1883 |      11.75 |     188.00 |   24.958 |    23.951 | io2.dat (2048MB)
     3 |       123863040 |         1890 |      11.79 |     188.70 |   25.381 |    25.765 | io2.dat (2048MB)
     4 |       130678784 |         1994 |      12.44 |     199.08 |   23.854 |    25.836 | io3.dat (2048MB)
     5 |       134742016 |         2056 |      12.83 |     205.27 |   22.510 |    23.506 | io3.dat (2048MB)
     6 |       132644864 |         2024 |      12.63 |     202.08 |   23.258 |    23.625 | io4.dat (2048MB)
     7 |       134545408 |         2053 |      12.81 |     204.97 |   22.601 |    23.976 | io4.dat (2048MB)
     8 |       226951168 |         3463 |      21.61 |     345.75 |   17.028 |    24.428 | io5.dat (2048MB)
     9 |       212992000 |         3250 |      20.28 |     324.48 |   18.345 |    24.394 | io5.dat (2048MB)
-----------------------------------------------------------------------------------------------------
total:        1534656512 |        23417 |     146.12 |    2337.96 |   21.662 |    24.751


  %-ile |  Read (ms) | Write (ms) | Total (ms)
----------------------------------------------
    min |      0.060 |      0.144 |      0.060
   25th |      1.373 |      4.434 |      1.789
   50th |      3.407 |     14.343 |      4.431
   75th |      6.330 |     32.654 |      9.986
   90th |     10.675 |     45.074 |     29.881
   95th |     15.136 |     55.919 |     40.244
   99th |     39.955 |    130.407 |     75.044
3-nines |     90.429 |    239.582 |    171.134
4-nines |    224.555 |    254.966 |    245.673
5-nines |    228.132 |    255.729 |    255.729
6-nines |    228.132 |    255.729 |    255.729
7-nines |    228.132 |    255.729 |    255.729
8-nines |    228.132 |    255.729 |    255.729
9-nines |    228.132 |    255.729 |    255.729
    max |    228.132 |    255.729 |    255.729

I am at a loss as to what can cause this. They are on the same CSV, same host, using the same virtual switch.

Any help with what could be causing this is greatly appreciated! Storage Spaces is the bane of my life.
Philip Elder, Technical Architect - HA/Compute/Storage

Commented:
What is the setup for each VM? Are they identical?
Any StorageQoS policies applied to one but not the other?

Author

Commented:
Hi Philip, thanks for responding!

Each VM is identical apart from RAM and processor; the guest running slower actually has more RAM and CPU assigned. All files for each VM, including the config files, are stored on the same CSV.

Both are running at configuration version 8 on Server 2016. There are no IOPS limits set under the VHDX settings.

There are no QoS policies applied on the S2D cluster other than the default one.

PS C:\S2D Scripts> Get-StorageQosPolicy

Name    MinimumIops MaximumIops MaximumIOBandwidth Status
----    ----------- ----------- ------------------ ------
Default 0           0           0 MB/s             Ok
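
If it helps, the live per-VHDX flows can be compared as well, not just the policies; run from a cluster node, something along these lines should show whether the two VMs' flows look different or are bound to different policies (property names as per the Storage QoS documentation):

# One row per open VHDX flow; compare the web server's and file server's entries.
Get-StorageQosFlow |
    Sort-Object InitiatorName |
    Format-Table InitiatorName, FilePath, Status, StorageNodeIOPs -AutoSize
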
Philip Elder, Technical Architect - HA/Compute/Storage

Commented:
Okay. Flip the slower VM's settings over to match the better-performing one, vCPU- and vRAM-wise. Run those tests again.

Author

Commented:
I have changed everything to match the VM that performs well; however, the speeds still show the same gulf.

I can copy a file to the C drive of the fast server and it flies through at around 600-800 MB/s; however, when I copy to the slow machine it struggles to even hit 100 MB/s, drops to 0, and pauses for a few seconds before resuming. It will do this several times over the course of a copy.
Philip Elder, Technical Architect - HA/Compute/Storage

Commented:
Do the VHDX file(s) share the same location? Can the slow VM's VHDX file(s) be moved to the same location/CSV as the fast one?

When the virtual disk was created, was -WriteCacheSize used with New-VirtualDisk?

Author

Commented:
I didn't use a write cache size, as none of the guides include it and I have NVMe cache as well. If I should use it, I have space to move everything to one CSV and recreate the other, then do the same with the second.

They are currently both on the same CSV.
Philip Elder, Technical Architect - HA/Compute/Storage

Commented:
The -WriteCacheSize switch sets aside physical RAM on the owner host for caching. It speeds things up depending on what is being hosted in the CSV. The catch is that it leaves less RAM available for virtual machines.
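
For reference, a minimal sketch of a tiered virtual disk created with that switch, assuming the pool name shown earlier and the default S2D tier templates; the friendly name, tier names, tier sizes, and cache size below are placeholders, not recommendations:

# Hypothetical example only - adjust tier names/sizes to whatever Get-StorageTier reports on the cluster.
New-VirtualDisk -StoragePoolFriendlyName "S2D on S2D-S2DC" -FriendlyName "CSV-Test" `
    -StorageTiers (Get-StorageTier -FriendlyName Performance), (Get-StorageTier -FriendlyName Capacity) `
    -StorageTierSizes 1TB, 4TB `
    -WriteCacheSize 10GB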

Author

Commented:
I am going to try this today. I don't host VMs on my S2D servers at the moment, and each has around 192 GB of RAM, so assigning around 40 GB to both CSVs shouldn't be an issue at all.

The only other thing I thought it could be is something to do with power-loss protection, as I had to set IsPowerProtected to true manually on the storage pool. I also had to set my disks not to use write-back cache, as they do support power-loss protection.
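
To double-check what the pool currently reports, something like this from an S2D node should do it (read-only sanity check):

# Non-primordial pools only; IsPowerProtected governs whether writes can be acknowledged from cache.
Get-StoragePool -IsPrimordial $false | Format-Table FriendlyName, IsPowerProtected, HealthStatus -AutoSize
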
Philip Elder, Technical Architect - HA/Compute/Storage

Commented:
Note: I tried to set up a CSV on a newly stood-up Server 2019 shared SAS cluster using the -WriteCacheSize switch and kept getting a "There's not enough room in the pool to create the virtual disk" error. I ended up creating the CSV without that switch to move the project forward.

Author

Commented:
I have new 40 Gb switches and NICs that are RDMA capable (Mellanox InfiniBand) being delivered today, so I am going to put them in tonight and see if that makes any difference.

Still very weird that machines on the same host and same CSV have different speeds though.
Philip Elder, Technical Architect - HA/Compute/Storage

Commented:
We planned to get IB going with SOFS and the North-South fabric back in the day but settled on Mellanox-based RoCE for RDMA instead. I don't remember the "why". :S

Author

Commented:
It's early testing yet, but speeds with the 40 Gb adapters in seem to be miles better than through the 10 Gb NICs that didn't support RDMA.

When running diskspd I get a constant 4 GB flowing through one port.

I'm going to extend this to a few more machines to check it works properly before migrating my file server to the new switches, and then we will see what sort of impact it has!

Author

Commented:
Something I have noticed is that under the disk properties > Policies tab, write caching is disabled for my SSDs and NVMe drives despite them having PLP. Should this be enabled, or does S2D disable it and handle caching on its own? I did run

Set-StoragePool clusterpool -IsPowerProtected $True, as it was showing as false by default.

Along with this, my MPIO policy is currently set to Round Robin; however, tonight I will set it to Least Blocks, as from what I have read this is the recommended setting.
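
For the MPIO change, the in-box MPIO cmdlets should cover it; a minimal sketch, assuming the Microsoft DSM is the one claiming the SAS paths (LB is the Least Blocks policy):

# Check the current default load-balance policy for the Microsoft DSM, then switch it to Least Blocks.
Get-MSDSMGlobalDefaultLoadBalancePolicy
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy LB
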
Philip Elder, Technical Architect - HA/Compute/Storage

Commented:
Least Block Depth for MPIO in shared SAS Storage Spaces is the way to go.

Author

Commented:
I'm going to take back the comment about it working better on the 40 Gb network. Running the command diskspd.exe -t32 -b4k -r4k -o8 -w30 -d10 -D -L testfile.dat, I am now getting around 21 MB/s and 5,499 IOPS. Yet on my desktop with a Samsung 850 EVO I get 287.33 MB/s and 73,557.45 IOPS.
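
One caveat on that comparison: as written, the command leaves buffered I/O enabled (no -h) and doesn't create the target file (no -c), so the desktop run may be benefiting from the Windows cache. A closer like-for-like variant might look something like this (the 4 GB file size is just a placeholder):

# Same 4K random 70/30 mix, but with caching disabled and a fixed-size test file on both systems.
diskspd.exe -t32 -b4k -r4k -o8 -w30 -d10 -D -h -L -c4G testfile.dat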

Unless this setup just hates using a JBOD, I don't know what else could be causing the issues.

Going to have to start looking at alternative solutions.
Philip Elder, Technical Architect - HA/Compute/Storage
Commented:
At this point it's really difficult to say. Forums are great for spot help but something like this can be difficult to walk through. :(
