I administer SCCM in the education field for a living. Our school division comprises 43 sites spread over a large rural area; it takes approximately 3 hours to drive between its furthest points. Our main site (the main office and primary site for our SCCM infrastructure) has a 100Mbps link, and each of our schools connects to it via a 3Mbps link.
After considering various products such as Altiris, Casper, and SCCM, our division settled on SCCM as our systems management suite of choice. This was largely due to a pre-existing license agreement under which we were already paying for Client MLs.
After consulting various sources on the internet and completing a certification exam, I went to work designing the infrastructure. Infrastructure-wise, the system worked flawlessly: roles deployed correctly and clients were talking to the management point. Then we distributed our Windows 7 image to our DPs (Distribution Points), and the network ground to a halt. Examination of our head-end routing appliance revealed that SCCM had consumed 90% of the bandwidth on our 100Mbps link almost instantaneously, and the 3Mbps connections at all sites were pinned as well.
Needless to say, management was not impressed.
In consultation with management and our Network Analyst, we weighed our options. I pointed out SCCM's built-in rate limiting features, but because we did not fully understand them, they did not work the way we had hoped. Our Network Analyst thought QoS at the network level might be the best solution, so we implemented QoS on the switch port used by the SCCM server. This approach proved impractical. It could only limit the overall bandwidth leaving the SCCM server's interface, whereas we wanted to cap the data received by each offsite location at approximately 30% of its link (about 1Mbps) and cap the data pushed from the head office at approximately 30% of ours (25-30Mbps). The other issue was that the hardware appliance we used for QoS could not consistently tag the SCCM server's traffic, so the limit worked only intermittently, depending on how the traffic happened to be tagged.
So I took another look at rate limiting in SCCM. Documentation on rate limiting is limited to a couple of pages on TechNet, and even fewer discussions explain how it truly works. Compounding the frustration is the lack of documented real-world examples of its implementation.
Rate limiting is configured on the Rate Limits tab of a DP's properties, in one of two ways: transfer rate by hour, or Pulse Mode. One catch that is barely mentioned in most discussions and documentation is that whenever either method is used, only one package at a time can be sent to the throttled DP (by default, up to 3 packages may be sent concurrently per site to non-throttled DPs).
In conjunction with rate limits, the General tab of the Software Distribution Component's properties page also affects how well these methods work. This correlation between rate limits and the Software Distribution Component properties is likewise not well documented.
Transfer rate by hour
- This method allows you to limit bandwidth to a configured percentage for each hour of the day. What this actually means is not clear from the documentation…
- If a package is distributed to a DP throttled in this way and the percentage is set to 25%, then for a given scheduling window (say 8 seconds) SCCM sends at full line rate for roughly 2 seconds and yields for the remaining 6; at 50% it sends for 4 seconds and yields 4; at 75% it sends for 6 and yields 2. On a line graph this appears as bursts of 100% usage of the available bandwidth lasting 2, 4, and 6 seconds respectively.
- The drawback of this method is that bandwidth-sensitive applications may lose connectivity if they need to send data during the 100%-usage bursts
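The send/yield duty cycle described above can be sketched as a quick calculation. Note that the 8-second window here is purely illustrative; SCCM's internal scheduling interval is not documented, so treat this as a model rather than the actual implementation:

```python
# Model of the transfer-rate-by-hour duty cycle: for a given limit
# percentage, SCCM bursts at full line rate for part of a window
# and yields for the rest. Window length is an assumption.

def duty_cycle(limit_percent, window_seconds=8):
    """Return (seconds sending at 100%, seconds yielding) per window."""
    send = window_seconds * limit_percent / 100.0
    return send, window_seconds - send

for pct in (25, 50, 75):
    send, idle = duty_cycle(pct)
    print(f"{pct}% limit: send {send:.0f}s at full rate, yield {idle:.0f}s")
```

The average usage over the window matches the configured percentage, but the instantaneous usage during a burst is still 100% of the link.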
Pulse mode
- This method sends data in chunks of up to 256KB in size (depending on what you configure) and lets you control the delay between each chunk, in seconds
- The advantage of this method is the more granular control over the chunk size and the delay between chunks. This lets you control both the rate at which you send data and the rate at which each DP receives it
- The disadvantage is that you can only send chunks of up to 256KB
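The effective rate in Pulse Mode follows directly from the block size and the delay. A rough model, assuming one block per delay interval and ignoring transmission time and protocol overhead (which is why real-world numbers come in slightly lower):

```python
# Rough Pulse Mode throughput model: one block of `block_kb`
# kilobytes sent every `delay_s` seconds.

def pulse_rate_mbps(block_kb, delay_s):
    """Effective rate in megabits per second for a block size and delay."""
    bits = block_kb * 1024 * 8
    return bits / delay_s / 1_000_000

# A 256KB block every second works out to roughly 2Mbps per DP.
print(round(pulse_rate_mbps(256, 1), 2))  # 2.1
```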
Software Distribution Component Properties
- Under the Site Configuration -> Sites node in the ConfigMgr console, select Configure Site Components -> Software Distribution to open the properties page.
- By default “Maximum number of packages” and “Maximum threads per package” are set to 3 and 5 respectively.
- When rate limits are configured, the "Maximum number of packages" setting does not apply, as it is automatically throttled to 1.
- As a result, the "Maximum threads per package" setting effectively controls the number of DPs to which a package is processed and sent simultaneously.
Because of the granular control we could achieve by combining Pulse Mode with the Software Distribution Component properties, that combination became the clear winner. It just required some math.
Note: the following calculations are estimates; once implemented, actual mileage varied, but the configuration achieved the desired bandwidth usage range:
- Bandwidth Usage Objective (Sending Site / Primary Site) = 20-30Mbps
- Bandwidth Usage Objective (DP) = 1.5-2Mbps
- Bandwidth Capacity (Sending Site / Primary Site) = 100Mbps
- Bandwidth Capacity (DP) = 3Mbps
- Size of Pulse Mode Data Block = 256KB, sent once per second ≈ 2Mbps
- Delay in seconds = 1
Based on the numbers above, we configured our environment to send 1 package across 17 threads at a data rate of 2Mbps per thread. In theory this equates to the following:
- Bandwidth Usage (Sending Site / Primary Site) -> 17 threads * 2Mbps per thread = 34Mbps
- Bandwidth Usage (DP) -> 1 thread * 2Mbps = 2Mbps
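The theoretical numbers reduce to simple multiplication, and it is worth sanity-checking them against the link capacities before committing to a thread count:

```python
# Aggregate primary-site usage is threads x per-thread pulse rate;
# each DP receives only one thread at a time.

threads = 17
per_thread_mbps = 2        # ~256KB block once per second

primary_mbps = threads * per_thread_mbps
dp_mbps = 1 * per_thread_mbps

print(primary_mbps)  # 34 -> comfortably inside the 100Mbps head-end link
print(dp_mbps)       # 2  -> inside each 3Mbps site link
```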
In practice, the numbers we ended up seeing achieved our bandwidth goals. With this method, multiple packages can be queued, but they process and send in sequence to the distribution points:
- Max send rate per site -> 1.7Mbps * 3600 sec = 6120Mb per hour
- Converting this to megabytes gives 765MB per hour, so as a theoretical example, sending a 3.8GB image file to 17 sites would take approximately 5 hours. As each thread finishes, another is processed.
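The arithmetic behind the roughly five-hour estimate, using the observed ~1.7Mbps per-site rate:

```python
# Since all 17 threads run in parallel (one per site), the per-site
# transfer time is also the total time for all 17 sites.

rate_mbps = 1.7
mb_per_hour = rate_mbps * 3600 / 8       # megabits/s -> megabytes/hour
image_mb = 3.8 * 1024                    # 3.8GB image, binary units assumed

print(round(mb_per_hour))                # 765 MB per hour
print(round(image_mb / mb_per_hour, 1))  # ~5.1 hours
```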
- The result is a highly efficient transfer system for our large SCCM packages during off hours, and an efficient way to distribute package changes through delta replication during the day. Bandwidth usage never exceeds 30Mbps at our head end, and it never exceeds 1.7Mbps at our DPs.
The following rules of thumb can be used to tweak the send rates above:
- To reduce the speed at which your DPs receive data from the primary site server, reduce the block size from 256KB
- To increase or decrease the aggregate rate at which the primary site server sends data to your DPs, adjust the number of threads
I hope this helps you understand the concept of rate limiting and how it functions in SCCM. Grasping the concept and its nuances has helped our SCCM infrastructure run much more efficiently.