Internet Upload/Download. Need my math checked.

I have Comcast, 20 Mbps up / 150 Mbps down. I essentially want to know how much I can upload and download per hour. I come up with roughly 9 GB upload per hour and 62 GB download per hour. Of course, that is theoretical. Is my math correct?
Asked by LockDown32 (Owner)
 
David Needham (Freelance Consultant) commented:
That's a fair assessment. I get about 8.8 GB up and nearly 66 GB down.
 
John (Business Consultant, Owner) commented:
For networks, I use 10 bits per byte instead of 8 (to allow for packet overhead). That gives 150 / 10 = 15 MB/s, and 15 MB/s × 3600 s = 54,000 MB = 54 GB/hour down; likewise 20 / 10 = 2 MB/s, and 2 MB/s × 3600 s = 7,200 MB = 7.2 GB/hour up. However, the basis of your calculation (the methodology) is quite correct.
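In Python, that 10-bits-per-byte rule of thumb looks like this (a minimal sketch, not code from the thread; it uses decimal megabytes, which is what makes the 54 and 7.2 come out exactly):

    # Rule of thumb: 10 bits per byte to absorb packet overhead.
    down_mbps, up_mbps = 150, 20
    print(down_mbps / 10 * 3600 / 1000)  # 54.0 GB/hour down (decimal GB)
    print(up_mbps / 10 * 3600 / 1000)    # 7.2 GB/hour up (decimal GB)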
 
LockDown32 (Author) commented:
Interesting. And you actually get close to theoretical? I am watching an Acronis cloud backup: it took about 3 hours to back up 50 GB of data, so I would have to assume the backup software was getting decent compression. So if you don't get close to theoretical (and restoring isn't even close to 66 GB/hr), then it's probably an Acronis issue?

(Unless I have reached the 1TB limit for Comcast this month LOL!)
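A back-of-the-envelope check of that backup in Python (a sketch using the numbers from the post; it assumes the 20 Mbps uplink was the only pipe in play):

    # Apparent Acronis rate vs. what a 20 Mbps uplink can theoretically carry.
    data_gb = 50                          # backup size on disk
    hours = 3                             # wall-clock time
    apparent = data_gb / hours            # ~16.7 GB/hour apparent
    theoretical = 20 / 8 * 3600 / 1024    # ~8.79 GB/hour raw uplink
    print(apparent / theoretical)         # ~1.9x => consistent with ~2:1 compression

If the apparent rate is roughly double what the raw uplink could move, decent compression (or deduplication) is the likeliest explanation.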
 
John (Business Consultant, Owner) commented:
Theoretical data transfer rates are just that and only that. Backup/restore also takes CPU and disk time, so real-world rates will be quite a bit lower than unhampered theory.
 
David Needham (Freelance Consultant) commented:
There will be many potential factors:
- Acronis itself, as you say.
- Resources on the machine controlling the restore.
- Resources at the other end (hardware/bandwidth).
...to name a few.
 
Ben Personick (Previously QCubed), Lead Network Engineer, commented:
(20 / 8) = 2.5 MB/s; 2.5 × 3600 = 9,000 MB/hour; 9,000 / 1024 = 8.79 GB/hour.

((150 / 8) × 3600) / 1024 = 65.92 GB/hour.

Math checks.
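The same arithmetic as a small Python helper (a sketch; the function name and the 10-bit variant shown for comparison are illustrative, not from the thread):

    def gb_per_hour(mbps, bits_per_byte=8):
        """Theoretical volume per hour at a given line rate, in binary GB."""
        return mbps / bits_per_byte * 3600 / 1024

    print(gb_per_hour(20))                     # 8.79  (upload)
    print(gb_per_hour(150))                    # 65.92 (download)
    print(gb_per_hour(150, bits_per_byte=10))  # 52.73 (10-bit rule of thumb)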
 
Ben Personick (Previously QCubed), Lead Network Engineer, commented:
FYI, you will only be able to saturate your bandwidth with a UDP-based file transfer like the one available from CERN, or IBM's "Aspera". TCP has a lot of overhead, and the more latency, the less throughput.
 
Gerald Connolly commented:
Ben, you need to use 10-bit bytes for any kind of network link, so ((150 / 10) × 3600) / 1024 = 52.73 GB/hr.
 
Ben Personick (Previously QCubed), Lead Network Engineer, commented:
He asked for theoretical throughput; you only use 8 for actual theoretical throughput.

The value of 10 can be used to adjust for some effects of latency and overhead that you might want to take into account, but it's far from being a canonical method. Depending on several factors, you could be quite a bit off from real achievable bandwidth using either 8 or 10.

For instance: whether you have high-end equipment that guarantees line-speed transmission, low or high latency, whether you can use jumbo frames or an alternative protocol that better matches the data being sent (such as ATM for very small, frequent transmissions), or, in the most typical scenario, the "type" of traffic (Telnet? SSH? FTP? HTTP(S)?) and whether you are using a UDP or TCP protocol.

The OP just asked for ballpark theoretical values. You might have 1 Gbit/s of bandwidth and, due to overhead from various places (especially latency), get only 1 Mb/s max transferring by FTP to a remote site.
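To put a number on the latency point, here is a sketch of the classic window-limited TCP bound (illustrative values; real stacks use window scaling, so treat this as the worst case being described):

    # Without window scaling, a TCP sender can have at most one receive
    # window in flight per round trip: throughput <= window / RTT.
    window_bytes = 64 * 1024            # classic 64 KB receive window
    rtt_s = 0.100                       # 100 ms round trip to a remote site
    max_bps = window_bytes * 8 / rtt_s
    print(max_bps / 1e6)                # ~5.2 Mb/s -- far below a 150 Mbps pipe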
 
LockDown32 (Author) commented:
I asked for ballpark for just that reason; an exact number is not doable. I have also heard that when you talk hard drive I/O you need to use 10 bits per byte too. Thanks, guys.
 
John (Business Consultant, Owner) commented:
You are very welcome and I was happy to help you with this.
 
Ben Personick (Previously QCubed), Lead Network Engineer, commented:
Glad to help. :)
 
Gerald Connolly commented:
Ben and Lockdown,

Storage (spinning rust or SSD) uses 8-bit bytes as the unit of transfer, and any overhead is due to RAID and redundancy.

That said, all comms links, including Fibre Channel, use 10-bit bytes for transferring data. That isn't for redundancy or for calculating latency; it's about making the signal on the wire reliably decipherable.
 
Ben Personick (Previously QCubed), Lead Network Engineer, commented:
Hey Gerald,

The question is about network equipment, which also uses 8-bit bytes.

However, network communication has overhead: every packet and frame has headers that count as data transmitted, as do the normal TCP challenge/response exchanges, etc.

That overhead counts as bits and bytes transmitted, reducing the throughput that can be dedicated to the data itself.

That is why you can use a very quick-and-dirty conversion that treats the data to be transmitted as if it had 10 bits per byte. It isn't a very accurate approximation, but it works for generalized scenarios.

So the usage you are billed for will always be higher than the data as it sits on disk, 100% of the time; multiplying the data bytes by 10 instead of 8 kills two birds with one stone, giving a fudged estimate of both the data reported as transmitted and the throughput.

Throughput itself, however, is still calculated at 8 bits/byte and should take many more factors into account. Data sent and transfer times can vary greatly from this fudged value, so for accurate accounting you would take more factors into account and come out with quite separate values for data transmitted and maximum expected throughput.
 
Ben Personick (Previously QCubed), Lead Network Engineer, commented:
Also, in your analogy with disks, there are more factors in play than RAID level: they affect latency, throughput, the amount of data written to or read from the device, and how much space the data takes up on disk.
 
Gerald Connolly commented:
Ben, I think you are missing the point. I am not talking about packet overhead; this is at the wire level, where bytes are upscaled from 8 bits to 10 bits before transmission so that the receiving end can reliably decipher the incoming bit stream. https://en.wikipedia.org/wiki/8b/10b_encoding
 
Ben Personick (Previously QCubed), Lead Network Engineer, commented:
The concept you're talking about is not relevant to the discussion, and would not affect throughput if it were.

Feel free to continue to post on the topic if you like.
 
Gerald Connolly commented:
Of course it's relevant!
We were talking about the capacity of a link. Its hardware bit rate is fixed, so using 10 bits per byte obviously lowers the throughput compared with using 8 bits per byte. It's the same reason that Gbit links like Fibre Channel are rated at 100 MB/s per Gb/s (i.e., 1 Gb/s = 100 MB/s, 10 Gb/s = 1 GB/s): it's not just about the overhead, it's also about the 8b/10b encoding, although FC at 10 Gb and above uses 64b/66b encoding.
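The 8b/10b arithmetic behind that FC rating is easy to check (a sketch; the variable names are illustrative):

    # 8b/10b line coding: every 8 payload bits travel as 10 bits on the wire.
    line_rate_bps = 1e9                   # 1 Gb/s Fibre Channel line rate
    payload_bps = line_rate_bps * 8 / 10  # usable payload bits per second
    print(payload_bps / 8 / 1e6)          # 100.0 MB/s -- the "1 Gb/s = 100 MB/s" rating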