Advantages/disadvantages of small vs. large packets

I know that sending data in small packets (~100 bytes) is more reliable, while larger packets (~10 KB) mean the data is transmitted quicker. Are there any other advantages or disadvantages?
One other issue is what each will do to the rest of the network

If you want users to experience better response time, you need smaller packets. Why? Because a bunch of smaller packets lets each user get a little bit of data at a time. Even though there may be more packets queued, they are smaller and can be forwarded more quickly.
With larger packets, even though the queue is shorter, it takes longer to send each large packet.

However, if you want better user data throughput, you want larger packets, so that you reduce the overhead of the headers.

Say I have 100,000 bytes to transfer. To get the best data throughput, it would be nice if I could send all 100,000 bytes in one message. However, that assumes I am the only person on the network. If you have a 100,000-byte message, then whoever gets in line second must wait. However, if we send the data out in 100-byte packets, then our packets are interleaved. Depending on how the application is written, I may be able to see part of the data as it comes in, which makes me perceive that I am getting better response time. If the application must wait for all 100,000 bytes, then I must wait.
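A quick back-of-the-envelope sketch of the point above. The 10 Mbps link speed here is an assumption for illustration, and propagation and queueing at other hops are ignored; the numbers only show the relative difference:

```python
# Serialization delay: time to put one packet on the wire.
LINK_BPS = 10_000_000  # assumed 10 Mbps link, for illustration only

def serialize_ms(packet_bytes):
    """Milliseconds to transmit one packet at LINK_BPS."""
    return packet_bytes * 8 / LINK_BPS * 1000

# If the 100,000-byte message goes out as one block, whoever is
# second in line waits for the whole thing before their first byte moves:
wait_large = serialize_ms(100_000)

# If it goes out as 100-byte packets interleaved with other traffic,
# the second user waits only about one small-packet time:
wait_small = serialize_ms(100)

print(f"wait behind one 100,000-byte message: {wait_large:.2f} ms")
print(f"wait behind one 100-byte packet:      {wait_small:.2f} ms")
```

Same total bytes either way; the small packets just let everyone's traffic take turns.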

Think of the above as checkout lines in a grocery store.  

The express lanes are the "smaller" packets and the normal lanes are the larger packets.

I can get more people through the express lanes because they have fewer items, so I can serve more people in less time.

However, in the normal lanes I get better resource utilization: I can actually scan more items per minute because I need to stop less often to start a new receipt and accept payment.

Now for the really confusing part. A packet is not always a packet. You need to be careful about what you are looking at and when. For example, 10 and 100 Mbps "Ethernet" have a max frame size of 1518 bytes, of which 18 bytes are the "Ethernet" frame header information. However, the IEEE standard that maps to "Ethernet" (802.3, I believe) has an additional 8 bytes of header information, so the max payload is 1492. Then you have Gigabit Ethernet, which can go up to just over 9,000 bytes.

Then you have Frame Relay, which can go to 18K I think, and ATM, which uses 53-byte "cells". You see, each level within the networking protocols has its own "max" message size.
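To make the header-tax concrete, here is a sketch of payload efficiency for a few of the frame sizes mentioned above. The header/trailer byte counts are the rough figures from this thread, not authoritative values:

```python
# Rough payload efficiency per frame type.
# (total frame bytes, overhead bytes) -- approximations from the thread.
frames = {
    "Ethernet II (1518 max)": (1518, 18),   # 14-byte header + 4-byte FCS
    "802.3 w/ extra header":  (1518, 26),   # the additional 8 bytes noted above
    "ATM cell":               (53, 5),      # 5-byte cell header, 48-byte payload
    "Jumbo frame (~9000)":    (9018, 18),   # assumed same 18-byte overhead
}

for name, (total, overhead) in frames.items():
    payload = total - overhead
    print(f"{name}: {payload} payload bytes, {payload / total:.1%} efficient")
```

The pattern is clear: the bigger the frame, the smaller the fraction eaten by headers, which is exactly the small-vs-large trade-off being discussed.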

Actually, on standard Ethernet, TCP is limited to a maximum transmit size of 1500 bytes total (the MTU). Anything larger is fragmented (chopped up) ... unless you're on a GigE network, which can support "jumbo frames".

The major advantage is efficiency. Let's say, for instance, that TCP adds 50 bytes of overhead per packet. You need to transmit, let's say, 100,000 bytes in 100-byte packets. That's 1,000 packets, so just for protocol overhead (which is the part that has the destination, the source, the CRC, the protocol type, etc.: header and footer) that's an extra 50,000 bytes, for a total of 150,000 bytes transmitted.

Now let's say you're transmitting more efficiently with larger packets holding 1450 bytes of data and 50 bytes of overhead. Now you're down to 69 packets and only about 3,450 extra bytes, for a total of roughly 103,450 bytes transmitted. That's over 900 fewer packets that have to be routed and looked up by each switch and router along the way.

(Before I get flamed: I just pulled the 50 bytes of TCP overhead out of the air; I didn't look up the exact figure.)
TCP+IP overhead is 40 bytes.
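Using that 40-byte TCP+IP figure, the overhead comparison above can be sketched as a quick calculation (1460 bytes is the data that fits in a 1500-byte packet after 40 bytes of headers):

```python
import math

OVERHEAD = 40       # TCP + IP headers, no options
MESSAGE = 100_000   # bytes of application data to move

def wire_cost(payload_per_packet):
    """Return (packet count, total bytes on the wire) for a given payload size."""
    packets = math.ceil(MESSAGE / payload_per_packet)
    return packets, MESSAGE + packets * OVERHEAD

small = wire_cost(100)    # 100-byte payloads
large = wire_cost(1460)   # full 1500-byte packets

print(f"100-byte payloads:  {small[0]} packets, {small[1]} bytes total")
print(f"1460-byte payloads: {large[0]} packets, {large[1]} bytes total")
```

The larger packets cut both the bytes on the wire and, more importantly for the routers in the path, the number of packets to forward.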

The biggest problem in the past has been CPU power on the endpoints. Although today some of the TCP and IP overhead can be offloaded to NICs, on most systems the CPU actually needs to be interrupted for each packet. The fewer packets, the fewer interruptions. This is one of the reasons that 16 Mbit Token Ring could get double the throughput of 10 Mbit Ethernet: Token Ring has frame sizes of up to 18K, over 10 times that of Ethernet. That is also why Gigabit Ethernet introduced jumbo frames (just over 9K). They found that the CPUs could not keep up with the network bandwidth when they had to be interrupted for each 1500-byte packet.

What is interesting now is that some NIC manufacturers are allowing the IP stack to send down a datagram as big as 65K (or is it 32K?) and the NIC will actually fragment it at the MAC layer into 1500-byte frames. This allows even more offloading of CPU work to the NIC. It freaked me out the first time I saw a trace where the IP datagram was 50K, especially since it was on 100 Mbps Ethernet.

Cool - thanks giltjr. :)  (I was pretty close! 10 bytes more or less ;)  ).
giltjr and pseduocyber are correct.

You said "data is transmitted quicker."
It all depends on your connection speed. You can send many smaller data packets in the time you can send one large data packet.

100 ten-KB packets or 10 one-hundred-KB packets = the same total amount of data.

If you have an unreliable connection, though, where packets get dropped regularly, you would want to transmit many smaller packets, because losing one packet would not cause a significant amount of data to be lost.
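A small sketch of that last point, using the 1,000 KB total from the example above: the total is the same either way, but the cost of one dropped packet is not.

```python
TOTAL_KB = 1000  # same total data in both scenarios

def loss_cost_pct(packet_kb, total_kb=TOTAL_KB):
    """Percent of the transfer that must be resent if one packet is dropped."""
    return packet_kb / total_kb * 100

for count, size_kb in [(100, 10), (10, 100)]:
    assert count * size_kb == TOTAL_KB  # identical totals, as noted above
    print(f"{count} packets of {size_kb} KB: one drop costs "
          f"{loss_cost_pct(size_kb):.0f}% of the transfer")
```

On a lossy link, the small-packet case resends 10 KB per drop instead of 100 KB, which is the reliability advantage the original question mentioned.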