The Maximum Segment Size (MSS) is an important consideration when troubleshooting connectivity across the Internet or an intranet. As packets are routed between two hosts, they must traverse multiple routers along the path. In a perfect world, each TCP segment passes through every router without being fragmented; if a segment is too large for any router along the path, it must be fragmented. For most end users, the MSS is set automatically by the operating system, but as you will see in this article, a router or firewall can modify the packet and change the MSS. The MSS is easily calculated from just a few known variables, and you can use this information to predict what packets will look like when captured with a tool such as tcpdump or Wireshark.
A few weeks ago I encountered a very interesting issue while capturing data with tcpdump in my lab for a research project I was working on: the effects of IPv4 packet exchange between two hosts. What I observed was that the MSS in the original packet transmitted from Host A to Host B was being modified. I expected the MSS of the initial SYN packet from Host A to be 1460 and that of the SYN/ACK packet from Host B to be 1460. This, however, was not the case: I observed an MSS of 1350 on the initial SYN packet from Host A when sending to Host B over TCP on port 80:
1. Host A sends a SYN with an MSS of 1460
2. Host B receives the SYN with an MSS of 1350
If you do not already know how these values are calculated and used, I will explain the basics. One would think these values are negotiated between the two hosts; in fact they are not negotiated, they are announced. In one of my earlier articles I briefly covered MTU and MSS; per RFC 879, the MSS is announced in every packet that has the SYN bit set. The MSS is the largest chunk of data that a host will send to another host on a network over TCP. For optimum performance, the bytes in the segment plus the headers should not add up to more than the Maximum Transmission Unit (MTU). The MTU is the maximum amount of data that can be carried in a single layer 2 frame (not accounting for layer 2 overhead); in the vast majority of cases the MTU is 1500 bytes (Ethernet, for example, where the layer 2 PDU is called a frame).
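The standard calculation can be sketched in a few lines. This is a minimal sketch assuming a plain Ethernet path with no IP or TCP options in the SYN; the constant names are my own:

```python
# Deriving the typical MSS for an Ethernet path.
# Assumes a 20-byte IPv4 header and a 20-byte TCP header (no options).

ETHERNET_MTU = 1500   # typical Ethernet MTU in bytes
IPV4_HEADER = 20      # IPv4 header without options
TCP_HEADER = 20       # TCP header without options

mss = ETHERNET_MTU - IPV4_HEADER - TCP_HEADER
print(mss)  # 1460
```

This is where the familiar 1460 comes from, and why I expected to see it in both SYN packets.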
Also, I would like to add that the data segments were actually 1338 bytes in the tcpdump output, which made it even harder to figure out what was going on. I eventually realized that with the MSS at 1350 and TCP timestamps in use, the timestamps option consumes 12 bytes per segment, leaving 1338 bytes of data (1350 - 12 = 1338). Working backwards, an MSS of 1350 implies an MTU of 1390 (1350 plus 20 bytes of IPv4 header and 20 bytes of TCP header).
However, looking at the MTU on all interfaces, every interface was configured for 1500 bytes. So, like I said, it was a little interesting.
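The arithmetic above can be sketched as follows. This only reproduces the numbers from the capture; the 12-byte figure is the on-the-wire size of the TCP timestamps option (10 bytes of option data padded to a 4-byte boundary):

```python
# Working backwards from the numbers observed in the tcpdump capture.

advertised_mss = 1350     # MSS seen in the SYN after the firewall
timestamps_option = 12    # TCP timestamps option, padded to 4 bytes

payload = advertised_mss - timestamps_option
print(payload)            # 1338, matching the data segments in the capture

# If 1350 had been derived from a link MTU, that MTU would have been:
implied_mtu = advertised_mss + 20 + 20   # IPv4 header + TCP header
print(implied_mtu)        # 1390
```

That implied 1390 is what made the 1500-byte interface MTUs so puzzling.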
The default gateway for Host A was a Juniper SSG5, so I moved my attention there. After reviewing the config, I could not find anything obvious (I searched the config for 1390 and 1350, but nothing was configured with those values). Going through the config line by line, however, I did find the line 'set flow tcp-mss'. I focused all my attention on that command because no value was defined; typically, when a command is entered without a value, a default is used, and it was very possible that 1350 was the default for this command. Lo and behold, per the Juniper KB, when this command is used with no value defined, the default is 1350. Also, 'set flow tcp-mss' is on by default!
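For reference, the relevant ScreenOS commands look roughly like the following. This is a sketch from memory of the ScreenOS CLI; verify the exact syntax and defaults against the Juniper KB for your firmware version:

```
set flow tcp-mss            # enable MSS rewriting (default value: 1350)
set flow tcp-mss 1460       # enable MSS rewriting with an explicit value
unset flow tcp-mss          # disable MSS rewriting
```

Setting an explicit value (or unsetting the command, where appropriate) is how you would restore the MSS you expect to see in captures.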
There are many reasons to manipulate the MSS of a packet; however, I was not using any technology other than Ethernet that would require the MSS to be modified. Not that the additional 110 bytes would have helped me much, but I knew what I expected to see in the packet capture, and the discrepancy threw me off a bit. I thought I would share my experience!