I am implementing a protocol for use in direct modem-to-modem connections.
This is the packet format:
0x7e (start of packet)
<send sequence number, acknowledged sequence, and data area 0 to 512 bytes>
0x7e (end of packet)
Where the meaning of "send sequence number" and "acknowledged sequence number" are exactly as in TCP.
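To make the framing concrete, here is a minimal sketch of frame assembly. The post does not specify the widths of the sequence fields, so one-byte send/ack sequence numbers are an assumption for illustration; CRC and escaping are handled separately.

```python
FLAG = 0x7E  # frame delimiter (start and end of packet)

def build_frame(seq: int, ack: int, data: bytes) -> bytes:
    # Header (send seq, acked seq -- widths assumed) plus 0..512 bytes of data,
    # delimited by 0x7e flags. CRC and byte-stuffing are applied elsewhere.
    assert 0 <= len(data) <= 512
    body = bytes([seq & 0xFF, ack & 0xFF]) + data
    return bytes([FLAG]) + body + bytes([FLAG])
```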
The data is escaped using HDLC conventions (0x7e becomes 0x7d 0x5e, 0x7d becomes 0x7d 0x5d), and escaping is done *after* the CRC calculation, because the CRC bytes themselves are subject to escaping.
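The byte-stuffing described above can be sketched as follows (the escaped byte is the original XORed with 0x20, as in HDLC/PPP framing):

```python
FLAG, ESC = 0x7E, 0x7D

def escape(payload: bytes) -> bytes:
    # Applied after the CRC is appended: 0x7e -> 0x7d 0x5e, 0x7d -> 0x7d 0x5d.
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out += bytes([ESC, b ^ 0x20])
        else:
            out.append(b)
    return bytes(out)

def unescape(stuffed: bytes) -> bytes:
    # Inverse of escape(): 0x7d marks that the next byte was XORed with 0x20.
    out = bytearray()
    pending = False
    for b in stuffed:
        if pending:
            out.append(b ^ 0x20)
            pending = False
        elif b == ESC:
            pending = True
        else:
            out.append(b)
    return bytes(out)
```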
My question is about the most effective way to apply the CRC32. I have two possible approaches:
1. CRC the header and the data area, EXCLUDING the CRC area, then append the CRC32 to the packet. On the receiving end, do the same (checksum all but the CRC area) and do a direct comparison to the appended CRC.
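Approach #1 can be sketched with Python's zlib CRC-32 (little-endian placement of the appended CRC is an assumption here, since the post does not specify byte order):

```python
import struct
import zlib

def append_crc(packet: bytes) -> bytes:
    # Approach #1 transmit side: CRC everything except the CRC area,
    # then append the 32-bit result (least-significant byte first).
    return packet + struct.pack("<I", zlib.crc32(packet))

def check_appended_crc(frame: bytes) -> bool:
    # Receive side: recompute over all but the last 4 bytes and compare.
    body, rx_crc = frame[:-4], struct.unpack("<I", frame[-4:])[0]
    return zlib.crc32(body) == rx_crc
```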
2. CRC the header, the data area, *AND* the CRC (initialized to zero), i.e. checksum the WHOLE packet, including the four zeros where the CRC is located, then overwrite the four zeros with the calculated CRC. On the receiving end, checksum the whole packet, *including* the appended CRC, and look for a "magic CRC" value that will always be the result if the packet has not been corrupted.
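A sketch of the magic-value check, assuming the standard reflected CRC-32 (the one zlib implements). One caveat worth noting: with this CRC, the constant residue appears when the appended word is the CRC of the preceding bytes (the same transmit side as approach #1, appended least-significant byte first), not a CRC computed over a zero-filled field; the magic value for these parameters is 0x2144DF1C.

```python
import struct
import zlib

# Residue of the standard (zlib) CRC-32 when run over data + its own CRC.
CRC32_RESIDUE = 0x2144DF1C

def send_with_crc(packet: bytes) -> bytes:
    # Transmit: append the CRC of the data, LSB first (this CRC is reflected).
    return packet + struct.pack("<I", zlib.crc32(packet))

def check_residue(frame: bytes) -> bool:
    # Approach #2 receive side: CRC the whole frame, appended CRC included,
    # and compare against the constant residue.
    return zlib.crc32(frame) == CRC32_RESIDUE
```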
My first thought was that approach #1 was simpler and just as effective as approach #2. Then I got to thinking about how a CRC detects all possible one-bit errors, all possible two-bit errors, and so on. With approach #2, the full error-detection strength of the CRC covers the CRC field itself. With approach #1, the CRC itself is not "protected" by the CRC.
Is approach #2 stronger than approach #1? Any comments on the strongest approach, or links to online references, would be great. I have already found many web pages on CRC, but none that address checksumming the CRC itself, or whether doing so increases effectiveness.