Hello, I need to read and write a set of data on a semi-reliable medium, so when I retrieve a portion of the data I need to do an integrity check.
One way to do this would be to (a) compute a CRC of some sort and write the CRC after the data, and then when I retrieve the data, make sure it is consistent with the CRC. If so, use it; if not, treat it as bad data.
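To make approach (a) concrete, here is a minimal sketch of what I have in mind. The flat `medium` buffer, the `store_with_crc`/`load_with_crc` names, and the choice of CRC-32 are just assumptions for illustration, not a specific requirement:

```c
/* Sketch of approach (a): payload followed by a CRC-32 over the payload.
 * "medium" stands in for whatever the real read/write interface is. */
#include <stdint.h>
#include <stddef.h>
#include <string.h>

static uint32_t crc32(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;                 /* standard reflected CRC-32 */
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 1u) ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
    }
    return ~crc;
}

/* Write the payload, then its CRC right after it. */
void store_with_crc(uint8_t *medium, const uint8_t *payload, size_t len)
{
    uint32_t crc = crc32(payload, len);
    memcpy(medium, payload, len);
    memcpy(medium + len, &crc, sizeof crc);
}

/* Return 1 if a CRC recomputed over the payload matches the stored CRC. */
int load_with_crc(const uint8_t *medium, uint8_t *payload, size_t len)
{
    uint32_t stored;
    memcpy(payload, medium, len);
    memcpy(&stored, medium + len, sizeof stored);
    return crc32(payload, len) == stored;
}
```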
Another way would be to (b) just write the data twice in a row. Retrieve both data items, and check if they are the same.
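And approach (b) would look something like this, again against the same hypothetical flat buffer:

```c
/* Sketch of approach (b): two back-to-back copies; a read is trusted only
 * if both copies compare equal. */
#include <stdint.h>
#include <stddef.h>
#include <string.h>

void store_duplicated(uint8_t *medium, const uint8_t *payload, size_t len)
{
    memcpy(medium, payload, len);           /* first copy  */
    memcpy(medium + len, payload, len);     /* second copy */
}

/* Return 1 if both copies agree; the caller then uses either copy. */
int load_duplicated(const uint8_t *medium, uint8_t *payload, size_t len)
{
    if (memcmp(medium, medium + len, len) != 0)
        return 0;                           /* copies disagree -> bad data */
    memcpy(payload, medium, len);
    return 1;
}
```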
The (b) approach is obviously more expensive in terms of storage space. But assume I don't have much data and don't care about this -- which approach is more reliable, i.e., which has less chance of "bad" data being mistaken for "good" data?
It seems to me that storing a complete second copy of the data (approach b) should be more reliable, simply because fewer verification bits (a CRC is much shorter than the data itself), no matter the method, mean more chance of an error slipping through unnoticed. Or, on the other hand, is there some mathematical wizardry in certain CRC calculations that somehow makes them more reliable than a full duplicate?
Any thoughts or insights are appreciated...