A custom application needs to frequently transfer large files over the wire. After using file metadata to decide whether a given file has been modified and needs to be updated on the other side, I want to perform the actual transfer in, say, megabyte-sized fragments, skipping any fragment the other end has already seen during previous transfers, in order to save bandwidth and shorten the operation.

Having both sides compare a known hash for each fragment seems like a starting point. But because I cannot know what the files actually contain (video, binary executable images, photos, regular documents, emails, etc.), I also cannot rule out the possibility that a particular fragment has been modified yet, improbably, still yields the same hash value.

If I were to compare the results of two different hash algorithms, would that be a silver bullet against such false positives, ensuring that no changes were missed in any fragment? That is, could a change that yields the same SHA-256 digest also yield the same MD5 digest for the same changed data? I realize the strict answer is that a simultaneous collision is still theoretically possible, but would this put me sufficiently close to zero risk in practice?
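For concreteness, here is a minimal sketch of the per-fragment hashing I have in mind, in Python. The function name and the fixed 1 MiB fragment size are just placeholders for illustration; each side would compute these digest pairs and compare them to decide which fragments to skip:

```python
import hashlib

FRAGMENT_SIZE = 1024 * 1024  # megabyte-sized fragments, as described above


def fragment_digests(path):
    """Yield (index, sha256_hex, md5_hex) for each fragment of the file.

    A fragment is skipped during transfer when both digests match
    what the receiving side already has on record for that index.
    """
    with open(path, "rb") as f:
        index = 0
        while True:
            chunk = f.read(FRAGMENT_SIZE)
            if not chunk:  # EOF reached
                break
            yield (index,
                   hashlib.sha256(chunk).hexdigest(),
                   hashlib.md5(chunk).hexdigest())
            index += 1
```

The open question is whether comparing both digests per fragment makes an undetected modification negligible enough to ignore.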