# How to compare parts of files by hash?

sinned 12/06/2018. 7 answers, 2,751 views

I have one successfully downloaded file and another failed download (only the first 100 MB of a large file), which I suspect is the same file.

To verify this, I'd like to check their hashes, but since I only have a part of the unsuccessfully downloaded file, I only want to hash the first few megabytes or so.

How do I do this?

The OS is Windows, but I have Cygwin and MinGW installed.
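For reference, a minimal sketch of the idea in a Cygwin shell (assuming GNU `head` and `sha256sum` are available; `file1` and `file2` are hypothetical names): truncate each stream to the same length before hashing, so only the common prefix is compared.

```shell
# Hash only the first 100 MB of each file.
# head -c limits the stream; sha256sum hashes whatever it receives.
N=$((100 * 1024 * 1024))
head -c "$N" file1 | sha256sum
head -c "$N" file2 | sha256sum
```

If the two printed hashes match, the first 100 MB of the files are (almost certainly) identical.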

Creating hashes to compare files makes sense if you compare one file against many, or when comparing many files against each other.

It does not make sense when comparing two files only once: The effort to compute the hashes is at least as high as walking over the files and comparing them directly.

An efficient file comparison tool is cmp:

```shell
cmp --bytes $((100 * 1024 * 1024)) file1 file2 && echo "File fragments are identical"
```

You can also combine it with `dd` to compare arbitrary parts (not necessarily from the beginning) of two files, e.g.:

```shell
cmp \
  <(dd if=file1 bs=100M count=1 skip=1 2>/dev/null) \
  <(dd if=file2 bs=100M count=1 skip=1 2>/dev/null) \
  && echo "File fragments are identical"
```

davidbaumann 12/06/2018.

I'm sorry I can't try that exactly, but this approach will work:

```shell
dd if=yourfile.zip of=first100mb1.dat bs=100M count=1
dd if=yourotherfile.zip of=first100mb2.dat bs=100M count=1
```

This will get you the first 100 megabytes of both files. Now get the hashes:

```shell
sha256sum first100mb1.dat && sha256sum first100mb2.dat
```

You can also run it directly:

```shell
dd if=yourfile.zip bs=100M count=1 | sha256sum
dd if=yourotherfile.zip bs=100M count=1 | sha256sum
```

Xen2050 12/06/2018.

You could just directly compare the files with a binary/hex diff program like vbindiff. It quickly compares files up to 4 GB on Linux & Windows. It looks something like this, only with the difference highlighted in red (1B vs 1C):

```
one
0000 0000: 30 5C 72 A7 1B 6D FB FC  08 00 00 00 00 00 00 00  0\r..m.. ........
0000 0010: 00 00 00 00                                       ....
0000 0020:
0000 0030:

two
0000 0000: 30 5C 72 A7 1C 6D FB FC  08 00 00 00 00 00 00 00  0\r..m.. ........
0000 0010: 00 00 00 00                                       ....
0000 0020:
0000 0030:

┌──────────────────────────────────────────────────────────────────────────────┐
│Arrow keys move  F find       RET next difference  ESC quit  T move top       │
│C ASCII/EBCDIC   E edit file  G goto position      Q quit    B move bottom    │
└──────────────────────────────────────────────────────────────────────────────┘
```

Tonny 12/07/2018.

Everybody seems to go the Unix/Linux route with this, but just comparing two files can easily be done with standard Windows commands:

```
FC /B file1 file2
```

FC is present on every Windows NT version ever made, and (if I recall correctly) was also present in DOS. It is a bit slow, but that doesn't matter for one-time use.

Blerg 12/08/2018.

I know it says Bash, but the OP also states that they have Windows. For anyone who wants or requires a Windows solution, there's a program called HxD, a hex editor that can compare two files. If the files are different sizes, it will tell you whether the available parts are the same. And if need be, it's capable of running checksums over whatever is currently selected. It's free and can be downloaded from the HxD website. I don't have any connection to the author(s); I've just been using it for years.

Jim L. 12/12/2018.

`cmp` will tell you when two files are identical up to the length of the smaller file:

```
$ dd if=/dev/random bs=8192 count=8192 > a
8192+0 records in
8192+0 records out
67108864 bytes transferred in 0.514571 secs (130417197 bytes/sec)
$ cp a b
$ dd if=/dev/random bs=8192 count=8192 >> b
8192+0 records in
8192+0 records out
67108864 bytes transferred in 0.512228 secs (131013601 bytes/sec)
$ cmp a b
cmp: EOF on a
```

`cmp` is telling you that the comparison encountered an EOF on file `a` before it detected any difference between the two files.
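That EOF behavior can be demonstrated directly; a sketch with hypothetical file names (`big.bin` is assumed to exist):

```shell
# Make a truncated copy; cmp walks both files and stops at the
# shorter file's EOF. If no byte differed before that point, cmp
# reports EOF on the shorter file and exits non-zero, which means
# the partial file matches a prefix of the full one.
head -c 1000 big.bin > prefix.bin
if cmp big.bin prefix.bin; then
    echo "files are fully identical"
else
    echo "a byte differed, or one file is a prefix of the other; see cmp's message"
fi
```

Note that `cmp` exits non-zero in the EOF case too, so for the downloader's problem the message on stderr (EOF vs. a reported byte difference) is what distinguishes "matching partial file" from "different file".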

user48918 12/07/2018.

If you can access a shell session on the remote system, then you can break the source file up into pieces using the `split` command. To split a big file into (binary) pieces of one million bytes or less each:

```shell
split -b 1000000 bigfile.tgz
```

This will create pieces `xaa`, `xab`, etc. From there it is trivial to concatenate the pieces to reconstruct the file:

```shell
cat x?? > reconstructed_bigfile.tgz
```

Of course you have control over the names of the file components; I am just illustrating the defaults.
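The split-and-reassemble round trip can be checked end to end; a minimal sketch using split's default output names (`bigfile.tgz` is a hypothetical input):

```shell
# Split into 1 MB pieces (default names xaa, xab, ...), reassemble,
# and verify the reconstruction byte-for-byte with cmp.
split -b 1000000 bigfile.tgz
cat x?? > reconstructed_bigfile.tgz
cmp bigfile.tgz reconstructed_bigfile.tgz && echo "Reconstruction is identical"
```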