I’m currently writing a program that needs to compare each file in an ArrayList of variable size. Right now, the way I’m doing this is with a nested loop:
    if (tempList.size() > 1) {
        for (int i = 0; i < tempList.size(); i++) {
            // Nested loops. I should feel dirty?
            for (int j = i + 1; j < tempList.size(); j++) {
                // Gets sorted.
                System.out.println(checkBytes(tempList.get(i), tempList.get(j)));
            }
        }
    }
I’ve read a few differing opinions on the necessity of nested loops, and I was wondering if anyone had a more efficient alternative.
At a glance, every pairwise comparison is going to need to be done either way, so performance should be much the same regardless, but I’m moderately convinced there’s a cleaner way to do this. Any pointers?
EDIT: This is only part of the function, for clarity. The files have already been compared and put into buckets based on length. After walking the map of buckets and finding one that holds more than one file, it runs this, so these are all files of the same size. I will also be doing a checksum comparison before I get to bytes, but right now I’m just trying to clean up the loop.
Also, holy cow this site responds fast. Thanks, guys.
EDIT 2: Sorry, for further clarification: I think I’ve got a decent grasp on the file-handling part. First I compare and sort by length, then by checksum, then by bytes. The issue I have is how to properly deal with needing to compare all files in the ArrayList efficiently, assuming they all need to be compared. If a nested loop is sufficient for this, that’s cool; I just wanted to check that this is a suitable method, convention-wise.
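For context, the length-bucketing step I’m describing looks roughly like this (simplified; the names are illustrative rather than my exact code):

    import java.io.File;
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Bucket every file by its length; only a bucket holding more than
    // one file can possibly contain duplicates.
    Map<Long, List<File>> bucketsBySize = new HashMap<Long, List<File>>();
    for (File f : allFiles) {
        List<File> bucket = bucketsBySize.get(f.length());
        if (bucket == null) {
            bucket = new ArrayList<File>();
            bucketsBySize.put(f.length(), bucket);
        }
        bucket.add(f);
    }
    // tempList in the snippet above is any one bucket with size() > 1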
My answer to your EDIT 2 question is in two parts.
The first part is that if you have a small number of files, then your nested loop approach should be fine. The performance is O(N**2) and the optimal solution is O(N). However, if N is small enough it won’t make much difference which approach you use. You only need to consider an alternative solution if you are sure that N can be large.

The second part spells out an algorithm that exploits file hashes to get an O(N) solution for detecting duplicates. This is what the previous answers alluded to.

1. Create a FileHash class to represent file hash values. This needs to define equals(Object) and hashCode() methods that implement byte-wise equality of the file hashes.

2. Create a HashMap<FileHash, List<File>> map instance.

3. For each File in your input ArrayList:

3.1. Calculate the FileHash object for it.

3.2. Look up the FileHash in the map.

3.3. If you found an entry, compare the current file byte-wise against each file already in that entry’s list; if none matches, add the current file to the list. If you found no entry, create one that maps the FileHash to a list containing just the current file.

(Note that the map above is really a multi-map, and that there are 3rd-party implementations available; e.g. in Apache Commons Collections and Google Collections. I’ve presented the algorithm in the form above for the sake of simplicity. A sketch of these steps follows below.)
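A minimal sketch of steps 1 through 3, assuming an MD5 digest via java.security.MessageDigest and reusing your checkBytes(File, File) (which I’ve assumed returns a boolean; adjust if yours returns something else):

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;
    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Step 1: a key type whose equals/hashCode give byte-wise equality
    // of the underlying digests.
    class FileHash {
        private final byte[] digest;

        FileHash(byte[] digest) {
            this.digest = digest;
        }

        @Override
        public boolean equals(Object obj) {
            return obj instanceof FileHash
                    && Arrays.equals(digest, ((FileHash) obj).digest);
        }

        @Override
        public int hashCode() {
            return Arrays.hashCode(digest);
        }

        // Digest the whole file. MD5 is used here for illustration;
        // any reasonable hash function will do.
        static FileHash of(File file) throws IOException {
            try {
                MessageDigest md = MessageDigest.getInstance("MD5");
                try (InputStream in = new FileInputStream(file)) {
                    byte[] buf = new byte[8192];
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        md.update(buf, 0, n);
                    }
                }
                return new FileHash(md.digest());
            } catch (NoSuchAlgorithmException e) {
                throw new AssertionError(e); // MD5 is always present
            }
        }
    }

    // Steps 2 and 3: a single pass over the input list, inside a
    // method that declares throws IOException.
    Map<FileHash, List<File>> map = new HashMap<FileHash, List<File>>();
    for (File file : tempList) {
        FileHash hash = FileHash.of(file);      // 3.1
        List<File> sameHash = map.get(hash);    // 3.2
        if (sameHash == null) {                 // 3.3: no entry yet
            sameHash = new ArrayList<File>();
            map.put(hash, sameHash);
        } else {                                // 3.3: byte-wise check
            for (File candidate : sameHash) {
                if (checkBytes(candidate, file)) {
                    System.out.println(candidate + " duplicates " + file);
                }
            }
        }
        sameHash.add(file);
    }

With a good hash, the list found in step 3.3 will almost never hold more than one candidate, so the total work stays close to one hash computation per file.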
Some performance issues:
If you use a good cryptographic hash function to generate your file hashes, then the chances of finding an entry in 3.3 that has more than one element in the list are vanishingly small, and the chances that the byte-wise comparison will report two files with equal hashes as unequal are also vanishingly small. However, the cost of calculating the crypto hash will be greater than the cost of calculating a lower-quality hash.
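To make that trade-off concrete: a lower-quality hash could be something as cheap as an Adler-32 checksum from java.util.zip. This is just a sketch (cheapHash is my name, not a library method), and its long result would need wrapping in a key class just like the digest above:

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.util.zip.Adler32;

    // A fast, low-quality file hash. Collisions are far more likely than
    // with a crypto digest, so the byte-wise comparison does more work.
    static long cheapHash(File file) throws IOException {
        Adler32 checksum = new Adler32();
        try (InputStream in = new FileInputStream(file)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                checksum.update(buf, 0, n);
            }
        }
        return checksum.getValue();
    }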
If you do use a lower-quality hash, you can mitigate the potential cost of comparing more files by looking at the file sizes before you do the byte-wise comparison. If you do that, you can make the map type HashMap<FileHash, List<FileTuple>>, where FileTuple is a class that holds both a File and its length.

You could potentially decrease the cost of hashing by using a hash of just (say) the first block of each file. But that increases the probability that two files may have the same hash but still be different; e.g. in the 2nd block. Whether this is significant depends on the nature of the files. (For example, if you just checksummed the first 256 bytes of a collection of source code files, you could get a huge number of collisions … due to the presence of identical copyright headers!)
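A sketch of that FileTuple idea (again, the names are mine): cache the length once, and let it gate the expensive comparison in step 3.3.

    import java.io.File;

    // Pairs a File with its cached length so that a cheap size check
    // can run before any byte-wise comparison.
    class FileTuple {
        final File file;
        final long length;

        FileTuple(File file) {
            this.file = file;
            this.length = file.length(); // cache once; length() touches the filesystem
        }
    }

    // In step 3.3, sizes are then compared before bytes:
    //     if (candidate.length == current.length
    //             && checkBytes(candidate.file, current.file)) { ... }

The first-block variant is just the hashing loop shown earlier cut short after the first read, at the risk of collisions like the copyright-header case above.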