I had to write a Bash script today to delete duplicate files, identified by their MD5 hashes. I stored each hash as an empty marker file in a temporary directory:
for i in * ; do
    hash=$(md5sum "$i" | cut -d " " -f1)
    if [ -f "/tmp/hashes/$hash" ] ; then
        echo "Deleted $i"
        mv "$i" /tmp/deleted
    else
        touch "/tmp/hashes/$hash"
    fi
done
It worked perfectly, but it led me to wonder: is this a time-efficient way of doing it? I initially thought of storing the MD5 hashes in a single file, but then I thought "no, because checking whether a given MD5 is in that file requires re-reading it entirely every time". Now I wonder: is it the same with the "create files in a directory" method? Does the Bash [ -f ] check have linear, or quasi-constant, complexity when there are lots of files in the same directory?
If it depends on the filesystem, what's the complexity on tmpfs?
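For reference, here is a sketch of the in-memory alternative I also considered, using a Bash associative array instead of marker files. Array lookups are hash-table based, so each membership check should be roughly constant-time regardless of how many hashes have been seen. This assumes Bash 4+ and that /tmp/deleted already exists:

```shell
#!/usr/bin/env bash
# Sketch: dedup via a Bash associative array (hash-table lookups)
# instead of one marker file per hash. Assumes Bash 4+ for declare -A
# and that the /tmp/deleted directory exists.
declare -A seen
for i in * ; do
    [ -f "$i" ] || continue            # skip non-regular files
    hash=$(md5sum "$i" | cut -d " " -f1)
    if [ -n "${seen[$hash]}" ] ; then  # hash already recorded: duplicate
        echo "Deleted $i"
        mv "$i" /tmp/deleted
    else
        seen[$hash]=1                  # first occurrence: remember it
    fi
done
```

The trade-off is that the set of seen hashes lives only as long as the script, whereas the marker-file directory persists between runs.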