
I am improving a script that lists duplicate files, which I wrote last year (see the second script if you follow the link).

The record separator of the output file duplicated.log is the zero byte instead of the newline character \n. Example:

$> tr '\0' '\n' < duplicated.log
         12      dir1/index.htm
         12      dir2/index.htm
         12      dir3/index.htm
         12      dir4/index.htm
         12      dir5/index.htm

         32      dir6/video.m4v
         32      dir7/video.m4v

(In this example, the five files dir1/index.htm, ... and dir5/index.htm have the same md5sum and a size of 12 bytes. The two other files dir6/video.m4v and dir7/video.m4v also share an md5sum, with a content size (du) of 32 bytes.)

Because each line is terminated by a zero byte (\0) instead of a newline character (\n), a blank line is represented as two consecutive zero bytes (\0\0).

I use the zero byte as the line separator because path names may contain newline characters.
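To make the boundaries concrete, here is a throwaway illustration (sample strings, not the real log) of how tr '\0' '\n' reveals the record and group separators:

```shell
# Records end in a zero byte; the empty record between two groups
# means two consecutive zero bytes, which tr renders as a blank line.
printf 'grp1-a\0grp1-b\0\0grp2-a\0' | tr '\0' '\n'
# prints:
#   grp1-a
#   grp1-b
#   (blank line)
#   grp2-a
```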

However, doing so I face this problem:
how to "grep" all the duplicates of a specified file from duplicated.log?
(e.g. how to retrieve the duplicates of dir1/index.htm?)

I need:

$> ./youranswer.sh  "dir1/index.htm"  < duplicated.log | tr '\0' '\n'
         12      dir1/index.htm 
         12      dir2/index.htm 
         12      dir3/index.htm 
         12      dir4/index.htm 
         12      dir5/index.htm 
$> ./youranswer.sh  "dir4/index.htm"  < duplicated.log | tr '\0' '\n'
         12      dir1/index.htm 
         12      dir2/index.htm 
         12      dir3/index.htm 
         12      dir4/index.htm 
         12      dir5/index.htm 
$> ./youranswer.sh  "dir7/video.m4v"  < duplicated.log | tr '\0' '\n'
         32      dir6/video.m4v 
         32      dir7/video.m4v 

I was thinking of something like:

awk 'BEGIN { RS="\0\0" } #input record separator is double zero byte 
     /filepath/ { print $0 }' duplicated.log  

...but filepath may contain slashes (/) and many other symbols (quotes, newlines, ...).

I may have to use perl to handle this case...

I am open to any suggestions, questions, other ideas...
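For what it's worth, one possible shape of youranswer.sh in plain bash (a sketch, not the author's script; print_group is a hypothetical helper): it reads the NUL-separated log on stdin, and matches the path literally with a substring test, so slashes, quotes and newlines in the name need no regex escaping.

```shell
#!/bin/bash
# Hypothetical sketch of youranswer.sh
# usage: ./youranswer.sh "dir1/index.htm" < duplicated.log
print_group() {
  local target=$1 found=0 rec
  local -a recs=()
  while IFS= read -r -d '' rec; do          # read one \0-terminated record
    if [[ -z $rec ]]; then                  # empty record = group boundary (\0\0)
      (( found )) && printf '%s\0' "${recs[@]}"
      recs=(); found=0
    else
      recs+=("$rec")
      [[ $rec == *"$target"* ]] && found=1  # literal substring match, no regex
    fi
  done
  (( found )) && printf '%s\0' "${recs[@]}" # last group may lack a trailing \0\0
}
print_group "$1"
```

The substring test also matches names that merely contain the argument; it could be tightened (e.g. by anchoring on the separator before the path) if that ever matters.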


2 Answers


You are almost there: use the match operator ~

awk -v RS='\0\0' -v pattern="dir1/index.htm" '$0~pattern' duplicated.log
Answered 2013-07-22T14:46:03.360

I just realized that I can search using the md5sum instead of the path name, because in my new version of the script I keep the md5sum information.

This is the new format I am now using:

$> tr '\0' '\n' < duplicated.log
     12      89e8a208e5f06c65e6448ddeb40ad879 dir1/index.htm 
     12      89e8a208e5f06c65e6448ddeb40ad879 dir2/index.htm 
     12      89e8a208e5f06c65e6448ddeb40ad879 dir3/index.htm 
     12      89e8a208e5f06c65e6448ddeb40ad879 dir4/index.htm 
     12      89e8a208e5f06c65e6448ddeb40ad879 dir5/index.htm 

     32      fc191f86efabfca83a94d33aad2f87b4 dir6/video.m4v 
     32      fc191f86efabfca83a94d33aad2f87b4 dir7/video.m4v

Both gawk and nawk give the desired result:

$> awk 'BEGIN { RS="\0\0" } 
   /89e8a208e5f06c65e6448ddeb40ad879/ { print $0 }' duplicated.log | 
   tr '\0' '\n'
     12      89e8a208e5f06c65e6448ddeb40ad879 dir1/index.htm 
     12      89e8a208e5f06c65e6448ddeb40ad879 dir2/index.htm 
     12      89e8a208e5f06c65e6448ddeb40ad879 dir3/index.htm 
     12      89e8a208e5f06c65e6448ddeb40ad879 dir4/index.htm 
     12      89e8a208e5f06c65e6448ddeb40ad879 dir5/index.htm 

But I am still open to your answers :-)
(the current answer is just a workaround)


For the curious, here below is the new (awful) script under construction...

#!/bin/bash

fifo=$(mktemp -u) 
fif2=$(mktemp -u)
dups=$(mktemp -u)
dirs=$(mktemp -u)
menu=$(mktemp -u)
numb=$(mktemp -u)
list=$(mktemp -u)

mkfifo $fifo $fif2


# run processing in background
find . -type f -printf '%11s %P\0' |  #print size and filename
tee $fifo |                           #write in fifo for dialog progressbox
grep -vzZ '^          0 ' |           #ignore empty files
LC_ALL=C sort -z |                    #sort by size
uniq -Dzw11 |                         #keep files having same size
while IFS= read -r -d '' line
do                                    #for each file compute md5sum
  echo -en "${line:0:11}" "\t" $(md5sum "${line:12}") "\0"
                                      #file size + md5sum + file name, null-terminated instead of '\n'
done |                                #keep the duplicates (same md5sum)
tee $fif2 |
uniq -zs12 -w46 --all-repeated=separate | 
tee $dups  |
#xargs -d '\n' du -sb 2<&- |          #retrieve size of each file
gawk '
function tgmkb(size) { 
  if(size<1024) return int(size)    ; size/=1024; 
  if(size<1024) return int(size) "K"; size/=1024;
  if(size<1024) return int(size) "M"; size/=1024;
  if(size<1024) return int(size) "G"; size/=1024;
                return int(size) "T"; }
function dirname (path)
      { if(sub(/\/[^\/]*$/, "", path)) return path; else return "."; }
BEGIN { RS=ORS="\0" }
!/^$/ { sz=substr($0,0,11); name=substr($0,48); dir=dirname(name); sizes[dir]+=sz; files[dir]++ }
END   { for(dir in sizes) print tgmkb(sizes[dir]) "\t(" files[dir] "\tfiles)\t" dir }' |
LC_ALL=C sort -zrshk1 > $dirs &
pid=$!


tr '\0' '\n' <$fifo |
dialog --title "Collecting files having same size..."    --no-shadow --no-lines --progressbox $(tput lines) $(tput cols)


tr '\0' '\n' <$fif2 |
dialog --title "Computing MD5 sum" --no-shadow --no-lines --progressbox $(tput lines) $(tput cols)


wait $pid
DUPLICATES=$( grep -zac -v '^$' $dups) #total number of files concerned
UNIQUES=$(    grep -zac    '^$' $dups) #number of files, if all redundant are removed
DIRECTORIES=$(grep -zac     .   $dirs) #number of directories concerned
lins=$(tput lines)
cols=$(tput cols)
cat > $menu <<EOF
--no-shadow 
--no-lines 
--hline "After selection of the directory, you will choose the redundant files you want to remove"
--menu  "There are $DUPLICATES duplicated files within $DIRECTORIES directories.\nThese duplicated files represent $UNIQUES unique files.\nChoose directory to proceed redundant file removal:"
$lins 
$cols
$DIRECTORIES
EOF
tr '\n"' "_'" < $dirs |
gawk 'BEGIN { RS="\0" } { print FNR " \"" $0 "\" " }' >> $menu

dialog --file $menu 2> $numb
[[ $? -eq 1 ]] && exit
set -x
dir=$( grep -zam"$(< $numb)" . $dirs | tac -s'\0' | grep -zam1 . | cut -f4- )
md5=$( grep -zam"$(< $numb)" . $dirs | tac -s'\0' | grep -zam1 . | cut -f2  )

grep -zao "$dir/[^/]*$" "$dups" | 
while IFS= read -r -d '' line
do
  file="${line:47}"
  awk 'BEGIN { RS="\0\0" } '"/$md5/"' { print $0 }' "$dups" >> $list   #read from $dups, not the loop's stdin
done

echo -e "
fifo $fifo \t dups $dups \t menu $menu
fif2 $fif2 \t dirs $dirs \t numb $numb \t list $list"

#rm -f $fifo $fif2 $dups $dirs $menu $numb
Answered 2013-07-22T08:05:14.410