Sorry for the very newbie question, but I'm fairly new to bash programming (I started a few days ago). Basically, what I want to do is keep one file with all the word occurrences of another file.
I know I can do something like this:
sort | uniq -c | sort
The problem is that after that I want to take a second file, count the occurrences again and update the first one. After that I take a third file, and so on.
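For a single batch, this is the kind of pipeline I have in mind (sample1.txt and sample2.txt are throwaway demo files created just for the example):

```shell
#!/bin/bash
# Demo: count word occurrences across several files in one pass.
printf 'The cat saw the dog\n' > sample1.txt
printf 'the dog ran\n' > sample2.txt

# Split on non-letters, lowercase everything, then count and rank.
cat sample1.txt sample2.txt \
  | tr -cs "A-Za-z'" '\n' \
  | tr 'A-Z' 'a-z' \
  | sort \
  | uniq -c \
  | sort -rn
```

This prints `3 the` on the first line, followed by the other words with their counts.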
What I'm doing right now works without any problem (I'm using grep, sed and awk), but it looks really slow.
I'm pretty sure there is a very efficient way to do it, with a command or so, using uniq, but I can't figure it out.
Can you point me in the right direction?
I'm also pasting the code I wrote:
#!/bin/bash
# count the number of word occurrences from a file and writes to another file #
# the words are listed from the most frequent to the least frequent #
touch .check # used to check the occurrences. Temporary file
touch distribution.txt # final file with all the occurrences calculated
page=$1 # contains the file I'm calculating
occurrences=$2 # temporary file for the occurrences
# takes all the words from the file $page and orders them by occurrences
tr -cs "A-Za-z'" '\n' < "$page" | tr 'A-Z' 'a-z' > .check
# loop to update the old file with the new information
# basically what I do is check word by word and add them to the old file as an update
cat .check | while read -r word # word I'm processing
do
strlen=${#word} # the word's length
# I use a blacklist to skip banned words (for example very small or uninfluential ones, like articles and prepositions)
if ! grep -Fxq "$word" .blacklist && [ "$strlen" -gt 2 ]
then
# if the word was never found before it writes it with 1 occurrence
if [ "$(grep -ci "^$word: " "$occurrences")" -eq 0 ]
then
echo "$word: 1" >> "$occurrences"
# else it calculates the occurrences
else
old=$(awk -v w="$word" -F": " '$1 == w { print $2 }' "$occurrences")
new=$((old + 1))
sed -i "s/^$word: $old$/$word: $new/" "$occurrences"
fi
fi
done
rm .check
# finally it orders the words
awk -F": " '{print $2" "$1}' "$occurrences" | sort -rn | awk '{print $2": "$1}' > distribution.txt
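For comparison, this is roughly the kind of single-pass rewrite I suspect is possible: instead of one grep/sed round-trip per word, awk merges the old "word: count" file with the new page's words in one associative array. It's only a sketch under my own assumptions; page1.txt and distribution.txt are demo files created here, and the blacklist check is left out:

```shell
#!/bin/bash
# Sketch: merge an existing "word: count" distribution with the words of
# a new page in a single awk pass, then re-sort by count.
printf 'hello hello world\n' > page1.txt
printf 'world: 1\n' > distribution.txt # pretend an earlier run saw "world"

tr -cs "A-Za-z'" '\n' < page1.txt \
  | tr 'A-Z' 'a-z' \
  | awk -F': ' '
      FILENAME == "distribution.txt" { count[$1] = $2; next }  # load old totals
      length($0) > 2 { count[$0]++ }                           # add new words, skip short ones
      END { for (w in count) print w ": " count[w] }
    ' distribution.txt - \
  | sort -t: -k2,2rn > distribution.tmp
mv distribution.tmp distribution.txt
cat distribution.txt
```

After this run, distribution.txt contains `hello: 2` and `world: 2`: the old count for "world" was carried over and incremented, and "hello" was added fresh.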