
I have a file like this:

This is a file with many words.
Some of the words appear more than once.
Some of the words only appear one time.

I want to generate a two-column list. The first column shows the words that appear, and the second column shows how often they appear, for example:

this@1
is@1
a@1
file@1
with@1
many@1
words@3
some@2
of@2
the@2
only@1
appear@2
more@1
than@1
one@1
once@1
time@1 
  • To make this job simpler, I will remove all punctuation and change all the text to lowercase before processing the list.
  • Unless there is a simple solution, words and word can count as two separate words.

So far I have this:

sed -i "s/ /\n/g" ./file1.txt # put all words on a new line
while read line
do
     count="$(grep -c $line file1.txt)"
     echo $line"@"$count >> file2.txt # add word and frequency to file
done < ./file1.txt
sort -u -d # remove duplicate lines

For some reason, this only shows "0" after each word.

How can I generate a list of every word that appears in the file, along with its frequency?


12 Answers


Not sed and grep, but tr, sort, uniq, and awk:

% (tr ' ' '\n' | sort | uniq -c | awk '{print $2"@"$1}') <<EOF
This is a file with many words.
Some of the words appear more than once.
Some of the words only appear one time.
EOF

a@1
appear@2
file@1
is@1
many@1
more@1
of@2
once.@1
one@1
only@1
Some@2
than@1
the@2
This@1
time.@1
with@1
words@2
words.@1

In most cases you also want to remove digits and punctuation, convert everything to lowercase (otherwise "THE", "The" and "the" would be counted separately), and suppress an entry for zero-length words. For ASCII text you can do all of that with this modified command:

sed -e  's/[^A-Za-z]/ /g' text.txt | tr 'A-Z' 'a-z' | tr ' ' '\n' | grep -v '^$'| sort | uniq -c | sort -rn
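If you also want the exact word@count format the question asked for, a final awk step can reformat uniq's output (a sketch; text.txt stands in for your input file, and the trailing sort -rn is dropped so the result comes out alphabetically, as sort left it):

```shell
# Strip non-letters, lowercase, one word per line, drop blank lines,
# count with uniq -c, then turn each "count word" pair into word@count.
sed -e 's/[^A-Za-z]/ /g' text.txt | tr 'A-Z' 'a-z' | tr ' ' '\n' |
  grep -v '^$' | sort | uniq -c | awk '{print $2 "@" $1}'
```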
Answered 2012-05-11T14:05:35.470

uniq -c already does what you want; just sort the input first:

echo 'a s d s d a s d s a a d d s a s d d s a' | tr ' ' '\n' | sort | uniq -c

Output:

  6 a
  7 d
  7 s
Answered 2014-10-02T22:37:17.390

You can use tr for this ('\12' is the octal escape for a newline); just run

tr ' ' '\12' <NAME_OF_FILE| sort | uniq -c | sort -nr > result.txt

Sample output for a text file of city names:

3026 Toronto
2006 Montréal
1117 Edmonton
1048 Calgary
905 Ottawa
724 Winnipeg
673 Vancouver
495 Brampton
489 Mississauga
482 London
467 Hamilton
Answered 2018-11-26T12:21:17.493

Contents of the input file:

$ cat inputFile.txt
This is a file with many words.
Some of the words appear more than once.
Some of the words only appear one time.

Using sed | sort | uniq:

$ sed 's/\.//g;s/\(.*\)/\L\1/;s/\ /\n/g' inputFile.txt | sort | uniq -c
      1 a
      2 appear
      1 file
      1 is
      1 many
      1 more
      2 of
      1 once
      1 one
      1 only
      2 some
      1 than
      2 the
      1 this
      1 time
      1 with
      3 words

uniq -ic would count while ignoring case, but the resulting list would contain This instead of this.
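Lowercasing with tr before sorting avoids that, at the cost of one more pipeline stage (a sketch using the same inputFile.txt, assuming GNU sed for the \n in the replacement):

```shell
# Lowercase everything first so "This" and "this" collapse into one entry,
# then count as before.
sed 's/\.//g; s/ /\n/g' inputFile.txt | tr '[:upper:]' '[:lower:]' | sort | uniq -c
```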

Answered 2012-05-13T15:54:45.757

Let's use AWK!

This function lists the frequency of each word occurring in the supplied file, in descending order:

function wordfrequency() {
  awk '
     BEGIN { FS="[^a-zA-Z]+" } {
         for (i=1; i<=NF; i++) {
             word = tolower($i)
             words[word]++
         }
     }
     END {
         for (w in words)
              printf("%3d %s\n", words[w], w)
     } ' | sort -rn
}

You can call it on your file like this:

$ cat your_file.txt | wordfrequency

Source: AWK-ward Ruby

Answered 2014-12-15T22:58:00.023

This might work for you:

tr '[:upper:]' '[:lower:]' <file |
tr -d '[:punct:]' |
tr -s ' ' '\n' | 
sort |
uniq -c |
sed 's/ *\([0-9]*\) \(.*\)/\2@\1/'
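Applied to the question's sample text, that pipeline produces the requested word@count format (a sketch; here the text is fed in via printf instead of a file):

```shell
# Lowercase, delete punctuation, split on spaces, count, then
# rewrite uniq's "count word" lines as word@count.
printf 'This is a file with many words.\nSome of the words appear more than once.\nSome of the words only appear one time.\n' |
tr '[:upper:]' '[:lower:]' |
tr -d '[:punct:]' |
tr -s ' ' '\n' |
sort |
uniq -c |
sed 's/ *\([0-9]*\) \(.*\)/\2@\1/'
```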
Answered 2012-05-11T14:49:30.217

Let's do it in Python 3!

"""Counts the frequency of each word in the given text; words are defined as
entities separated by whitespaces; punctuations and other symbols are ignored;
case-insensitive; input can be passed through stdin or through a file specified
as an argument; prints highest frequency words first"""

# Case-insensitive
# Ignore punctuations `~!@#$%^&*()_-+={}[]\|:;"'<>,.?/

import sys

# Find if input is being given through stdin or from a file
lines = None
if len(sys.argv) == 1:
    lines = sys.stdin
else:
    lines = open(sys.argv[1])

D = {}
for line in lines:
    for word in line.split():
        word = ''.join(list(filter(
            lambda ch: ch not in "`~!@#$%^&*()_-+={}[]\\|:;\"'<>,.?/",
            word)))
        word = word.lower()
        if word in D:
            D[word] += 1
        else:
            D[word] = 1

for word in sorted(D, key=D.get, reverse=True):
    print(word + ' ' + str(D[word]))

Let's name this script "frequency.py" and add a line to "~/.bash_aliases":

alias freq="python3 /path/to/frequency.py"

Now to find the word frequencies in the file "content.txt", you can run:

freq content.txt

You can also pipe output to it:

cat content.txt | freq

And even analyze text from multiple files:

cat content.txt story.txt article.txt | freq

If you are using Python 2, just replace:

  • ''.join(list(filter(args...))) with filter(args...)
  • python3 with python
  • print(whatever) with print whatever
Answered 2016-05-21T07:41:29.423

GNU AWK (gawk) is required for the sorting. If you have another AWK without asort(), it can easily be adjusted and then piped to sort.

awk '{gsub(/\./, ""); for (i = 1; i <= NF; i++) {w = tolower($i); count[w]++; words[w] = w}} END {qty = asort(words); for (w = 1; w <= qty; w++) print words[w] "@" count[words[w]]}' inputfile

Split across multiple lines:

awk '{
    gsub(/\./, ""); 
    for (i = 1; i <= NF; i++) {
        w = tolower($i); 
        count[w]++; 
        words[w] = w
    }
} 
END {
    qty = asort(words); 
    for (w = 1; w <= qty; w++)
        print words[w] "@" count[words[w]]
}' inputfile
Answered 2012-05-12T01:49:08.220

If I have the following text in my file.txt:

This is line number one
This is Line Number Tow
this is Line Number tow

I can find the frequency of each word using the following command:

 cat file.txt | tr ' ' '\n' | sort | uniq -c

Output:

  3 is
  1 line
  2 Line
  1 number
  2 Number
  1 one
  1 this
  2 This
  1 tow
  1 Tow
Answered 2020-08-15T09:25:11.620

This is a more complicated task. We need to take at least the following into account:

  • Removing the punctuation: "sky." is not the same as "sky" (or is it?).
  • Earth is different from earth, God from god, and Moon from moon, yet The and the are considered the same. So whether to lowercase the words at all is debatable.
  • We have to take the BOM character into account:
$ file the-king-james-bible.txt 
the-king-james-bible.txt: UTF-8 Unicode (with BOM) text

The BOM is the first metacharacter in the file. If it is not removed, it can wrongly affect one word's count.
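A quick way to check whether a file actually starts with a BOM is to dump its first three bytes (a sketch; head -c and od are assumed available, and the file name is the one from the example above):

```shell
# Print the first three bytes as hex; "ef bb bf" means a UTF-8 BOM is present.
head -c 3 the-king-james-bible.txt | od -An -tx1
```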

Below is a solution using AWK.

    {  

        if (NR == 1) { 
            sub(/^\xef\xbb\xbf/,"")
        }

        gsub(/[,;!()*:?.]*/, "")
    
        for (i = 1; i <= NF; i++) {
    
            if ($i ~ /^[0-9]/) { 
                continue
            }
    
            w = $i
            words[w]++
        }
    } 
    
    END {
    
        for (idx in words) {
    
            print idx, words[idx]
        }
    }

It removes the BOM character and strips the punctuation. It does not lowercase the words. Also, since the program was written to count the words of the Bible, it skips all verse numbers (the if condition with continue).

$ awk -f word_freq.awk the-king-james-bible.txt > bible_words.txt

We run the program and write the output to a file.

$ sort -nr -k 2 bible_words.txt | head
the 62103
and 38848
of 34478
to 13400
And 12846
that 12576
in 12331
shall 9760
he 9665
unto 8942

Using sort and head, we can find the ten most frequent words in the Bible.

Answered 2021-08-05T12:29:50.907
#!/usr/bin/env bash

declare -A map
words="$1"

[[ -f $words ]] || { echo "usage: $(basename "$0") wordfile"; exit 1 ;}

while read -r line; do
  for word in $line; do
    ((map[$word]++))
  done
done < "$words"

for key in "${!map[@]}"; do
  echo "the word $key appears ${map[$key]} times"
done | sort -nr -k5
Answered 2018-02-18T14:23:35.927
awk 'BEGIN { word[""] = 0 }
{
    for (el = 1; el <= NF; ++el)
        word[$el]++
}
END {
    for (i in word)
        if (i != "")
            print word[i], i
}' file.txt | sort -nr
Answered 2019-02-20T14:42:51.357