4

I have a 500x1 cell array where each row contains a specific word. How can I count the number of occurrences of each word, display the counts, and also display the percentage for each word?

For example:

The occurrences of these words would be:

Ans =

     200 Green
     200 Red
     100 Blue

The percentages of these words:

Ans = 

     40% Green
     40% Red
     20% Blue

4 Answers

5

The idea is that strcmpi compares cell arrays element-wise. This can be used to compare the input names against the unique names found in the input. Try the code below.

% generate some input
input={'green','red','green','red','blue'}';

% find the unique elements in the input
uniqueNames=unique(input)';

% use string comparison ignoring the case
occurrences=strcmpi(input(:,ones(1,length(uniqueNames))),uniqueNames(ones(length(input),1),:));

% count the occurrences
counts=sum(occurrences,1);

%pretty printing
for i=1:length(counts)
    disp([uniqueNames{i} ': ' num2str(counts(i))])
end

I'll leave the percentage calculation to you.
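
For completeness, a minimal sketch of that percentage step, reusing counts and uniqueNames from the code above (the output format here is just one possibility):

% sketch: percentage of each unique word
percentages = counts / sum(counts) * 100;
for i = 1:length(counts)
    disp([uniqueNames{i} ': ' num2str(percentages(i)) '%'])
end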

Answered 2012-07-19T07:25:08.643
1

First, find the unique words in the data:

% set up sample data:
data = [{'red'}; {'green'}; {'blue'}; {'blue'}; {'blue'}; {'red'}; {'red'}; {'green'}; {'red'}; {'blue'}; {'red'}; {'green'}; {'green'}; ]
uniqwords = unique(data);

Then find the occurrences of these unique words in the data:

[~,uniq_id]=ismember(data,uniqwords);

Then simply count how many times each unique word was found:

uniq_word_num = arrayfun(@(x) sum(uniq_id==x),1:numel(uniqwords));

To get the percentages, divide by the total number of data samples:

uniq_word_perc = uniq_word_num/numel(data)
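
If you also want output formatted like in the question, here is a small sketch of a display loop over the results above (the exact formatting is an assumption):

% sketch: print each word with its count and percentage
for k = 1:numel(uniqwords)
    fprintf('%d (%.0f%%) %s\n', uniq_word_num(k), 100*uniq_word_perc(k), uniqwords{k});
end
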
Answered 2012-07-19T07:30:16.903
0

Here is my solution, which should be fast.

% example input
example = 'This is an example corpus. Is is a verb?';
words = regexp(example, ' ', 'split');

% the counting itself: results end up in vocabulary and counts (the input is a cell array called words)
vocabulary = unique(words);
n = length(vocabulary);
counts = zeros(n, 1);
for i=1:n
    counts(i) = sum(strcmpi(words, vocabulary{i}));
end

%process results
[val, idx]=max(counts);
most_frequent_word = vocabulary{idx};

%percentages:
percentages=counts/sum(counts);
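
To list every word ordered by frequency rather than only the single most frequent one, one possible extension (just a sketch building on the variables above):

% sketch: all words sorted by descending count, with percentages
[sorted_counts, order] = sort(counts, 'descend');
for i = 1:n
    fprintf('%s: %d (%.1f%%)\n', vocabulary{order(i)}, sorted_counts(i), 100*percentages(order(i)));
end
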
Answered 2012-11-27T20:52:26.047
0

A tricky way of doing it without explicit for loops:

clc
close all
clear all

Paragraph=lower(fileread('Temp1.txt'));

AlphabetFlag=Paragraph>=97 & Paragraph<=122;  % flag lowercase letters a-z

DelimFlag=find(AlphabetFlag==0); % treat non-alphabetic characters as delimiters
WordLength=[DelimFlag(1), diff(DelimFlag)];
Paragraph(DelimFlag)=[]; % remove the delimiters
Words=mat2cell(Paragraph, 1, WordLength-1); % cut the paragraph into words

[SortWords, Ia, Ic]=unique(Words);  % find the unique words and their indices

Bincounts = histc(Ic,1:size(Ia, 1)); % count the occurrences of each unique word
[SortBincounts, IndBincounts]=sort(Bincounts, 'descend'); % sort counts in descending order

FreqWords=SortWords(IndBincounts); % words ordered by descending frequency
FreqWords(1)=[];SortBincounts(1)=[]; % drop the empty 'word' produced by consecutive delimiters

Freq=SortBincounts/sum(SortBincounts)*100; % frequency percentage

%% plot
NMostCommon=20;
disp(Freq(1:NMostCommon))
pie([Freq(1:NMostCommon); 100-sum(Freq(1:NMostCommon))], [FreqWords(1:NMostCommon), {'other words'}]);
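
The same unique/histc counting step also works directly on a cell array of words like the one in the question, so the file reading and splitting above can be skipped. A minimal sketch (the sample data here is made up):

% sketch: counts and percentages for an existing cell array of words
Words={'Green'; 'Red'; 'Green'; 'Blue'; 'Red'}; % hypothetical data standing in for the 500x1 array
[SortWords, ~, Ic]=unique(lower(Words)); % unique words and their indices
Bincounts=histc(Ic, 1:numel(SortWords)); % occurrences of each unique word
Freq=Bincounts/sum(Bincounts)*100; % percentage of each word
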
Answered 2014-03-03T09:08:11.947