I have data saved in a long list. This is an example of the first six lines / records:
A <- list(c("JAMES","CHARLES","JAMES","RICHARD"),
c("JOHN","ROBERT","CHARLES"),
c("CHARLES","WILLIAM","CHARLES","MICHAEL","WILLIAM","DAVID","CHARLES","WILLIAM"),
c("CHARLES"),
c("CHARLES","CHARLES"),
c("MATTHEW","CHARLES","JACK"))
Now I would like to calculate the relative frequency with which each unique term occurs in each line / record.
Based on my example I would like to achieve an output similar to this:
[1] "JAMES" 0.5 "CHARLES" 0.25 "RICHARD" 0.25
[2] "JOHN" 0.3333333 "ROBERT" 0.3333333 "CHARLES" 0.3333333
[3] "CHARLES" 0.375 "WILLIAM" 0.375 "MICHAEL" 0.125 "DAVID" 0.125
[4] "CHARLES" 1
[5] "CHARLES" 1
[6] "MATTHEW" 0.3333333 "CHARLES" 0.3333333 "JACK" 0.3333333
Unfortunately, so far I only know how to calculate the relative frequency of one individual term at a time, e.g.:
> sapply(A, function(x) sum(grepl("JAMES", x))) / sapply(A, length)
[1] 0.5 0.0 0.0 0.0 0.0 0.0
My example contains only ten unique terms, but my actual data contains almost 200, so repeating the approach above for every single term wouldn't be feasible. I'm therefore looking for a way to calculate the relative frequencies of all terms in one go.
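To make the per-record part concrete, this is the direction I have been experimenting with, though I am not sure table()/prop.table() is the right tool here or whether it scales well with my real data (which is exactly what I am asking):

rel_freq <- lapply(A, function(x) prop.table(table(x)))  # one named vector of proportions per record
rel_freq[[1]]
# gives, roughly (note the names come out in alphabetical order, not order of appearance):
# CHARLES   JAMES RICHARD
#    0.25    0.50    0.25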
In addition, I would like to sum these relative frequencies for each unique name over all lines / records.
Based on my example above, I would like to achieve an output similar to this:
[1] "JAMES" 0.5
[2] "CHARLES" 3.291667
[3] "RICHARD" 0.25
[4] "JOHN" 0.3333333
[5] "ROBERT" 0.3333333
[6] "WILLIAM" 0.375
[7] "MICHAEL" 0.125
[8] "DAVID" 0.125
[9] "MATTHEW" 0.3333333
[10] "JACK" 0.3333333
Thank you very much in advance for your consideration!