
I have to run an analysis on scientific papers published in a list of more than 20,000 journals. My list has over 450,000 records, but with several duplicates (e.g.: a paper with more than one author from different institutions appears more than once).

Well, I need to count the number of distinct papers per journal, but the problem is that different authors do not always provide the information in exactly the same way, so I can end up with a table like this:

JOURNAL          PAPER
0001-1231        A PRE-TEST FOR FACTORING BIVARIATE POLYNOMIALS WITH COEFFICIENTS
0001-1231        A PRETEST FOR FACTORING BIVARIATE POLYNOMIALS WITH COEFFICIENTS
0001-1231        THE P3 INFECTION TIME IS W[1]-HARD PARAMETERIZED BY THE TREEWIDTH
0001-1231        THE P3 INFECTION TIME IS W-HARD PARAMETERIZED BY THE TREEWIDTH
0001-1231        COMPOSITIONAL AND LOCAL LIVELOCK ANALYSIS FOR CSP
0001-1231        COMPOSITIONAL AND LOCAL LIVELOCK ANALYSIS FOR CSP
0001-1231        AIDING EXPLORATORY TESTING WITH PRUNED GUI MODELS
0001-1231        DECYCLING WITH A MATCHING
0001-1231        DECYCLING WITH A MATCHING
0001-1231        DECYCLING WITH A MATCHING
0001-1231        DECYCLING WITH A MATCHING.
0001-1231        DECYCLING WITH A MATCHING
0001-1231        ON THE HARDNESS OF FINDING THE GEODETIC NUMBER OF A SUBCUBIC GRAPH
0001-1231        ON THE HARDNESS OF FINDING THE GEODETIC NUMBER OF A SUBCUBIC GRAPH.
0001-1232        DECISION TREE CLASSIFICATION WITH BOUNDED NUMBER OF ERRORS
0001-1232        AN INCREMENTAL LINEAR-TIME LEARNING ALGORITHM FOR THE OPTIMUM-PATH
0001-1232        AN INCREMENTAL LINEAR-TIME LEARNING ALGORITHM FOR THE OPTIMUM-PATH 
0001-1232        COOPERATIVE CAPACITATED FACILITY LOCATION GAMES
0001-1232        OPTIMAL SUFFIX SORTING AND LCP ARRAY CONSTRUCTION FOR ALPHABETS
0001-1232        FAST MODULAR REDUCTION AND SQUARING IN GF (2 M )
0001-1232        FAST MODULAR REDUCTION AND SQUARING IN GF (2 M)
0001-1232        ON THE GEODETIC NUMBER OF COMPLEMENTARY PRISMS
0001-1232        DESIGNING MICROTISSUE BIOASSEMBLIES FOR SKELETAL REGENERATION
0001-1232        GOVERNANCE OF BRAZILIAN PUBLIC ENVIRONMENTAL FUNDS: ILLEGAL ALLOCATION
0001-1232        GOVERNANCE OF BRAZILIAN PUBLIC ENVIRONMENTAL FUNDS: ILLEGAL ALLOCATION
0001-1232        GOVERNANCE OF BRAZILIAN PUBLIC ENVIRONMENTAL FUNDS - ILLEGAL ALLOCATION

My goal is to use something like:

data %>%
  distinct(JOURNAL, PAPER) %>%
  group_by(JOURNAL) %>%
  mutate(papers_in_journal = n())

So I would have information like this:

JOURNAL      papers_in_journal
0001-1231    6
0001-1232    7

The problem is that, as you can see, there are some errors in the names of the published papers. Some have a period at the end; some have extra spaces or substituted symbols; some have other minor variations, such as W[1]-HARD vs. W-HARD. So if I run the code as is, what I get is:

JOURNAL      papers_in_journal
0001-1231    10
0001-1232    10

My question: is there any way to take a similarity margin into account when using distinct() or a similar command, so that I could have something like distinct(JOURNAL, PAPER %within% 0.95)?

In that sense, I would like the command to consider:

A PRE-TEST FOR FACTORING BIVARIATE POLYNOMIALS WITH COEFFICIENTS
=
A PRETEST FOR FACTORING BIVARIATE POLYNOMIALS WITH COEFFICIENTS

THE P3 INFECTION TIME IS W[1]-HARD PARAMETERIZED BY THE TREEWIDTH
=
THE P3 INFECTION TIME IS W-HARD PARAMETERIZED BY THE TREEWIDTH

DECYCLING WITH A MATCHING
=
DECYCLING WITH A MATCHING.

etc.

I imagine there is no solution as simple as using distinct(), and I couldn't find any alternative command that does this. So if it is not possible, I would also appreciate any suggestion of a disambiguation algorithm I could use.

Thank you.


2 Answers


One option is to use agrep with lapply to find the indices of journal articles that are less than 10% dissimilar (the default max.distance in agrep), take the first index of each match with sapply, and wrap it all in unique and tapply to get the number of "distinct" articles within each journal.

  tapply(data$PAPER, data$JOURNAL, FUN=function(x) {
      length(unique(sapply(lapply(x, function(y) agrep(y, x)), "[", 1)))
  })

# 0001-1231 0001-1232 
#         6         8 

For a dplyr version, which returns the result in a nicer format, I put the above code into a function and then applied it per journal with group_by() and summarise().

dissimilar <- function(x, distance=0.1) {
  length(unique(sapply(lapply(x, function(y) 
     agrep(y, x, max.distance = distance) ), "[", 1)))
}

The distance argument defaults to 0.1 (10%), in line with the default max.distance in agrep.
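For illustration only (this small demo is mine, not part of the original answer): agrep returns the indices of the elements that approximately contain the pattern, which is why two near-identical titles collapse onto the same first index.

titles <- c("DECYCLING WITH A MATCHING",
            "DECYCLING WITH A MATCHING.",
            "ON THE HARDNESS OF FINDING THE GEODETIC NUMBER OF A SUBCUBIC GRAPH")

# indices of elements within 10% edit distance of the pattern
agrep("DECYCLING WITH A MATCHING", titles, max.distance = 0.1)
# [1] 1 2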

library(dplyr)

data %>%
  group_by(JOURNAL) %>%
  summarise(n=dissimilar(PAPER))

# A tibble: 2 x 2
  JOURNAL       n
  <chr>     <int>
1 0001-1231     6
2 0001-1232     8

However, with a larger dataset, such as one containing thousands of journals and 450,000+ articles, the above is rather slow (about 10-15 minutes on my 2.50GHz Intel). I realised that the dissimilar function was unnecessarily comparing every row with every other row, which makes little sense. Ideally, each row should only be compared with itself and all remaining rows. For example, the first journal contains 5 very similar articles in rows 8-12; a single call to agrep at row 8 returns all 5 indices, so there is no need to compare rows 9-12 against anything else. So I replaced lapply with a for loop, and for the 450,000-row dataset the process now takes only 2-3 minutes.

dissimilar <- function(x, distance=0.1) {
  lst <- list()               # initialise the list
  k <- 1:length(x)            # k is the index of PAPERS to compare with
  for(i in k){                # i = each PAPER, k = itself and all remaining
    lst[[i]] <- agrep(x[i], x[k], max.distance = distance) + i - 1 
                              # + i - 1 ensures that the original index in x is maintained
    k <- k[!k %in% lst[[i]]]  # remove elements which are similar
  }
  lst <- sapply(lst, "[", 1)  # take only the first of each item in the list
  length(na.omit(lst))        # count number of elements
}

Now expand the original example dataset so that there are 450,000 records containing roughly 18,000 journals, each with around 25 articles.

n <- 450000
data2 <- do.call("rbind", replicate(round(n/26), data, simplify=FALSE))[1:n,]
data2$JOURNAL[27:n] <- rep(paste0("0002-", seq(1, n/25)), each=25)[1:(n-26)]

data2 %>%
  group_by(JOURNAL) %>%
  summarise(n=dissimilar(PAPER))

# A tibble: 18,001 x 2
   JOURNAL        n
   <chr>      <int>
 1 0001-1231      6 # <-- Same
 2 0001-1232      8
 3 0002-1        14
 4 0002-10       14
 5 0002-100      14
 6 0002-1000     13
 7 0002-10000    14
 8 0002-10001    14
 9 0002-10002    14
10 0002-10003    14

# ... with 17,991 more rows

The challenge is to find a way to speed this process up even further.
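One further idea (an assumption on my part, not benchmarked on the full dataset): cheaply normalise the titles first, so that trivial variants such as trailing periods, punctuation swaps, or doubled spaces collapse via unique(), leaving agrep to handle only the genuinely fuzzy cases. normalise_title and dissimilar_fast are hypothetical helper names.

# Sketch only: shrink each journal's title vector before the agrep loop
normalise_title <- function(x) {
  x <- toupper(x)
  x <- gsub("[[:punct:]]", " ", x)  # drop periods, brackets, dashes, etc.
  x <- gsub("\\s+", " ", x)         # collapse repeated whitespace
  trimws(x)
}

dissimilar_fast <- function(x, distance = 0.1) {
  # exact duplicates disappear before the expensive pairwise agrep comparisons
  dissimilar(unique(normalise_title(x)), distance = distance)
}

data2 %>%
  group_by(JOURNAL) %>%
  summarise(n = dissimilar_fast(PAPER))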

Answered 2020-04-06T14:18:46.573

You will want to use a package designed for natural language processing. Try the quanteda package.
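A minimal sketch of how that could look (my own reading of the suggestion, not code from this answer): treat each title as a document, build a document-feature matrix, and merge titles whose cosine similarity exceeds a threshold. count_distinct_titles and the 0.9 threshold are illustrative choices, and textstat_simil lives in the companion quanteda.textstats package in recent versions.

library(quanteda)
library(quanteda.textstats)

# Sketch: count titles per journal after merging near-duplicates by cosine similarity
count_distinct_titles <- function(titles, threshold = 0.9) {
  dfm_titles <- dfm(tokens(titles))                     # bag-of-words per title
  sims <- as.matrix(textstat_simil(dfm_titles, method = "cosine"))
  first_match <- apply(sims > threshold, 1, which.max)  # first title each one resembles
  length(unique(first_match))
}

tapply(data$PAPER, data$JOURNAL, count_distinct_titles)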

Answered 2020-04-06T14:22:59.770