I have a data frame, test:

group userID A_conf A_chall B_conf B_chall
1    220      1       1      1       2     
1    222      4       6      4       4     
2    223      6       5      3       2     
1    224      1       5      4       4    
2    228      4       4      4       4    

The data holds responses from each user (identified by userID), where each user can enter any value between 1 and 6 for two measures:

  • Conf
  • Chall

They can also choose not to respond, which results in NA entries.

The test data frame holds several such columns: A, B, C, D, and so on. The Conf and Chall measures can be reported separately for each of these columns.

I am interested in making the following comparisons:

  • A_conf & A_chall
  • B_conf & B_chall

If the two measures in any of these pairs are equal, a Final counter should be incremented (as shown below). For example, user 228 matches on both the A pair (4 == 4) and the B pair (4 == 4), so Final is 2.

group userID A_conf A_chall B_conf B_chall Final
1    220      1       1      1       2     1
1    222      4       6      4       4     1
2    223      6       5      3       2     0
1    224      1       5      4       4     1
2    228      4       4      4       4     2

I am struggling with the Final counter. What script would help me achieve this?

For reference, the dput of the test data frame is shared below:

  • dput(test):

    structure(list(group = c(1L, 1L, 2L, 1L, 2L),
        userID = c(220L, 222L, 223L, 224L, 228L),
        A_conf = c(1L, 4L, 6L, 1L, 4L),
        A_chall = c(1L, 6L, 5L, 5L, 4L),
        B_conf = c(1L, 4L, 3L, 4L, 4L),
        B_chall = c(2L, 4L, 2L, 4L, 4L)),
        class = "data.frame", row.names = c(NA, -5L))

I tried code like this:

test$Final = as.integer(0)   # add a column to keep counts
count_inc = as.integer(0)    # counter variable to increment in steps of 1

for (i in 1:nrow(test)) {
    count_inc = 0

    if (!is.na(test$A_conf[i] == test$A_chall[i])) {
        count_inc = 1
        test$Final[i] = count_inc
    }#if
    else if (!is.na(test$A_conf[i] != test$A_chall[i])) {
        count_inc = 0
        test$Final[i] = count_inc
    }#else if
}#for

The above code works on just the A_conf and A_chall columns. The problem is that it fills the Final column with all 1s, regardless of whether the values entered (by the users) are equal or not.
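A small check seems to confirm the failure mode: A_conf == A_chall evaluates to TRUE, FALSE, or NA (when a value is missing), so !is.na(...) is TRUE for every complete row and the first branch always fires:

# The comparison itself is fine; wrapping it in !is.na() only tests
# whether both values are present, not whether they are equal.
test$A_conf[3] == test$A_chall[3]           # FALSE: 6 != 5
!is.na(test$A_conf[3] == test$A_chall[3])   # TRUE anyway, so Final becomes 1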

3 Answers

A base R solution, assuming you have an equal number of "conf" and "chall" columns:

#Find indexes of "conf" column
conf_col <- grep("conf", names(test))

#Find indexes of "chall" column
chall_col <- grep("chall", names(test))

#compare element wise and take row wise sum
test$Final <- rowSums(test[conf_col] == test[chall_col])


test
#  group userID A_conf A_chall B_conf B_chall Final
#1     1    220      1       1      1       2     1
#2     1    222      4       6      4       4     1
#3     2    223      6       5      3       2     0
#4     1    224      1       5      4       4     1
#5     2    228      4       4      4       4     2

This could also be done as a one-liner:

rowSums(test[grep("conf", names(test))] == test[grep("chall", names(test))])
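Since users can skip a response, == propagates NA into the row sums. If an NA pair should simply not count as a match (an assumption about the desired behavior), na.rm = TRUE handles it:

#Treat NA responses as "no match" instead of letting NA propagate
test$Final <- rowSums(test[conf_col] == test[chall_col], na.rm = TRUE)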
answered Dec 20, 2018 at 11:33
With the tidyverse you can do:

library(tidyverse)

df %>% #df is the test data, including the Final column added by the attempt above
 select(-Final) %>%
 rowid_to_column() %>% #Creating an unique row ID
 gather(var, val, -c(group, userID, rowid)) %>% #Reshaping the data
 arrange(rowid, var) %>% #Arranging by row ID and by variables
 group_by(rowid) %>% #Grouping by row ID
 mutate(temp = gl(n()/2, 2)) %>% #Creating a grouping variable for different "_chall" and "_conf" variables
 group_by(rowid, temp) %>% #Grouping by row ID and the new grouping variables
 mutate(res = ifelse(val == lag(val), 1, 0)) %>% #Comparing whether the different "_chall" and "_conf" have the same value
 group_by(rowid) %>% #Grouping by row ID
 mutate(res = sum(res, na.rm = TRUE)) %>% #Summing the occurrences of "_chall" and "_conf" being the same
 select(-temp) %>% 
 spread(var, val) %>% #Returning the data to its original form
 ungroup() %>%
 select(-rowid)

  group userID   res A_chall A_conf B_chall B_conf
  <int>  <int> <dbl>   <int>  <int>   <int>  <int>
1     1    220    1.       1      1       2      1
2     1    222    1.       6      4       4      4
3     2    223    0.       5      6       2      3
4     1    224    1.       5      1       4      4
5     2    228    2.       4      4       4      4
answered Dec 20, 2018 at 11:22
You could also try this tidyverse approach. Fewer lines of code compared to the other answers ;)

library(tidyverse)
d %>% #d is the test data frame from the question
  as_tibble() %>% 
  gather(k, v, -group,-userID) %>% 
  separate(k, into = c("letters", "test")) %>% 
  spread(test, v) %>% 
  group_by(userID) %>% 
  mutate(final = sum(chall == conf)) %>% 
  distinct(userID, final) %>% 
  ungroup() %>% 
  right_join(d)
# A tibble: 5 x 7
  userID final group A_conf A_chall B_conf B_chall
   <int> <int> <int>  <int>   <int>  <int>   <int>
1    220     1     1      1       1      1       2
2    222     1     1      4       6      4       4
3    223     0     2      6       5      3       2
4    224     1     1      1       5      4       4
5    228     2     2      4       4      4       4
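For readers on newer versions: gather/spread have since been superseded in tidyr, and a sketch of the same idea with the newer verbs (assuming tidyr >= 1.0 and dplyr >= 1.0, and using the question's test object directly) might look like this:

library(tidyverse)

#Split names like "A_conf" into letter + measure, widen to one row per
#(user, letter) pair, count matching pairs, then join the counts back.
test %>% 
  pivot_longer(-c(group, userID),
               names_to = c("letter", "measure"), names_sep = "_") %>% 
  pivot_wider(names_from = measure, values_from = value) %>% 
  group_by(group, userID) %>% 
  summarise(Final = sum(conf == chall, na.rm = TRUE), .groups = "drop") %>% 
  right_join(test, by = c("group", "userID"))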
answered Dec 20, 2018 at 11:29