I have a script that reads data from a CSV file into a data.table and then splits the text in one column into several new columns. I am currently using the lapply and strsplit functions to do this. Here's an example:
library("data.table")
dt = data.table(PREFIX = c("A_B", "A_C", "A_D", "B_A", "B_C", "B_D"),
                VALUE = 1:6)
# split PREFIX into new columns
dt$PX = as.character(lapply(strsplit(as.character(dt$PREFIX), split = "_"), "[", 1))
dt$PY = as.character(lapply(strsplit(as.character(dt$PREFIX), split = "_"), "[", 2))
dt
# PREFIX VALUE PX PY
# 1: A_B 1 A B
# 2: A_C 2 A C
# 3: A_D 3 A D
# 4: B_A 4 B A
# 5: B_C 5 B C
# 6: B_D 6 B D
In the example above, the column PREFIX is split into two new columns, PX and PY, on the "_" character.
Even though this works just fine, I was wondering if there is a better (more efficient) way to do this using data.table. My real datasets have 10M+ rows, so time and memory efficiency become really important.
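One alternative I'm aware of is data.table's own tstrsplit (this assumes data.table 1.9.6 or later), which transposes the strsplit result so both pieces can be assigned by reference in a single call. A minimal sketch, not yet benchmarked at this scale:
# assumes data.table >= 1.9.6 for tstrsplit
# fixed = TRUE treats "_" as a literal separator and skips the regex engine
dt[, c("PX", "PY") := tstrsplit(PREFIX, "_", fixed = TRUE)]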
UPDATE:
Following @Frank's suggestion, I created a larger test case and used the suggested commands, but stringr::str_split_fixed takes a lot longer than the original method.
library("data.table")
library("stringr")
system.time({
  df = data.table(PREFIX = rep(c("A_B", "A_C", "A_D", "B_A", "B_C", "B_D"), 1000000),
                  VALUE = rep(1:6, 1000000))
  dt = data.table(df)
})
# user system elapsed
# 0.682 0.075 0.758
system.time({ dt[, c("PX", "PY") := data.table(str_split_fixed(PREFIX, "_", 2))] })
# user system elapsed
# 738.283 3.103 741.674
rm(dt)
system.time({
  df = data.table(PREFIX = rep(c("A_B", "A_C", "A_D", "B_A", "B_C", "B_D"), 1000000),
                  VALUE = rep(1:6, 1000000))
  dt = as.data.table(df)
})
# user system elapsed
# 0.123 0.000 0.123
# split PREFIX into new columns
system.time({
  dt$PX = as.character(lapply(strsplit(as.character(dt$PREFIX), split = "_"), "[", 1))
  dt$PY = as.character(lapply(strsplit(as.character(dt$PREFIX), split = "_"), "[", 2))
})
# user system elapsed
# 33.185 0.000 33.191
So the str_split_fixed method takes about 20 times longer.
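If I end up staying with the strsplit approach, a possible refinement (sketched below, untested at this scale) is to split only once instead of twice and to pass fixed = TRUE so the separator is matched literally rather than as a regex:
# split once and reuse the result; fixed = TRUE avoids the regex engine
parts = strsplit(as.character(dt$PREFIX), split = "_", fixed = TRUE)
dt$PX = sapply(parts, "[", 1)  # first piece of each split
dt$PY = sapply(parts, "[", 2)  # second piece of each split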