What I have: ~100 txt files, each with 9 columns and >100,000 rows. What I want: one combined file with only 2 columns but all the rows, which should then become output with >100,000 columns and 2 rows.
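To illustrate with made-up probe IDs and call codes, the first two columns of one input file look like this:

Probe.ID    Call.Codes
AX-100001   AA
AX-100002   BB
...

and, transposed, that single file becomes the 2-row layout I'm after (the combined output then stacks one such call row per input file):

Probe.ID     AX-100001   AX-100002   ...
file1.txt    AA          BB          ...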
I wrote the function below to work systematically through the files in a folder, pull out the data I want, and then, after each file, join the result onto an original template.
The problem: this runs fine on my small test files, but when I try it on the full-size files I run into memory-allocation problems. My 8 GB of RAM isn't enough, and I suspect part of that is down to how I wrote the code.
My question: is there a way to loop through the files and then join them all in a single step at the end, to save processing time (and memory)? I've put a rough sketch of what I mean below, after my current code.
Also, if this is the wrong place for this kind of thing, what would be a better forum for getting input on work-in-progress code?
## Script to pull in genotype txt files, transpose them, delete commented
## and header rows, and then put the files together.
library(plyr)  # for join()

## Define function
Process_Combine_Genotype_Files <- function(
  inputdirectory = "Rdocs/test", outputdirectory = "Rdocs/test",
  template = "Rdocs/test/template.txt",
  filetype = ".txt", vars = ""
){
  ## List the files in the directory & put together their paths
  ## (note: pattern takes a regex, so "\\.txt$" rather than the glob "*.txt")
  filenames <- list.files(path = inputdirectory, pattern = "\\.txt$")
  path <- paste(inputdirectory, filenames, sep = "/")
  combined_data <- read.table(template, header = TRUE, sep = "\t")
  ## for-loop: for every file in the directory, do the following
  for (file in path){
    ## Use the bare file name (path stripped) to label this file's column
    currentfilename <- basename(file)
    ## Read genotype txt file as a data.frame
    data <- read.table(file, header = TRUE, sep = "\t", fill = TRUE)
    ## Subset just the first two columns (Probe ID & Call Codes)
    ## -- will need to modify this for Genotype calls....
    data.calls <- data[, 1:2]
    ## Change column names & row names
    colnames(data.calls) <- c("Probe.ID", currentfilename)
    row.names(data.calls) <- data[, 1]
    ## Join this file onto the accumulated data.frame (matches on Probe.ID)
    combined_data <- join(combined_data, data.calls, type = "full")
  ## End for loop
  }
  ## Transpose so probes become columns and files become rows
  combined_transposed_data <- t(combined_data)
  print(combined_transposed_data[-1, -1])
  outputfile <- paste(outputdirectory, "Genotypes_combined.txt", sep = "/")
  write.table(combined_transposed_data[-1, -1], outputfile, sep = "\t")
## End function
}
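What I'm imagining is something like the sketch below (untested; the name Combine_Genotypes_At_End is just a placeholder). It assumes every file really has 9 columns and lists the same probes in the same order, so instead of join()-ing the growing table on every iteration I read only the two columns I need from each file, collect the call columns in a list, and combine everything in one step at the end:

Combine_Genotypes_At_End <- function(
  inputdirectory = "Rdocs/test", outputdirectory = "Rdocs/test"
){
  filenames <- list.files(path = inputdirectory, pattern = "\\.txt$")
  path <- paste(inputdirectory, filenames, sep = "/")

  ## colClasses = "NULL" tells read.table to skip a column entirely,
  ## so only the 2 columns I need (out of 9) are ever held in memory
  keep2 <- c("character", "character", rep("NULL", 7))

  ## Probe IDs are read once, from the first file
  probes <- read.table(path[1], header = TRUE, sep = "\t", fill = TRUE,
                       colClasses = keep2)[, 1]

  ## Collect one call-code vector per file in a list (no repeated joins)
  calls <- lapply(path, function(file) {
    read.table(file, header = TRUE, sep = "\t", fill = TRUE,
               colClasses = keep2)[, 2]
  })
  names(calls) <- basename(path)

  ## Single combine step at the very end: one column per file, then transpose
  combined <- do.call(cbind, calls)
  rownames(combined) <- probes
  outputfile <- paste(outputdirectory, "Genotypes_combined.txt", sep = "/")
  write.table(t(combined), outputfile, sep = "\t")
}

If the probe order can actually differ between files, I'd still need a join, but doing it once over a full list of data frames at the end (e.g. with plyr::join_all()) seems like it should beat re-copying the ever-growing table a hundred times. Does that sound right?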
Thanks in advance.