I've run into some strange drake behavior that I can't figure out. I'm trying to add an .Rmd file to my drake plan. I'm working on a remote machine with a network drive mounted on that machine. If I try to add the .Rmd file to my plan like this:
> library(drake)
> library(rmarkdown)
>
> list.files()
[1] "drake_testing.Rproj" "foo.png" "report.Rmd"
>
> plan <- drake_plan(
+ png("foo.png"),
+ plot(iris$Sepal.Length ~ iris$Sepal.Width),
+ dev.off(),
+ report = render(
+ input = knitr_in("report.Rmd"),
+ output_file = "report.html",
+ quiet = TRUE
+ )
+
+ )
>
> plan
# A tibble: 4 x 2
target command
<chr> <expr>
1 drake_target_1 png("foo.png")
2 drake_target_2 plot(iris$Sepal.Length ~ iris$Sepal.Width)
3 drake_target_3 dev.off()
4 report render(input = knitr_in("report.Rmd"), output_file = "report.html", quiet = TRUE)
>
> ## Turn your plan into a set of instructions
> config <- drake_config(plan)
Error: The specified file is not readable: report.Rmd
>
> traceback()
13: stop(txt, obj, call. = FALSE)
12: .errorhandler("The specified file is not readable: ", object,
mode = errormode)
11: digest::digest(object = file, algo = config$hash_algorithm, file = TRUE,
serialize = FALSE)
10: rehash_file(file, config)
9: rehash_storage(target = target, file = file, config = config)
8: FUN(X[[i]], ...)
7: lapply(X = X, FUN = FUN, ...)
6: weak_mclapply(X = keys, FUN = FUN, mc.cores = jobs, ...)
5: lightly_parallelize_atomic(X = X, FUN = FUN, jobs = jobs, ...)
4: lightly_parallelize(X = knitr_files, FUN = storage_hash, jobs = config$jobs,
config = config)
3: cdl_get_knitr_hash(config)
2: create_drake_layout(plan = plan, envir = envir, verbose = verbose,
jobs = jobs_preprocess, console_log_file = console_log_file,
trigger = trigger, cache = cache)
1: drake_config(plan)
I've tried the following permutations to get this working (rough sketches follow the list):

- Moving the .Rmd to a local drive and calling it with the full path.
- Adding file.path() both inside and outside knitr_in() to build the full path.
- Trying file_in() with each of the scenarios above.
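For reference, the second and third variants looked roughly like this; the network path below is only a placeholder, not the real location on my machine:

library(drake)
library(rmarkdown)

## Variant: build the full path with file.path() inside knitr_in()
## ("//server/share/project" stands in for the actual network-drive path)
plan <- drake_plan(
  report = render(
    input = knitr_in(file.path("//server/share/project", "report.Rmd")),
    output_file = "report.html",
    quiet = TRUE
  )
)

## Variant: the same command with file_in() instead of knitr_in()
plan <- drake_plan(
  report = render(
    input = file_in(file.path("//server/share/project", "report.Rmd")),
    output_file = "report.html",
    quiet = TRUE
  )
)

All of these end in the same "not readable" error at drake_config().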
I've also tried debugging, but I get a bit lost once drake converts the file names to hashes and then back to the file basenames (i.e. report.Rmd). The error is ultimately raised when digest::digest is called.
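In case it helps, here is roughly the call from the bottom of the traceback, run outside of drake; the algorithm is just an example, drake passes whatever config$hash_algorithm is set to:

## Hash the file the same way drake does in rehash_file()
## ("xxhash64" is an assumption about the default algorithm)
digest::digest(object = "report.Rmd", algo = "xxhash64",
               file = TRUE, serialize = FALSE)

If the same "The specified file is not readable" error appears here, the problem would seem to be in how digest (or the network file system) reports readability rather than in drake's plan handling.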
Does anyone have experience figuring out something like this?