
I'm just getting started with snakemake and I'm wondering what the "correct" way is to run a set of parameters on the same file, and how that would work for chained rules?

So, for example, when I want to try multiple normalization methods, followed by, say, a clustering rule with varying numbers of k clusters: what is the best way to run all the combinations?

I started out doing this:

INFILES = ["mytable"]

rule preprocess:
    input:
        bam=expand("data/{sample}.csv", sample=INFILES, param=config["normmethod"])
    output:
        bamo=expand("results/{sample}_pp_{param}.csv", sample=INFILES, param=config["normmethod"])
    script:
        "scripts/preprocess.py"

and then invoked the script via:

snakemake --config normmethod=Median

But this doesn't really scale to more options later in the workflow. For example, how would I automatically combine these sets of options?

normmethods= ["Median", "Quantile"]
kclusters= [1,3,5,7,10]

3 Answers


You did well using the expand() function in your rule.

For the parameters, I recommend using a configuration file containing them all. Snakemake works with both YAML and JSON files.

In your case, you just have to write the following in a YAML file:

INFILES: "mytables"

normmethods: ["Median", "Quantile"]
# or, equivalently, in block style:
normmethods:
  - "Median"
  - "Quantile"

kclusters: [1, 3, 5, 7, 10]
# or, equivalently, in block style:
kclusters:
  - 1
  - 3
  - 5
  - 7
  - 10

Then write your rule like this:

rule preprocess:
    input:
        bam = expand("data/{sample}.csv",
                     sample=config["INFILES"])
    params:
        kcluster = config["kclusters"]
    output:
        bamo = expand("results/{sample}_pp_{method}_{cluster}.csv",
                      sample=config["INFILES"],
                      method=config["normmethods"],
                      cluster=config["kclusters"])
    script:
        # script: takes only a path; the script reads its inputs and
        # params from the `snakemake` object that Snakemake injects
        "scripts/preprocess.py"

Then you just have to launch it like this:

snakemake --configfile path/to/config.yml

To run with other parameters, you then only have to modify your configuration file instead of your Snakefile (fewer errors), which is also better for readability and code clarity.
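For instance, to try a different parameter sweep you could keep a second config file (the file name here is hypothetical) and point Snakemake at it:

# config_quantile.yml -- hypothetical variant config
INFILES: "mytables"
normmethods: ["Quantile"]
kclusters: [3, 5]

snakemake --configfile path/to/config_quantile.yml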

Edit:

rule preprocess:
    input:
        bam = "data/{sample}.csv"

Just to correct my own mistake: you don't need expand() on the input here, because you want to run the rule on each .csv file one by one. Just put the wildcard there and Snakemake will do its job; a sketch of the whole corrected setup follows.
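Putting that correction together, a minimal sketch of how the pieces could fit, assuming the config file above (the aggregating rule all is added here for illustration and is not part of the original answer):

rule all:
    input:
        expand("results/{sample}_pp_{method}_{cluster}.csv",
               sample=config["INFILES"],
               method=config["normmethods"],
               cluster=config["kclusters"])

rule preprocess:
    input:
        bam = "data/{sample}.csv"
    output:
        bamo = "results/{sample}_pp_{method}_{cluster}.csv"
    script:
        # the script reads snakemake.wildcards.method and
        # snakemake.wildcards.cluster for its per-job parameters
        "scripts/preprocess.py"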

Answered 2017-02-13T16:42:17.477

It seems you are not passing the parameters to your script. How about something like the following?

import re
import os
import glob

normmethods = ["Median", "Quantile"]  # can be set from config['normmethods']
kclusters = [1, 3, 5, 7, 10]          # can be set from config['kclusters']

# Build the full target list: one result file per
# (input file, normalization method, k) combination.
INFILES = ['results/' + re.sub(r'\.csv$', '_pp_' + m + '-' + str(k) + '.csv',
                               re.sub('data/', '', file))
           for file in glob.glob("data/*.csv")
           for m in normmethods
           for k in kclusters]

rule cluster:
    input: INFILES

rule preprocess:
    input:
        bam="data/{sample}.csv"
    output:
        bamo="results/{sample}_pp_{m}-{k}.csv"
    run:
        os.system("scripts/preprocess.py %s %s %s %s"
                  % (input.bam, output.bamo, wildcards.m, wildcards.k))
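For completeness, a minimal sketch of what scripts/preprocess.py could look like under this calling convention (the argument order follows the os.system call above; the processing body is a placeholder, not part of the original answer):

#!/usr/bin/env python
# usage: preprocess.py <input.csv> <output.csv> <method> <k>
import sys

infile, outfile, method, k = sys.argv[1], sys.argv[2], sys.argv[3], int(sys.argv[4])

with open(infile) as fin, open(outfile, "w") as fout:
    # placeholder: apply `method` normalization with k=`k` here
    fout.write(fin.read())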
Answered 2017-01-20T17:54:36.740

This answer is similar to @Shiping's answer in that it uses wildcards in a rule's output to implement multiple parameters per input file. However, this answer provides a more detailed example and avoids complex list comprehensions, regular expressions, and the glob module.

@Pereira Hugo's approach uses one job to run all parameter combinations for one input file, whereas the approach in this answer uses one job per parameter combination per input file, which makes it easier to parallelize the parameter combinations for each input file (see the note on cores after the Snakefile below).

Snakefile

import os

data_dir = 'data'
sample_fns = os.listdir(data_dir)
sample_pfxes = list(map(lambda p: p[:p.rfind('.')],
                        sample_fns))

res_dir = 'results'

params1 = [1, 2]
params2 = ['a', 'b', 'c']

rule all:
    input:
        expand(os.path.join(res_dir, '{sample}_p1_{param1}_p2_{param2}.csv'),
               sample=sample_pfxes, param1=params1, param2=params2)

rule preprocess:
    input:
        csv=os.path.join(data_dir, '{sample}.csv')

    output:
        csv=os.path.join(res_dir, '{sample}_p1_{param1}_p2_{param2}.csv')

    shell:
        "ls {input.csv} && \
           echo P1: {wildcards.param1}, P2: {wildcards.param2} > {output.csv}"
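Because each parameter combination is a separate job here, the jobs can be fanned out across cores, e.g. (4 cores chosen arbitrarily):

$ snakemake -p --cores 4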

Directory structure before running snakemake:

$ tree .
.
├── Snakefile
├── data
│   ├── sample_1.csv
│   ├── sample_2.csv
│   └── sample_3.csv
└── results

Running snakemake:

$ snakemake -p
Building DAG of jobs...
Using shell: /bin/bash
Provided cores: 1
Rules claiming more threads will be scaled down.
Job counts:
    count   jobs
    1   all
    18  preprocess
    19

rule preprocess:
    input: data/sample_1.csv
    output: results/sample_1_p1_2_p2_a.csv
    jobid: 1
    wildcards: param2=a, sample=sample_1, param1=2

ls data/sample_1.csv &&          echo P1: 2, P2: a > results/sample_1_p1_2_p2_a.csv
data/sample_1.csv
Finished job 1.
1 of 19 steps (5%) done

rule preprocess:
    input: data/sample_2.csv
    output: results/sample_2_p1_2_p2_a.csv
    jobid: 2
    wildcards: param2=a, sample=sample_2, param1=2

ls data/sample_2.csv &&          echo P1: 2, P2: a > results/sample_2_p1_2_p2_a.csv
data/sample_2.csv
Finished job 2.
2 of 19 steps (11%) done

...

localrule all:
    input: results/sample_1_p1_1_p2_a.csv, results/sample_1_p1_2_p2_a.csv, results/sample_2_p1_1_p2_a.csv, results/sample_2_p1_2_p2_a.csv, results/sample_3_p1_1_p2_a.csv, results/sample_3_p1_2_p2_a.csv, results/sample_1_p1_1_p2_b.csv, results/sample_1_p1_2_p2_b.csv, results/sample_2_p1_1_p2_b.csv, results/sample_2_p1_2_p2_b.csv, results/sample_3_p1_1_p2_b.csv, results/sample_3_p1_2_p2_b.csv, results/sample_1_p1_1_p2_c.csv, results/sample_1_p1_2_p2_c.csv, results/sample_2_p1_1_p2_c.csv, results/sample_2_p1_2_p2_c.csv, results/sample_3_p1_1_p2_c.csv, results/sample_3_p1_2_p2_c.csv
    jobid: 0

Finished job 0.
19 of 19 steps (100%) done

Directory structure after running snakemake:

$ tree .
.
├── Snakefile
├── data
│   ├── sample_1.csv
│   ├── sample_2.csv
│   └── sample_3.csv
└── results
    ├── sample_1_p1_1_p2_a.csv
    ├── sample_1_p1_1_p2_b.csv
    ├── sample_1_p1_1_p2_c.csv
    ├── sample_1_p1_2_p2_a.csv
    ├── sample_1_p1_2_p2_b.csv
    ├── sample_1_p1_2_p2_c.csv
    ├── sample_2_p1_1_p2_a.csv
    ├── sample_2_p1_1_p2_b.csv
    ├── sample_2_p1_1_p2_c.csv
    ├── sample_2_p1_2_p2_a.csv
    ├── sample_2_p1_2_p2_b.csv
    ├── sample_2_p1_2_p2_c.csv
    ├── sample_3_p1_1_p2_a.csv
    ├── sample_3_p1_1_p2_b.csv
    ├── sample_3_p1_1_p2_c.csv
    ├── sample_3_p1_2_p2_a.csv
    ├── sample_3_p1_2_p2_b.csv
    └── sample_3_p1_2_p2_c.csv

Sample result:

$ cat results/sample_2_p1_1_p2_a.csv
P1: 1, P2: a
Answered 2018-06-05T23:18:32.590