
I have a Snakemake rule that runs HDBSCAN clustering. It used to run plain DBSCAN and worked fine, but after I modified it the problems somehow started (I also changed the Snakefile for other reasons, so it is hard to say what exactly caused them). The picture I started seeing is that HDBSCAN runs and produces results for only one file. There is no error; the next rules simply say they are waiting for missing files (files not produced by the rule that runs HDBSCAN). Here is the relevant part of the Snakefile:

configfile: "config.yml"

samples, = glob_wildcards('data_files/normalized/{sample}.hdf5')
rule all:
    input:
        expand('results/tsne/{sample}_tsne.csv', sample=samples),
        expand('results/umap/{sample}_umap.csv', sample=samples),
        expand('results/umap/img/{sample}_umap.png', sample=samples),
        expand('results/tsne/img/{sample}_tsne.png', sample=samples),
        expand('results/clusters/umap/{sample}_umap_clusters.csv', sample=samples),
        expand('results/clusters/tsne/{sample}_tsne_clusters.csv', sample=samples),
        expand('results/neo4j/{sample}/{file}', sample=samples,
          file=['cells.csv', 'genes.csv', 'cl_contains.csv', 'cl_isin.csv', 'cl_nodes.csv', 'expr_by.csv', 'expr_ess.csv']),
        'results/neo4j/db_command'

rule cluster:
    input:
        script = 'python/dbscan.py',
        umap   = 'results/umap/{sample}_umap.csv'
    output:
        umap = 'results/umap/img/{sample}_umap.png',
        clusters_umap = 'results/clusters/umap/{sample}_umap_clusters.csv'
    shell:
        "python {input.script} -umap_data {input.umap} -min_cluster_size {config[dbscan][min_cluster_size]} -img_umap {output.umap} -clusters_umap {output.clusters_umap}"

Here is what dbscan.py looks like:

import numpy as np
import matplotlib.pyplot as plt
plt.switch_backend('agg')
from hdbscan import HDBSCAN
import pathlib
import os
import nice_service as ns

def run_dbscan(args):
    print('running HDBSCAN')

    path_to_img = args['-img_umap']
    path_to_clusters = args['-clusters_umap']
    path_to_data = args['-umap_data']

    # If the output folders do not exist, create them.
    # (Note: the paths are single strings, so we create each parent
    # directory directly rather than looping over the string.)
    img_dir = os.path.dirname(path_to_img)
    pathlib.Path(img_dir).mkdir(parents=True, exist_ok=True)

    cluster_dir = os.path.dirname(path_to_clusters)
    pathlib.Path(cluster_dir).mkdir(parents=True, exist_ok=True)

    # Load the 2-D embedding and cluster it with HDBSCAN
    data = np.loadtxt(path_to_data, delimiter=",")
    db = HDBSCAN(min_cluster_size=int(args['-min_cluster_size'])).fit(data)

    # 'TRUE' where the point was assigned to cluster, 'FALSE' where not assigned
    # aka 'noise'
    core_samples_mask = np.zeros_like(db.labels_, dtype=bool)
    core_samples_mask[db.labels_ != -1] = True
    labels = db.labels_

    # Number of clusters in labels, ignoring noise if present.
    n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)
    print('Estimated number of clusters: %d' % n_clusters_)
    unique_labels = set(labels)
    colors = [plt.cm.Spectral(each)
              for each in np.linspace(0, 1, len(unique_labels))]
    for k, col in zip(unique_labels, colors):
        if k == -1:
            # Black used for noise.
            col = [0, 0, 0, 1]
        class_member_mask = (labels == k)
        xy = data[class_member_mask & core_samples_mask]
        plt.plot(xy[:, 0], xy[:, 1], '.', color=tuple(col), markersize=1)

    plt.title('Estimated number of clusters: %d' % n_clusters_)
    plt.savefig(path_to_img, dpi=500)
    np.savetxt(path_to_clusters, labels.astype(int), fmt='%i', delimiter=",")
    print('Finished running HDBSCAN algorithm')

if __name__ == '__main__':
    from sys import argv
    myargs = ns.getopts(argv)
    print(myargs)
    run_dbscan(myargs)
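
The nice_service module is not shown in the question. Judging from how myargs is indexed with keys such as '-umap_data', its getopts helper presumably collects '-flag value' pairs from argv into a dict. A minimal sketch of such a helper (an assumption about nice_service, not its actual code):

def getopts(argv):
    # Collect each '-flag value' pair from the command line into a dict,
    # e.g. ['dbscan.py', '-umap_data', 'a.csv'] -> {'-umap_data': 'a.csv'}
    opts = {}
    while argv:
        if argv[0].startswith('-') and len(argv) > 1:
            opts[argv[0]] = argv[1]
        argv = argv[1:]
    return opts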

The input files for rule cluster all exist and are correct. Yet somehow all files except one skip that rule.


1 Answer


The problem turned out to be that in the script for the last rule I forgot to write one of the output files: it produced 6 files instead of 7. What misled me is that Snakemake does not run all files through one rule and then move on to the next rule; instead it ran a single file through all the rules and then got stuck.
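
A minimal illustration of that failure mode (hypothetical rule and file names, not from the question): Snakemake checks every declared output after a job finishes, so a rule that promises two files but writes only one leaves Snakemake waiting for the second, and it eventually fails with a MissingOutputException while rules that consume it report waiting for missing files.

rule make_files:
    output:
        'out/a.txt',
        'out/b.txt'
    # Only out/a.txt is written, so Snakemake waits for out/b.txt
    # and fails after the latency period instead of proceeding.
    shell:
        "touch out/a.txt"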

Answered 2019-02-09T20:46:37.713