
I have a script containing two classes. (I've obviously cut out a lot of material that I believe is irrelevant to the error I'm dealing with.) As I mentioned in this question, the end goal is to create a decision tree.

Unfortunately I'm getting an infinite loop, and I'm having trouble pinpointing why. I've identified the line of code that goes wrong, but I would have thought that the iterator and the list I'm appending to were separate objects. Does list's .append have some side effect I'm not aware of, or am I making some other obvious mistake?

class Dataset:
    individuals = [] #Becomes a list of dictionaries, in which each dictionary is a row from the CSV with the headers as keys
    def field_set(self): #Returns a list of the fields in individuals[] that can be used to split the data (i.e. have more than one value amongst the individuals)
    def classified(self, predicted_value): #Returns True if all the individuals have the same value for predicted_value
    def fields_exhausted(self, predicted_value): #Returns True if all the individuals are identical except for predicted_value
    def lowest_entropy_value(self, predicted_value): #Returns the field that will reduce entropy (http://en.wikipedia.org/wiki/Entropy_%28information_theory%29) the most
    def __init__(self, individuals=[]):

class Node:
    ds = Dataset() #The data that is associated with this Node
    links = [] #List of Nodes, the offspring Nodes of this node
    level = 0 #Tree depth of this Node
    split_value = '' #Field used to split out this Node from the parent node
    node_value = '' #Value used to split out this Node from the parent Node

    def split_dataset(self, split_value): #Splits the dataset into a series of smaller datasets, each of which has a unique value for split_value.  Then creates subnodes to store these datasets.
        fields = [] #List of options for split_value amongst the individuals
        datasets = {} #Dictionary of Datasets, each one with a value from fields[] as its key
        for field in self.ds.field_set()[split_value]: #Populates the keys of fields[]
            fields.append(field)
            datasets[field] = Dataset()
        for i in self.ds.individuals: #Adds individuals to the datasets.dataset that matches their result for split_value
            datasets[i[split_value]].individuals.append(i) #<---Causes an infinite loop on the second hit
        for field in fields: #Creates subnodes from each of the datasets.Dataset options
            self.add_subnode(datasets[field],split_value,field)

    def add_subnode(self, dataset, split_value='', node_value=''):
    def __init__(self, level, dataset=Dataset()):

My initialisation code is currently:

if __name__ == '__main__':
    filename = (sys.argv[1]) #Takes in a CSV file
    predicted_value = "# class" #Identifies the field from the CSV file that should be predicted
    base_dataset = parse_csv(filename) #Turns the CSV file into a list of lists
    parsed_dataset = individual_list(base_dataset) #Turns the list of lists into a list of dictionaries
    root = Node(0, Dataset(parsed_dataset)) #Creates a root node, passing it the full dataset
    root.split_dataset(root.ds.lowest_entropy_value(predicted_value)) #Performs the first split, creating multiple subnodes
    n = root.links[0] 
    n.split_dataset(n.ds.lowest_entropy_value(predicted_value)) #Attempts to split the first subnode.

2 Answers


I suspect that you are appending to the same list that you are iterating over, causing it to grow before the iterator can reach its end. Try iterating over a copy of the list instead:

for i in list(self.ds.individuals):
    datasets[i[split_value]].individuals.append(i) 
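
To see why appending to the list you are iterating never terminates, here is a minimal, self-contained sketch (not taken from the original script) using a stand-in list; the broken version is left commented out so the snippet runs safely:

shared = [1, 2, 3]          # stand-in for the list that ends up shared between the datasets
# for x in shared:          # each pass adds an element, so the iterator never reaches the end
#     shared.append(x)      # -> infinite loop

for x in list(shared):      # iterating a copy visits only the original three elements
    shared.append(x)
print(shared)               # [1, 2, 3, 1, 2, 3]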
Answered 2010-05-13T23:34:12.793
class Dataset:
    individuals = []

Suspicious. Unless you want all of your Dataset instances to share a single static member list, you shouldn't do this. If you set self.individuals = something in __init__, then you don't need to set individuals here at all.

    def __init__(self, individuals=[]):

Still suspicious. Are you assigning the individuals argument to self.individuals? If so, you are assigning the very same list, created once when the function was defined, to every Dataset constructed with the default argument. Add an item to one Dataset's list, and every other Dataset created without an explicit individuals argument gets that item too.
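
A common fix for both issues, sketched here only since the full __init__ body isn't shown in the question, is to drop the class-level attribute and use None as the default:

class Dataset:
    def __init__(self, individuals=None):
        # None as the default avoids sharing one list across all default-constructed instances
        self.individuals = individuals if individuals is not None else []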

Similarly:

class Node:
    def __init__(self, level, dataset=Dataset()):

All Nodes created without an explicit dataset argument will receive the exact same default Dataset instance.

This is the mutable default argument problem, and the kind of shared-state corruption it produces seems very likely to be the cause of your infinite loop.
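
The same None-default idiom applies to Node; a minimal sketch, assuming the class-level attributes shown in the question are meant to be per-instance state:

class Node:
    def __init__(self, level, dataset=None):
        self.ds = dataset if dataset is not None else Dataset()  # a fresh Dataset per Node
        self.links = []          # per-instance list of child Nodes
        self.level = level
        self.split_value = ''
        self.node_value = ''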

Answered 2010-05-13T23:55:52.197