
I am using GraphSAGE for a node classification problem. I am new to GNNs, so my code is based on the tutorials for classification tasks with GraphSAGE and DGL [1][2]. Here is the code I am using; it is a 3-layer GNN with an input size of 20 and an output size of 2 (binary classification problem):

import torch
import torch.nn as nn
import torch.nn.functional as F
import dgl.nn as dglnn

class GraphSAGE(nn.Module):
    def __init__(self, in_feats, n_hidden, n_classes, n_layers,
                 activation, dropout, aggregator_type):
        super(GraphSAGE, self).__init__()
        self.layers = nn.ModuleList()
        self.dropout = nn.Dropout(dropout)
        self.activation = activation

        # input layer, n_layers - 1 hidden layers, and an output layer
        self.layers.append(dglnn.SAGEConv(in_feats, n_hidden, aggregator_type))
        for i in range(n_layers - 1):
            self.layers.append(dglnn.SAGEConv(n_hidden, n_hidden, aggregator_type))
        self.layers.append(dglnn.SAGEConv(n_hidden, n_classes, aggregator_type))

    def forward(self, graph, inputs):
        h = self.dropout(inputs)
        for l, layer in enumerate(self.layers):
            h = layer(graph, h)
            if l != len(self.layers) - 1:  # no activation/dropout after the output layer
                h = self.activation(h)
                h = self.dropout(h)
        return h

modelG = GraphSAGE(in_feats=n_features,   # 20
                   n_hidden=16,
                   n_classes=n_labels,    # 2
                   n_layers=3,
                   activation=F.relu,
                   dropout=0,
                   aggregator_type='mean')

opt = torch.optim.Adam(modelG.parameters())

for epoch in range(50):
    modelG.train()

    logits = modelG(g, node_features)

    loss = F.cross_entropy(logits[train_mask], node_labels[train_mask])

    acc = evaluate(modelG, g, node_features, node_labels, valid_mask)

    opt.zero_grad()
    loss.backward()
    opt.step()

    if epoch % 5 == 0:
        print('In epoch {}, loss: {}, val acc: {}'.format(epoch, loss, acc))
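
For completeness, evaluate is the accuracy helper from the DGL tutorial; this is roughly what mine looks like (a sketch following the tutorial's masked-accuracy pattern):

def evaluate(model, graph, features, labels, mask):
    # switch to eval mode (disables dropout) and score only the masked nodes
    model.eval()
    with torch.no_grad():
        logits = model(graph, features)
        logits = logits[mask]
        labels = labels[mask]
        _, indices = torch.max(logits, dim=1)
        correct = torch.sum(indices == labels)
        return correct.item() * 1.0 / len(labels)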

Every time I train the model (without changing anything), the performance varies a lot: accuracy ranges between 0.45 and 0.87. How can I make the results reproducible? I tried setting the PyTorch seed with torch.manual_seed(), setting the NumPy seed, and setting dropout to 0, but the results keep changing. Is this normal, or am I missing something?
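
For reference, this is roughly the seeding setup I am attempting; the dgl.seed() call and the cuDNN determinism flags are my own guesses from the docs, not something the tutorials cover:

import random

import numpy as np
import torch
import dgl

def set_seed(seed=42):
    # seed every RNG I am aware of
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    dgl.seed(seed)
    # my guess: force cuDNN into deterministic mode as well
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

set_seed(42)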
