0

周末我尝试构建一个神经网络,它使用进化算法进行了改进。我在 openai( https://www.openai.com/ )的 Cartpole 环境中运行了 5000 代,但它并没有得到很好的改进。神经网络有 4 个输入,1 个带有 3 个单元的隐藏层,1 个输出,网络使用 tanH 作为激活函数。每一代有100个个体,其中5个被选为下一代,有20%的变异几率。这是更好理解的代码:

import operator
import gym
import math
import random
import numpy
import matplotlib.pyplot as plt

env = gym.make('CartPole-v0')

generations = 100
input_units = 4
Hidden_units = 3
output_units = 1
individuals = 100

fitest1 = []
fitest2 = []

def Neural_Network(x, weights1, weights2):
    global output
    output = list(map(operator.mul, x, weights1))
    output = numpy.tanh(output)
    output = list(map(operator.mul, output, weights2))
    output = sum(output)
    return(output)

weights1 = [[random.random() for i in range(input_units*Hidden_units)] for j in range(individuals)]
weights2 = [[random.random() for i in range(Hidden_units*output_units)] for j in range(individuals)]

fit_plot = []

for g in range(generations):
    print('generation:',g+1)
    fitness=[0 for f in range(individuals)]
    prev_obs = []
    observation = env.reset()
    for w in weights1:
        print('        individual ',weights1.index(w)+1, ' of ', len(weights1))
        env.reset()
        for t in range(500):
            #env.render()
            Neural_Network(observation, weights1[weights1.index(w)], weights2[weights1.index(w)])
            action = output < 0.5
            observation, reward, done, info = env.step(action)
            fitness[weights1.index(w)]+=reward
            if done:
                break
        print('        individual fitness:', fitness[weights1.index(w)])
    print('min fitness:', min(fitness))
    print('max fitness:', max(fitness))
    print('average fitness:', sum(fitness)/len(fitness))
    fit_plot.append(sum(fitness)/len(fitness))
    for f in range(10):
        fitest1.append(weights1[fitness.index(max(fitness))])
        fitest2.append(weights2[fitness.index(max(fitness))])
        fitness[fitness.index(max(fitness))] = -1000000000


    for x in range(len(weights1)):
        for y in range(len(weights1[x])):
            weights1[x][y]=random.choice(fitest1)[y]
            if random.randint(1,5) == 1:
                weights1[random.randint(0, len(weights1)-1)][random.randint(0, len(weights1[0])-1)] += random.choice([0.1, -0.1])

    for x in range(len(weights2)):
        for y in range(len(weights2[x])):
            weights2[x][y]=random.choice(fitest2)[y]
            if random.randint(1,5) == 1:
                weights1[random.randint(0, len(weights1)-1)][random.randint(0, len(weights1[0])-1)] += random.choice([0.1, -0.1])

plt.axis([0,generations,0,100])
plt.ylabel('fitness')
plt.xlabel('generations')
plt.plot(range(0,generations), fit_plot)
plt.show()

env.reset()
for t in range(100):
    env.render()
    Neural_Network(observation, fitest1[0], fitest2[0])
    action = output < 0.5
    observation, reward, done, info = env.step(action)
    if done:
        break

如果有人想知道,几代人的平均适应度图(这次我只运行了 100 代)如您所见,算法没有改进

如果还有任何问题,请问。

4

2 回答 2

0

突变的可能性似乎非常高,有 20%。尝试将其降低到 1-5%,到目前为止,这通常会从我的实验中产生更好的结果。

于 2020-05-21T11:07:36.020 回答
0

我的观点是,在进化算法中,你没有在 EA 结束时选择正确的个体。确保您为新一代选择了最好的 2 个人(可以只与一个人合作,但我们希望做得更好:))。这应该会改善预期的结果:)

于 2018-01-10T11:44:59.110 回答