I am currently looking for a mature GA library for Python 3.x, but the only GA libraries I can find are pyevolve and pygene, and both of them only support Python 2.x. I would appreciate any help.
6 Answers
DEAP (Distributed Evolutionary Algorithms in Python) supports both Python 2 and 3: http://code.google.com/p/deap
Disclaimer: I am one of the developers of DEAP.
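To give a feel for DEAP's toolbox-based API, here is a minimal OneMax sketch; the 20-bit chromosome and the operator settings are arbitrary choices for illustration, not library defaults:

import random
from deap import algorithms, base, creator, tools

# Maximize the number of ones in a 20-bit string (the classic OneMax problem).
creator.create("FitnessMax", base.Fitness, weights=(1.0,))
creator.create("Individual", list, fitness=creator.FitnessMax)

toolbox = base.Toolbox()
toolbox.register("attr_bool", random.randint, 0, 1)
toolbox.register("individual", tools.initRepeat, creator.Individual, toolbox.attr_bool, 20)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
toolbox.register("evaluate", lambda ind: (sum(ind),))
toolbox.register("mate", tools.cxTwoPoint)
toolbox.register("mutate", tools.mutFlipBit, indpb=0.05)
toolbox.register("select", tools.selTournament, tournsize=3)

pop = toolbox.population(n=50)
pop, _ = algorithms.eaSimple(pop, toolbox, cxpb=0.5, mutpb=0.2, ngen=40, verbose=False)
print(tools.selBest(pop, 1)[0])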
Not exactly a GA library, but Clinton Sheppard's book "Genetic Algorithms with Python" is quite useful, since it helps you build your own GA library tailored to your needs.
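As a taste of what rolling your own looks like, here is a bare-bones generational GA in that spirit; everything here (names, operators, rates) is an illustrative sketch, not code from the book:

import random

def evolve(fitness, gene_pool, length, pop_size=100, generations=200, mut_rate=0.05):
    # Random initial population of fixed-length gene lists.
    pop = [[random.choice(gene_pool) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]  # truncation selection keeps the better half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)  # single-point crossover
            child = a[:cut] + b[cut:]
            # Per-gene point mutation.
            child = [random.choice(gene_pool) if random.random() < mut_rate else g
                     for g in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Example: evolve the string "python3" from random characters.
target = "python3"
best = evolve(lambda g: sum(c == t for c, t in zip(g, target)),
              gene_pool="abcdefghijklmnopqrstuvwxyz0123456789",
              length=len(target))
print("".join(best))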
Check out PyGAD, an open-source Python 3 library for implementing the genetic algorithm and training machine learning algorithms.
The documentation is available at Read the Docs: https://pygad.readthedocs.io
Install it via pip: pip install pygad
Here is an example that uses PyGAD to optimize a linear model.
import pygad
import numpy
"""
Given the following function:
y = f(w1:w6) = w1x1 + w2x2 + w3x3 + w4x4 + w5x5 + w6x6
where (x1,x2,x3,x4,x5,x6)=(4,-2,3.5,5,-11,-4.7) and y=44
What are the best values for the 6 weights (w1 to w6)? We are going to use the genetic algorithm to optimize this function.
"""
function_inputs = [4,-2,3.5,5,-11,-4.7] # Function inputs.
desired_output = 44 # Function output.
def fitness_func(solution, solution_idx):
    # Calculate the fitness value of each solution in the current population.
    # The fitness function calculates the sum of products between each input and its corresponding weight.
    output = numpy.sum(solution*function_inputs)
    # Add a tiny constant so the division cannot fail when the output matches exactly.
    fitness = 1.0 / (numpy.abs(output - desired_output) + 0.000001)
    return fitness
fitness_function = fitness_func
num_generations = 100 # Number of generations.
num_parents_mating = 7 # Number of solutions to be selected as parents in the mating pool.
# To prepare the initial population, there are 2 ways:
# 1) Prepare it yourself and pass it to the initial_population parameter. This way is useful when the user wants to start the genetic algorithm with a custom initial population (a short sketch of this appears after the full example).
# 2) Assign valid integer values to the sol_per_pop and num_genes parameters. If the initial_population parameter is given, then sol_per_pop and num_genes are ignored.
sol_per_pop = 50 # Number of solutions in the population.
num_genes = len(function_inputs)
init_range_low = -2
init_range_high = 5
parent_selection_type = "sss" # Type of parent selection.
keep_parents = 7 # Number of parents to keep in the next population. -1 means keep all parents and 0 means keep nothing.
crossover_type = "single_point" # Type of the crossover operator.
# Parameters of the mutation operation.
mutation_type = "random" # Type of the mutation operator.
mutation_percent_genes = 10 # Percentage of genes to mutate. This parameter has no action if the parameter mutation_num_genes exists or when mutation_type is None.
last_fitness = 0
def callback_generation(ga_instance):
    global last_fitness
    print("Generation = {generation}".format(generation=ga_instance.generations_completed))
    print("Fitness = {fitness}".format(fitness=ga_instance.best_solution()[1]))
    print("Change = {change}".format(change=ga_instance.best_solution()[1] - last_fitness))
    # Remember this generation's best fitness so the next callback can report the change.
    last_fitness = ga_instance.best_solution()[1]
# Creating an instance of the GA class inside the ga module. Some parameters are initialized within the constructor.
ga_instance = pygad.GA(num_generations=num_generations,
                       num_parents_mating=num_parents_mating,
                       fitness_func=fitness_function,
                       sol_per_pop=sol_per_pop,
                       num_genes=num_genes,
                       init_range_low=init_range_low,
                       init_range_high=init_range_high,
                       parent_selection_type=parent_selection_type,
                       keep_parents=keep_parents,
                       crossover_type=crossover_type,
                       mutation_type=mutation_type,
                       mutation_percent_genes=mutation_percent_genes,
                       callback_generation=callback_generation)
# Running the GA to optimize the parameters of the function.
ga_instance.run()
# After the generations complete, a plot is shown that summarizes how the fitness values evolve over generations.
ga_instance.plot_result()
# Returning the details of the best solution.
solution, solution_fitness, solution_idx = ga_instance.best_solution()
print("Parameters of the best solution : {solution}".format(solution=solution))
print("Fitness value of the best solution = {solution_fitness}".format(solution_fitness=solution_fitness))
print("Index of the best solution : {solution_idx}".format(solution_idx=solution_idx))
prediction = numpy.sum(numpy.array(function_inputs)*solution)
print("Predicted output based on the best solution : {prediction}".format(prediction=prediction))
if ga_instance.best_solution_generation != -1:
print("Best fitness value reached after {best_solution_generation} generations.".format(best_solution_generation=ga_instance.best_solution_generation))
# Saving the GA instance.
filename = 'genetic' # The filename to which the instance is saved. The name is without extension.
ga_instance.save(filename=filename)
# Loading the saved GA instance.
loaded_ga_instance = pygad.load(filename=filename)
loaded_ga_instance.plot_result()
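As mentioned in the comments inside the example, you can also seed the run with your own starting population through PyGAD's initial_population parameter instead of sol_per_pop and num_genes. A minimal sketch, reusing the names defined above (the uniform range simply mirrors init_range_low/init_range_high):

# Build a custom starting population with one row per solution and one column per gene.
custom_population = numpy.random.uniform(low=init_range_low,
                                         high=init_range_high,
                                         size=(sol_per_pop, num_genes))
ga_instance = pygad.GA(num_generations=num_generations,
                       num_parents_mating=num_parents_mating,
                       fitness_func=fitness_function,
                       initial_population=custom_population)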
Here is a package that works out of the box and can be used for any problem:
scikit-opt
https://github.com/guofei9987/scikit-opt
Step 1: Define your problem.
import numpy as np
def schaffer(p):
    '''
    This function has plenty of local minima with strong oscillations;
    the global minimum is at (0, 0) with value 0.
    https://en.wikipedia.org/wiki/Test_functions_for_optimization
    '''
    x1, x2 = p
    part1 = np.square(x1) - np.square(x2)
    part2 = np.square(x1) + np.square(x2)
    return 0.5 + (np.square(np.sin(part1)) - 0.5) / np.square(1 + 0.001 * part2)
Step 2: Run the genetic algorithm.
from sko.GA import GA
ga = GA(func=schaffer, n_dim=2, size_pop=50, max_iter=800, prob_mut=0.001, lb=[-1, -1], ub=[1, 1], precision=1e-7)
best_x, best_y = ga.run()
print('best_x:', best_x, '\n', 'best_y:', best_y)
Hey, this is basically a plug, but I think you will find it useful!
EasyGA is a Python package designed to provide an easy-to-use genetic algorithm. The package works out of the box, while also letting you customize features as needed (see the sketch after the example below).
Here is the wiki, which does the best job of explaining how the process works: https://github.com/danielwilczak101/EasyGA/wiki
This is all you need to get it working:
pip3 install EasyGA
And some example code:
import EasyGA
# Create the Genetic algorithm
ga = EasyGA.GA()
# Evolve the genetic algorithm
ga.evolve()
# Print your default genetic algorithm
ga.print_generation()
ga.print_population()
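Since the package is billed as customizable, here is a hedged sketch of a customized run based on the attribute-style configuration described in the EasyGA wiki; attribute names such as gene_impl and fitness_function_impl are taken from that wiki, so double-check them against your installed version:

import random
import EasyGA

ga = EasyGA.GA()

# How each gene is generated: a random integer from 1 to 10 (assumed wiki-style API).
ga.gene_impl = lambda: random.randint(1, 10)
ga.chromosome_length = 5
ga.population_size = 10
ga.generation_goal = 20

def count_fives(chromosome):
    # Custom fitness: one point for every gene whose value is 5.
    return sum(1 for gene in chromosome.gene_list if gene.value == 5)

ga.fitness_function_impl = count_fives

ga.evolve()
ga.print_population()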